Department of Applied Physics, University of Tokyo, Hongo 7-3-1, 113-8656, Japan

Recently, the two-site Kitaev model hosting Majorana edge states was experimentally realized in double quantum dots. In this context, we construct two-band effective models describing the Majorana edge states of a finite-length Kitaev chain by using the isospectral matrix reduction method. We analytically estimate the robustness of the Majorana edge states as a function of the model parameters. We also study the effects of coupling to an environment based on non-Hermitian Hamiltonians derived from the Lindblad equation. We study three types of dissipation: a local one, an adjacent one and a global one. It is found that the Majorana zero-energy edge states acquire nonzero energy such as E∝±( iγ) ^L for the local dissipation, where γ is the magnitude of the dissipation and L is the length of the chain. On the other hand, the Majorana zero-energy edge states acquire nonzero energy such as E∝± iγ irrespective of the length L for the global dissipation. Hence, the Majorana edge states are robust against the local dissipation but not against the global one. Our results will be useful for future studies on Majorana edge states based on quantum dots.

Robustness of Majorana edge states of short-length Kitaev chains coupled with environment
Motohiko Ezawa
January 14, 2024

§ INTRODUCTION

Majorana fermions are key ingredients for topological quantum computation<cit.>. The Majorana edge states form a nonlocal qubit, which is robust against local perturbations. Thus, a qubit based on Majorana fermions would resolve the problem of decoherence in quantum computation. Majorana fermions are materialized in topological superconductors<cit.>. The simplest model of a topological superconductor hosting Majorana fermions is the Kitaev chain<cit.>. Despite the simplicity of the model, it is hard to materialize because it is difficult to realize p-wave superconducting order on a lattice. Recently, the two-site Kitaev chain was experimentally realized in double quantum dots<cit.>. In addition, the three-site Kitaev chain was experimentally realized in a nanowire device<cit.>. These experiments have stimulated studies on the minimal Kitaev chain based on double quantum dots<cit.> and on short chains of quantum dots<cit.>. However, the Majorana edge states in the two-site Kitaev chain are not topologically protected. Indeed, precise tuning of the model parameters is needed so that the Majorana edge states sit exactly at zero energy. On the other hand, the Majorana edge states are robust if the Kitaev chain is long enough, although it is hard to increase the number of quantum dots with current technology. There are studies on the condition for the Majorana edge states to have exactly zero energy<cit.>. However, there is no construction of a two-band effective model based on the Majorana edge states for a finite-length chain. It would therefore be valuable to estimate the robustness of the Majorana edge states for a finite-length Kitaev chain. It has been discussed<cit.> that the robustness of the Majorana zero-energy states increases exponentially with the length of the Kitaev chain.

The platform hosting the Majorana fermions, such as a quantum-dot system, interacts with other systems such as a substrate. In general, the coupling to a bath makes the system an open quantum system, which is commonly analyzed with the Lindblad equation<cit.>.
The short-time dynamics is well described by a non-Hermitian Hamiltonian derived from the Lindblad equation<cit.>. A Kitaev chain with loss and gain has been studied in the context of non-Hermitian Hamiltonians<cit.>.

In this paper, we construct effective two-band models for the Majorana edge states by using the isospectral matrix reduction method<cit.>. In addition, we study the effect of the coupling with an environment based on the non-Hermitian Hamiltonian formalism. We study three cases of dissipation. The first one is a local dissipation, where particles enter and leave a single site via an environment as in Fig.<ref>(a). The second one is an adjacent dissipation, where there is hopping between nearest-neighbor sites via an environment as in Fig.<ref>(b). The last one is a global dissipation, where every site is coherently coupled via an environment as in Fig.<ref>(c). We find that the Majorana edge states are robust against the local dissipation because E∝±( iγ) ^L, but not against the global dissipation because E∝± iγ irrespective of the length L, when the amplitude of the dissipation γ is small enough.

§ KITAEV CHAIN

The Kitaev p-wave superconductor model is defined on the 1D lattice as<cit.> Ĥ=-μ∑_x=1^Lc_x^†c_x-t∑_x=1^L-1( c_x^†c_x+1+c_x+1^†c_x) -∑_x=1^L-1( Δ c_xc_x+1+Δ c_x+1^†c_x^†), where μ is the chemical potential, t>0 is the nearest-neighbor hopping strength, Δ >0 is the p-wave pairing amplitude of the superconductor, and L is the length of the chain.

§.§ Majorana representation

The system is topological for |μ| <2t, where Majorana edge states emerge at both edges of the chain. We rewrite the fermion operator in terms of two Majorana operators as c_x=( γ _B,x+iγ _A,x) /2, c_x^†=( γ _B,x-iγ _A,x) /2, where these Majorana operators satisfy γ _α ,x=γ _α ,x^†, {γ _α ,x,γ _α ^',x^'} =2δ _αα ^'δ _xx^' with α =A,B. The Hamiltonian is rewritten in terms of Majorana operators as Ĥ=-μ/2∑_x=1^L( 1+iγ _B,xγ _A,x) -i∑_x=1^L-1[ ( Δ +t) γ _B,xγ _A,x+1+( Δ -t) γ _A,xγ _B,x+1]. When μ =0 and t=Δ≠ 0, where the system is topological, the Hamiltonian is simplified as Ĥ=-2it∑_x=1^L-1γ _B,xγ _A,x+1=4t∑_x=1^L-1( d_x^†d_x-1/2), where d_x=( γ _A,x+1+iγ _B,x) /2, d_x^†=( γ _A,x+1-iγ _B,x) /2. The system is exactly solvable. The ground states are given by d_x|0⟩ _d=0 with the energy -2t, whose excited states are |1⟩ _d=d_x^†|0⟩ _d with the energy 2t. They constitute the bulk band. Apart from them, because this Hamiltonian does not contain γ _A,1 and γ _B,L, there exist two Majorana states perfectly localized at the two edge sites and exactly at zero energy. A non-local fermion operator is defined from them as f=( γ _A,1+iγ _B,L) /2. A qubit (| 0⟩ _qubit,| 1⟩ _qubit) is constructed such that f| 0⟩ _qubit=0 and | 1⟩ _qubit=f^†| 0⟩ _qubit. It is interesting to note that the Majorana chain can support a qubit even with L=2.

We show the energy spectrum of (<ref>) as a function of Δ /t for L=2,3,4,5,6 in Fig.<ref>(a1)∼(a5) and that as a function of μ /t in Fig.<ref>(b1)∼(b5). Especially, the energy has a linear dependence E=|Δ -t| for the two-site Kitaev chain with μ =0 as shown in Fig.<ref>(a1). On the other hand, the energy has a parabolic dependence E∝μ ^2 for the two-site Kitaev chain with Δ =t as shown in Fig.<ref>(b1). The real part of the energy is exactly zero for the odd-length models as shown in Fig.<ref>(a2) and (a4).
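To make the structure above concrete, the following is a minimal numerical sketch (not part of the original paper) that builds the BdG matrix of the open Kitaev chain in the particle-hole basis (c_1,…,c_L,c_1^†,…,c_L^†) and confirms the two exact zero modes at μ=0, Δ=t, as well as the growth of the edge-mode splitting when μ is switched on; the function name and parameter choices are ours.

import numpy as np

def kitaev_bdg(L, mu, t, Delta):
    """BdG matrix of the open Kitaev chain in the basis
    (c_1..c_L, c_1^dag..c_L^dag); for L=2 and complex mu, t it
    reproduces the 4x4 matrix written explicitly later in the text."""
    h = -mu * np.eye(L, dtype=complex)
    d = np.zeros((L, L), dtype=complex)
    for x in range(L - 1):
        h[x, x + 1] = h[x + 1, x] = -t              # nearest-neighbour hopping
        d[x, x + 1], d[x + 1, x] = Delta, -Delta    # antisymmetric p-wave pairing
    return np.block([[h, d], [d.T, -h]])

t = 1.0
for L in (2, 3, 4, 5, 6):
    E = np.sort(np.abs(np.linalg.eigvalsh(kitaev_bdg(L, 0.0, t, t).real)))
    print(L, E[:2])          # two exact zero-energy Majorana edge modes

for L in (2, 3, 4):          # the edge-mode splitting grows roughly as mu^L
    for mu in (0.1, 0.2):
        E = np.abs(np.linalg.eigvalsh(kitaev_bdg(L, mu, t, t).real))
        print(L, mu, E.min())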
We will analytically verify these results by deriving an effective two-band model in the following.

§ OPEN QUANTUM SYSTEM

Effects of the coupling between the system and an environment are described by the Lindblad equation<cit.> for the density matrix ρ as dρ/dt=-i/ħ[ Ĥ,ρ] +( ∑_αL_αρ L_α^†-1/2{ L_α^†L_α,ρ}), where L_α are the Lindblad operators describing the dissipation. This equation is rewritten in the form of dρ/dt=-i/ħ( Ĥ_effρ -ρĤ_eff^†) +∑_αL_αρ L_α^†, where Ĥ_eff is a non-Hermitian effective Hamiltonian defined by Ĥ_eff≡Ĥ+Ĥ_dissipation with the dissipation Hamiltonian Ĥ_dissipation≡ -iħ/2∑_αL_α^†L_α. It describes the short-time dynamics<cit.>. We consider three types of dissipation as illustrated in Fig.<ref>.

§.§ Local dissipation

First, we study the local dissipation<cit.>, where the Lindblad operators are given by L_x^-=√(γ _-)c_x, L_x^+=√(γ _+) c_x^†, where γ _± represent the dissipation. They describe the effect that a particle enters and leaves a single site. The corresponding dissipation Hamiltonian reads Ĥ_dissipation=-iħ/2∑_x=1^L[ γ c_x^†c_x+γ _+], where γ≡γ _--γ _+. By introducing a complex chemical potential μ̃=μ +iħ/2γ, the effect of the local dissipation is fully taken into account. We show the energy spectrum as a function of γ in Fig.<ref>. The real part of the energy for odd length is exactly zero as shown in Fig.<ref>(a2) and (a4). On the other hand, the imaginary part of the energy for even length is exactly zero if the dissipation is smaller than a certain critical value, |γ| <|γ _critical|. The flat region of the zero-energy Majorana edge states is expanded for longer chains. We derive these properties based on an effective model with the use of the isospectral matrix reduction method in Sec.<ref>.

§.§ Adjacent dissipation

Next, we study the adjacent dissipation<cit.>, where the Lindblad operators are given by L_x^-=√(γ _-/2)( c_x+c_x+1), L_x^+= √(γ _+/2)( c_x^†+c_x+1^†). They describe the effect that a particle can hop between nearest-neighbor sites via an environment. The corresponding dissipation Hamiltonian reads Ĥ_dissipation=-iħ/2∑_x=1^L-1[ γ/2( c_x^†c_x+c_x^†c_x+1+c_x+1^†c_x+c_x+1^†c_x+1) +γ _+]. The effect of the adjacent dissipation is taken into account by considering the complex chemical potential and the complex hopping defined by μ̃=μ +iħ/2γ, t̃=t+iħ/2γ. We show the energy spectrum as a function of γ in Fig.<ref>. The imaginary part of the energy acquires a nonzero value as a linear function of γ for L=2 as shown in Fig.<ref>(b1). This is because the two Majorana edge states couple directly via the adjacent dissipation term. It follows from Fig.<ref> that deviations from the zero energy of the Majorana edge states become smaller for longer chains. Comparing Figs.<ref> and <ref>, the Majorana edge states are found to be more fragile against the adjacent dissipation than against the local one. We derive these properties based on an effective model with the use of the isospectral matrix reduction method in Sec.<ref>.

§.§ Global dissipation

Finally, we study the global dissipation, where the Lindblad operators are given by L^-=√(γ _-)∑_x=1^Lc_x, L^+=√(γ _+)∑_x=1^Lc_x^†. They describe the effect that all particles are equally coupled via an environment. The corresponding dissipation Hamiltonian reads Ĥ_dissipation = -iħ/2γ∑_x,x^'=1^L( c_x^†c_x+c_x^†c_x^'+c_x^'^†c_x+c_x^'^†c_x^') +γ _+, which is highly nonlocal. We show the energy spectrum as a function of γ in Fig.<ref> for L=2,3,4,5.
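The complex-parameter substitution described above is simple to explore numerically. The sketch below (our own, not from the paper) reuses the kitaev_bdg helper from the previous sketch and diagonalizes the resulting non-Hermitian matrix for the local dissipation with ħ set to 1; the adjacent and global cases discussed in this section would be treated analogously by modifying the matrix.

import numpy as np
# reuses kitaev_bdg() from the sketch above; hbar = 1

def local_dissipation_spectrum(L, gamma, t=1.0, mu=0.0):
    """Spectrum of the non-Hermitian chain with local dissipation,
    which enters only through mu_tilde = mu + i*gamma/2 (Delta = t)."""
    return np.linalg.eigvals(kitaev_bdg(L, mu + 0.5j * gamma, t, t))

for L in (2, 3, 4, 5, 6):
    E = local_dissipation_spectrum(L, gamma=0.2)
    E0 = E[np.argmin(np.abs(E))]     # eigenvalue closest to zero (edge mode)
    print(L, abs(E0))                # shrinks rapidly with L for small gamma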
It is found from Fig.<ref>(b1)∼(b5) that the imaginary part of the energy is linear as a function of γ irrespective of the length of the chain, although the slope becomes smaller for a longer chain. Hence, the Majorana edge states are not robust in the presence of the global dissipation even for a long Kitaev chain. This is natural because the global dissipation term couples the Majorana edge states directly. We derive these properties based on an effective model with the use of the isospectral matrix reduction method in Sec.<ref>.

§ MINIMAL KITAEV CHAIN

In the presence of the local dissipation or the adjacent dissipation, the Hamiltonian of the two-site Kitaev chain reads Ĥ=([ c_1^† c_2^† c_1 c_2 ] ) H([ c_1; c_2; c_1^†; c_2^† ] ), with H=([ -μ̃, -t̃, 0, Δ; -t̃, -μ̃, -Δ, 0; 0, -Δ, μ̃, t̃; Δ, 0, t̃, μ̃ ] ), where we use the complex chemical potential μ̃ in (<ref>) for the local dissipation, and the complex chemical potential μ̃ and the complex hopping t̃ in (<ref>) for the adjacent dissipation. The energy spectrum is exactly obtained as E=±t̃±√(Δ ^2+μ̃^2). We note that the adjacent dissipation and the global dissipation are identical for L=2.

§ ISOSPECTRAL MATRIX REDUCTION METHOD

It is impossible to exactly diagonalize the Hamiltonian matrix hosting Majorana edge states for L≥ 3, except at the exactly solvable parameters μ =0, Δ =± t of the Kitaev chain. We derive an effective two-band model near the zero energy describing the Majorana edge states based on the isospectral matrix reduction method<cit.>. We first diagonalize the Hamiltonian matrix for μ =0, Δ =t as H^' ≡ UHU^-1= diag.{ 0,0,-2t,-2t,⋯ ,-2t,2t,2t,⋯ ,2t}. The first two zero-energy states correspond to the Majorana edge states. Then we divide H^' into the form of H^'=([ H_1 V; V^† H_2 ] ), where H_1 is a 2× 2 matrix and H_2 is a (2L-2)× (2L-2) matrix. The eigenequation reads ([ H_1 V; V^† H_2 ] ) ([ ψ _1; ψ _2 ] ) =E([ ψ _1; ψ _2 ] ), from which we derive ψ _2=( E-H_2) ^-1V^†ψ _1. Then, we obtain a single nonlinear eigenequation for ψ _1 as H̃( E) ψ _1=Eψ _1, where H̃( E) =H_1+V( E-H_2) ^-1V^†. Exact solutions may be obtained by solving the nonlinear equation. However, it is practically impossible to solve it because it is an algebraic equation of the order of E^2L. Instead, we seek a solution in the vicinity of the zero energy, where the Hamiltonian is well approximated by H_eff≡ H_1-VH_2^-1V^†. The second term is written in the form of -VH_2^-1V^†=Fσ _x. We explicitly determine H_eff in what follows.

§.§ Hermitian model

First, we derive an effective two-band model for the Hermitian system. We find H_1=( t-Δ) σ _x for L=2 and H_1=0 for L≥ 3. On the other hand, we find F=1/( Δ +t) ^L-1∑_m=0^⌊ L/2⌋([ L-m; m ] ) μ ^L-2m( Δ ^2-t^2) ^2m for L≥ 2. The energy is given by E=± F. If μ =0, we have F=0 for odd L, which well describes the energy near the zero energy as shown in Fig.<ref>(a2) and (a4). On the other hand, we find F=-( Δ -t) ^L/2/( Δ +t) ^L/2+1 ∝( Δ -t) ^L/2 for even L, which well describes the energy near the zero energy as shown in Fig.<ref>(a1), (a3) and (a5). It is almost zero for small |Δ -t| and large L, where the Majorana edge states are robust. If Δ =t, we have F=-μ ^L/( 2t) ^L-1∝μ ^L. It well fits the energy near the zero energy as shown in Fig.<ref>(b1)∼(b5). It is almost zero for small μ and large L, where the Majorana edge states are robust.

§.§ Local dissipation

Next, we derive effective two-band models for the local dissipation with Δ =t and μ =0. We find H_eff=-( iγ) ^L/( 2t) ^L-1 for L≥ 2.
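As an aside, the reduction itself is easy to implement numerically. The following schematic sketch (ours, with assumed variable names, again reusing kitaev_bdg from the earlier sketch) rotates a perturbed Hamiltonian into the eigenbasis of the exactly solvable point and evaluates H_eff = H_1 - V H_2^{-1} V^†.

import numpy as np
# reuses kitaev_bdg() from the earlier sketch

def effective_two_band(H, U):
    """H_eff = H_1 - V H_2^{-1} V^dag, where U holds the eigenvectors of the
    exactly solvable point as rows, with the two zero modes placed first."""
    Hp = U @ H @ np.linalg.inv(U)
    H1, V = Hp[:2, :2], Hp[:2, 2:]
    Vd, H2 = Hp[2:, :2], Hp[2:, 2:]
    return H1 - V @ np.linalg.solve(H2, Vd)

L, t = 4, 1.0
w, vec = np.linalg.eigh(kitaev_bdg(L, 0.0, t, t).real)   # mu = 0, Delta = t
U = vec[:, np.argsort(np.abs(w))].conj().T               # zero modes first

H_mu = kitaev_bdg(L, 0.3, t, t).real                     # small mu perturbation
print(np.linalg.eigvals(effective_two_band(H_mu, U)))    # eigenvalues ~ +/-F, F ∝ mu^L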
The formula H_eff=-( iγ) ^L/( 2t) ^L-1 well explains the fact that the real [imaginary] part of the energy is zero for even [odd] L as shown in Fig.<ref>(a2) and (a4) [(b1) and (b3)]. Hence the Majorana edge states become robust for a long chain.

§.§ Adjacent dissipation

We derive effective two-band models for the adjacent dissipation with μ =0. We find H_1=( t+iγ -Δ) σ _x for L=2 and H_1=0 for L≥ 3. On the other hand, F=-∑_m=0^⌊ L/2⌋([ L-m; m ] ) ( iγ) ^L-2m/( Δ +t) ^L-1 ( Δ ^2-( t+iγ) ^2) ^m for L≥ 2. For small γ (|γ /t|≪ 1), they are explicitly given by
F=-iγ -γ ^2/2t+⋯ for L=2,
F=-3iγ ^3/4t^2+γ ^2/2t+⋯ for L=3,
F=-iγ ^3/t^2-γ ^2/2t+⋯ for L=4,
F=-iγ ^3/4t^2-γ ^4/4t^3+⋯ for L=5,
F=iγ ^3/4t^2+γ ^4/t^3+⋯ for L=6.
In general, F=iaγ ^A+bγ ^B+⋯, with certain real numbers a and b, where A=2⌊( L+1) /4⌋ +1, B=2⌊( L-1) /4⌋ +2. The robustness of the Majorana edge modes is enhanced when the chain length becomes longer. However, they are more fragile against the adjacent dissipation than against the local one.

§.§ Global dissipation

Finally, we derive effective two-band models for the global dissipation with Δ =t and μ =0. We find H_1=iγσ _x, F=( L-1) γ ^2/2t+i( L-1) γσ _x for L≥ 2. For small γ (|γ /t|≪ 1), the Hamiltonian is approximated as H_eff≃( iγ +( L-1) γ ^2/2t ) σ _x. The real part of the energy is parabolic as a function of γ irrespective of the length L as shown in Fig.<ref>(a1)∼(a5), while the imaginary part of the energy is linear as a function of γ irrespective of the length L as shown in Fig.<ref>(b1)∼(b5). Hence, the robustness of the Majorana edge states is not enhanced even if we use a chain with long length L in the presence of the global dissipation. Actually, the energy spectrum in the vicinity of the zero energy is exactly obtained without using the isospectral matrix reduction method at the point μ =0 and Δ =t, whose result reads F=i( L/2) γ +2t+i√(( ( L/2) γ) ^2-2i( L-2) tγ -4t^2). It gives the same result as in (<ref>) for small γ (|γ /t|≪ 1).

§ DISCUSSION

We have constructed two-band effective models describing the Majorana edge states in the presence of three types of dissipation. We have found an even-odd effect on the stability of the Majorana edge states. We have also found that the robustness of the Majorana edge states is affected by the type of dissipation, where the global dissipation is detrimental to the stability of the Majorana edge states. The local dissipation may be realized when the system couples with a substrate where the real-space coordinate is a good quantum number. On the other hand, the global dissipation may be realized when the system couples with a substrate where the momentum coordinate is a good quantum number. Our results will be useful for the experimental realization of the Kitaev chain based on quantum dots.

This work is supported by CREST, JST (Grants No. JPMJCR20T2) and Grants-in-Aid for Scientific Research from MEXT KAKENHI (Grant No. 23H00171).

Brav2 S. B. Bravyi and A. Yu. Kitaev, Fermionic Quantum Computation, Annals of Physics 298, 210 (2002).
Ivanov D. A. Ivanov, Non-Abelian statistics of half-quantum vortices in p-wave superconductors, Phys. Rev. Lett. 86, 268 (2001).
KitaevTQC A. Kitaev, Fault-tolerant quantum computation by anyons, Ann. Phys. 303, 2 (2003).
DasTQC S. Das Sarma, M. Freedman, and C. Nayak, Topologically protected qubits from a possible non-Abelian fractional quantum Hall state, Phys. Rev. Lett. 94, 166802 (2005).
TQC C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Non-Abelian anyons and topological quantum computation, Rev. Mod. Phys.
80, 1083 (2008).
EzawaTQC Motohiko Ezawa, Systematic construction of topological-nontopological hybrid universal quantum gates based on many-body Majorana fermion interactions, arXiv:2304.06260.
Qi X.-L. Qi, S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83, 1057 (2011).
Alicea J. Alicea, New directions in the pursuit of Majorana fermions in solid state systems, Rep. Prog. Phys. 75, 076501 (2012).
Sato M. Sato and Y. Ando, Topological superconductors: a review, Rep. Prog. Phys. 80, 076501 (2017).
AliceaBraid J. Alicea, Y. Oreg, G. Refael, F. von Oppen and M.P.A. Fisher, Non-Abelian statistics and topological quantum information processing in 1D wire networks, Nat. Phys. 7, 412 (2011).
Kitaev01 A. Yu Kitaev, Unpaired Majorana fermions in quantum wires, Phys.-Usp. 44, 131 (2001).
Dvir Tom Dvir, Guanzhong Wang, Nick van Loo, Chun-Xiao Liu, Grzegorz P. Mazur, Alberto Bordin, Sebastiaan L. D. ten Haaf, Ji-Yin Wang, David van Driel, Francesco Zatelli, Xiang Li, Filip K. Malinowski, Sasa Gazibegovic, Ghada Badawy, Erik P. A. M. Bakkers, Michael Wimmer and Leo P. Kouwenhoven, Realization of a minimal Kitaev chain in coupled quantum dots, Nature 614, 445 (2023).
Bordin Alberto Bordin, Xiang Li, David van Driel, Jan Cornelis Wolff, Qingzhen Wang, Crossed Andreev reflection and elastic co-tunneling in a three-site Kitaev chain nanowire device, arXiv:2306.07696.
Tsin Athanasios Tsintzis, Ruben Seoane Souto and Martin Leijnse, Creating and detecting poor man's Majorana bound states in interacting quantum dots, Phys. Rev. B 106, L201404 (2022).
LiuB Chun-Xiao Liu, Haining Pan, F. Setiawan, Michael Wimmer and Jay D. Sau, Fusion protocol for Majorana modes in coupled quantum dots, Phys. Rev. B 108, 085437 (2023).
Koch Rouven Koch, David van Driel, Alberto Bordin, Jose L. Lado and Eliska Greplova, Adversarial Hamiltonian learning of quantum dots in a minimal Kitaev chain, arXiv:2304.10852.
TsinRev Athanasios Tsintzis, Ruben Seoane Souto, Karsten Flensberg, Jeroen Danon and Martin Leijnse, Roadmap towards Majorana qubits and nonabelian physics in quantum dot-based minimal Kitaev chains, arXiv:2306.16289.
Pino D. Michel Pino, Ruben Seoane Souto and Ramon Aguado, Minimal Kitaev–transmon qubit based on double quantum dots, arXiv:2309.12313.
Samu William Samuelson, Viktor Svensson and Martin Leijnse, A minimal quantum dot-based Kitaev chain with only local superconducting proximity effect, arXiv:2310.03536.
Mohse Mahan Mohseni, Hassan Allami, Daniel Miravet, David J. Gayowsky, Marek Korkusinski, Pawel Hawrylak, Majorana excitons in a Kitaev chain of semiconductor quantum dots in a nanowire, arXiv:2307.00100.
Sout Ruben Seoane Souto, Athanasios Tsintzis, Martin Leijnse and Jeroen Danon, Probing Majorana localization in minimal Kitaev chains through a quantum dot, arXiv:2308.14751.
Mile Sebastian Miles, David van Driel, Michael Wimmer and Chun-Xiao Liu, Kitaev chain in an alternating quantum dot-Andreev bound state array, arXiv:2309.15777.
Kao Hsien-chung Kao, Chiral zero modes in superconducting nanowires with Dresselhaus spin-orbit coupling, Phys. Rev. B 90, 245435 (2014).
Hedge Suraj Hegde, Vasudha Shivamoggi, Smitha Vishveshwara and Diptiman Sen, Quench dynamics and parity blocking in Majorana wires, New J. Phys. 17, 053036 (2015).
Zvy A. A. Zvyagin, Majorana bound states in the finite-length chain, Low Temp. Phys. 41, 625 (2015).
ZengC Chuanchang Zeng, Christopher Moore, Apparao M. Rao, Tudor D.
Stanescu, and Sumanta Tewari, Analytical solution of the finite-length Kitaev chain coupled to a quantum dot, Phys. Rev. B 99, 094523 (2019).
Leum Nico Leumer, Magdalena Marganska, Bhaskaran Muralidharan and Milena Grifoni, Exact eigenvectors and eigenvalues of the finite Kitaev chain and its topological properties, J. Phys.: Condens. Matter 32, 445502 (2020).
Lind G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys. 48, 119 (1976).
Yuce C. Yuce, Majorana edge modes with gain and loss, Phys. Rev. A 93, 062130 (2016).
Zeng Qi-Bo Zeng, Baogang Zhu, Shu Chen, L. You, and Rong Lu, Phys. Rev. A 94, 022119 (2016).
Kawabata Kohei Kawabata, Yuto Ashida, Hosho Katsura, Masahito Ueda, Parity-time-symmetric topological superconductor, Phys. Rev. B 98, 085116 (2018).
KawabataNC Kohei Kawabata, Sho Higashikawa, Zongping Gong, Yuto Ashida and Masahito Ueda, Topological unification of time-reversal and particle-hole symmetries in non-Hermitian physics, Nat. Com. 10, 297 (2019).
SatoX Kohei Kawabata, Ken Shiozaki, Masahito Ueda and Masatoshi Sato, Symmetry and Topology in Non-Hermitian Physics, Phys. Rev. X 9, 041015 (2019).
Shibata Naoyuki Shibata and Hosho Katsura, Dissipative spin chain as a non-Hermitian Kitaev ladder, Phys. Rev. B 99, 174303 (2019).
EzawaMajo Motohiko Ezawa, Braiding of Majorana-like corner states in electric circuits and its non-Hermitian generalization, Phys. Rev. B 100, 045407 (2019).
Zhao Xiao-Ming Zhao, Cui-Xian Guo, Su-Peng Kou, Lin Zhuang, and Wu-Ming Liu, Defective Majorana zero modes in a non-Hermitian Kitaev chain, Phys. Rev. B 104, 205131 (2021).
Lieu Simon Lieu, Non-Hermitian Majorana modes protect degenerate steady states, Phys. Rev. B 100, 085110 (2019).
Eek Lumen Eek, Anouar Moustaj, Malte Rontgen, Vincent Pagneux, Vassos Achilleos, and Cristiane Morais Smith, Emergent non-Hermitian models, arXiv:2310.11988.
Ront Malte Rontgen, Xuelong Chen, Wenlong Gao, Maxim Pyzh, Peter Schmelcher, Vincent Pagneux, Vassos Achilleos and Antonin Coutant, Latent Su–Schrieffer–Heeger models, arXiv:2310.07619.
Diehl Sebastian Diehl, Enrique Rico, Mikhail A. Baranov and Peter Zoller, Topology by dissipation in atomic quantum wires, Nature Physics 7, 971 (2011).
"authors": [
"Motohiko Ezawa"
],
"categories": [
"cond-mat.supr-con",
"cond-mat.mes-hall"
],
"primary_category": "cond-mat.supr-con",
"published": "20231027120623",
"title": "Robustness of Majorana edge states of short-length Kitaev chains coupled with environment"
} |
The inter-band correlations between the optical/UV and X-ray luminosities of active galactic nuclei (AGN) are important for understanding the disc-corona connection, as well as for using AGN as standard candles for cosmology. It is conventional to measure the X-ray luminosity at the rest-frame 2 keV and compare it to the UV luminosity at the rest-frame 2500 Å, but the wavelength dependence has never been well explored. In this work, we adopt a well-defined sample of 1169 unobscured quasars in the redshift range 0.13 – 4.51, and apply the direct-correlation method to explore how the correlation with the 2 keV luminosity changes at different optical/UV wavelengths, from 1280 – 5550 Å where the spectral quality is high. We find that the luminosity at all UV continuum wavelengths correlates with the X-ray luminosity similarly to that at 2500 Å, and that these correlations are better than those at the optical wavelengths. Strong self-correlation is also found in the broadband optical/UV continuum, supporting the scenario that it is dominated by the disc emission. Correlations of various emission lines are also investigated (e.g. C iv, C iii], Mg ii, Hβ, [O iii]λλ 4959/5007), including the Baldwin effect and correlations involving line widths. We find that the forms of these line correlations are different, and that they are also different from those of their underlying continua, suggesting various complexities in the line-generation process. We discuss these results in the disc-wind scenario. Our study confirms that the rest-frame 2500 Å is a good wavelength to represent the optical/UV continuum properties of quasars, and shows the advantages of the direct-correlation method.

accretion, accretion discs - galaxies: active - galaxies: nuclei.

§ INTRODUCTION

The relationship observed between the X-ray and optical/UV emission of active galactic nuclei (AGN) has been studied for decades (e.g. <cit.>). Observationally, these two quantities are correlated with a slope of ≃0.6 and an observed dispersion that varies from 0.4 to 0.2 dex depending on the sample selection. From a physical perspective, it is generally accepted that the primary hard X-ray emission arises from a compact hot corona (e.g. <cit.>) around the super-massive black hole, while the optical/UV continuum mainly comes from the outer region of the accretion disc (<cit.>). However, no adequate physical model able to explain such a relation between the X-ray and optical/UV emission exists yet (i.e. the physical connection between the disc and the warm/hot corona is unknown; see the recent review by <cit.>).

This non-linear UV/X-ray luminosity correlation is also important because it can be used to study cosmology <cit.>. Quasars can be employed as standard candles to measure distances (in particular at z > 2) and to constrain cosmological parameters (e.g. <cit.>). Furthermore, it has been verified, across a wide redshift range, that the slope of the UV/X-ray luminosity correlation does not show any statistically significant redshift evolution up to a redshift of about 6 (see e.g. <cit.>) and that the dispersion of the UV/X-ray luminosity correlation is mainly caused by measurement uncertainties and some intrinsic variability (e.g. <cit.>).

The adoption of the luminosity at 2500 Å is somewhat arbitrary.
This choice can be traced back to Maarten Schmidt's seminal paper in 1968 (<cit.>). His motivations for employing 2500 Å were that this wavelength is not affected by emission lines and lies in the middle of the wavelength range (i.e. 1700 – 3500 Å) where the continuum has a power-law shape. Additionally, the optical/UV continuum depends on the mass, mass accretion rate and spin of the black hole (<cit.>), and is also affected by other components such as dust extinction (e.g. <cit.>) and UV iron lines (e.g. <cit.>). A fundamental question is thus whether 2500 Å can effectively be a good proxy for the properties of the optical/UV continuum of quasars in general, and especially whether its correlation with X-rays is representative. Therefore, to provide a better understanding of the physics driving the X-ray-to-UV relation, one should analyse whether the luminosity correlation between the optical/UV and X-rays has a wavelength dependence <cit.>.

Recently, <cit.> performed a complete UV spectroscopic analysis of a sample of ≃1800 quasars with SDSS optical spectra and X-ray serendipitous observations. In the X-rays, they analysed the spectra of all the sample objects at redshift z > 1.9, while they considered catalogued photometric measurements at lower redshifts. They find that the monochromatic fluxes at 1 keV and 2500 Å are, respectively, the best X-ray and UV continuum indicators among those that are typically available. However, the best-fit slope that they obtained, by employing spectroscopic UV fluxes and analysing the X-ray-to-UV relation in narrow redshift bins, is somewhat flatter (≃0.46) than that observed with photometric values (≃0.57, see their Figure 3).

In addition, by correlating the optical/UV data with the X-ray luminosity, we can study the origin of the optical/UV continuum and various emission/absorption lines. <cit.> correlated the 2–10 keV luminosity with the optical luminosity for a sample of 51 AGN with Sloan Digital Sky Survey (SDSS) spectra, and derived the variation of the correlation coefficient with wavelength, namely the optical-to-X-ray correlation spectrum (OXCS). The main advantage of the OXCS is that it is not model-dependent (i.e. it does not depend on the modelling of the continuum and line profiles), so it can directly reflect the intrinsic correlations of the continuum and emission lines (including individual line components) with X-rays. The OXCS reported by <cit.> revealed various correlation features. For example, the strongest correlation was found between hard X-rays and [O iii], which is stronger than for the continuum and emission lines at other optical wavelengths. The Hβ broad line component correlates more strongly with X-rays than the narrow line component does. These results provide important clues for our understanding of the relationship between different emission line regions and their ionization sources. However, the sample in <cit.> is relatively small (51 AGN) and is limited to z < 0.4, so their SDSS spectra cannot cover the UV in the rest-frame. Therefore, it is necessary to use larger quasar samples covering wider ranges of redshifts. In addition, we can replace the X-ray luminosity with the UV luminosity, in order to examine the self-correlation of the emission in the optical/UV band. This UV self-correlation can be used to distinguish different optical/UV continuum components, and also to better understand the origins of the optical/UV lines.

Large quasar samples at various redshifts with well-calibrated SDSS spectra offer a good opportunity to address the questions mentioned above.
Hence, the main motivation of this work is to investigate the wavelength dependence of the optical/UV and X-ray luminosity correlations using a large sample of SDSS quasars, and so to test the basis for the choice of 2500 Å. We also extend the OXCS of <cit.> into the UV band, and explore the UV self-correlation spectrum. These results improve our understanding of the origin of the optical/UV continuum and various emission lines.

The structure of this paper is as follows. First, we describe the sample and its statistical properties in Section <ref>. Then in Section <ref> we show the optical/UV and X-ray correlation spectra and the optical/UV self-correlation spectra. Section <ref> presents the wavelength dependence of the regression parameters of the optical/UV and X-ray luminosity correlations. Discussions about the choice of 2500 Å, further correlation analyses and the origin of various optical/UV emission lines are presented in Section <ref>. The main results of this work are summarized in the final section. A flat universe model with H_0 = 72 km s^-1 Mpc^-1, Ω_Λ = 0.73, Ω_ M = 0.27 is adopted throughout the paper.

§ THE QUASAR SAMPLE

§.§ Sample Selection

The study of the optical/UV and X-ray luminosity correlation requires a large, unobscured quasar sample with high-quality optical/UV and X-ray data. The parent sample of this work is taken from <cit.>, which contains 2421 quasars with optical spectroscopy from the Sloan Digital Sky Survey Data Release 14 (SDSS DR14) and X-ray observations from either XMM-Newton or Chandra. This parent sample is one of the cleanest quasar samples, in the sense that all quasars with strong extinction (i.e. dust reddening and X-ray absorption), host galaxy contamination and Eddington bias have been excluded (see <cit.> for more details). Since the aim of this work is to explore the wavelength dependence of the inter-band correlation, it is necessary to use the SDSS spectra with the best absolute flux calibration.

In fact, there is a known issue with the flux calibration of quasar spectroscopy in DR12 and prior data releases <cit.>. Briefly, BOSS quasar targets are observed with an offset on the focal plane to increase the S/N around Lyα, but since the standard stars are not observed in the same fashion, the derived flux correction is not suited for quasars. This results in overall bluer quasar spectra. A correction has been developed and applied to the individual exposures and the final coadded spectra in DR14 and later data releases, which improved the flux calibration, although there are still significant residuals[<https://www.sdss3.org/dr9/spectro/caveats.php>] <cit.>. We thus crossmatch the parent sample with the SDSS DR7 catalogue, finding 1278 unique sources. All of these sources have the sciencePrimary flag equal to 1, indicating that the spectra are good for scientific analysis[<https://skyserver.sdss.org/dr7/en/help/docs/glossary.asp>]. There are 128 sources having multiple SDSS spectra, for which we choose the spectrum with the highest signal-to-noise ratio (S/N) for our subsequent analysis. We then visually inspect all the SDSS spectra and further exclude 105 sources whose spectra contain data gaps, whose averaged S/N is < 10 (i.e. we require sn_median_all > 10, the S/N per observation over the entire wavelength range and for all observations), or whose continuum shows a suspicious down-turn towards the blue end of the spectrum, which is most likely due to reddening.
The final clean sample contains 1169 quasars.

§.§ Data Preparation

The SDSS DR7 spectra were downloaded from the SDSS Data Archive Server[<http://das.sdss.org/www/html/das2.html>]. For the purpose of investigating the detailed wavelength dependence of various correlation parameters, we must rebin all de-redshifted SDSS spectra onto common wavelengths. A relatively low spectral resolution of λ/Δλ=200 is adopted to increase the S/N in each wavelength bin. Using a larger λ/Δλ would mainly result in noisier spectra, but would not change the results of this work. The wavelength coverage of SDSS is 3800 – 9200 Å, but the instrument throughput decreases towards the edges[<http://classic.sdss.org/dr7/instruments/spectrographs/index.html>] where the S/N also decreases, and so we restrict the spectral range to 4000 – 8600 Å for the following analysis. All spectra are de-reddened for Galactic extinction along their line of sight, using the dust map of <cit.> and the reddening curve of <cit.>, and then de-redshifted to their respective individual rest-frames.

The catalogue table of <cit.>[<https://cdsarc.cds.unistra.fr/viz-bin/cat/J/A+A/642/A150>] contains the X-ray flux at 2 keV (rest-frame) and the UV flux at 2500 Å from photometry. However, the photometric flux may include contamination from both emission and absorption lines (e.g. strong UV Fe ii lines) in the filter's bandpass, so we considered the spectral flux to study the correlation in our analysis. We then check whether there is statistical agreement between the photometric and the spectral flux values. We first convert all the fluxes to luminosities, and then correlate the photometric luminosity with the spectral luminosity at 2500 Å. Note that only 902 sources in our sample are used for this analysis, because these all lie in the redshift range of 0.60 – 2.44 (corresponding to 4000 – 8600 Å), and so their SDSS spectra can cover the rest-frame 2500 Å.

As shown in the left panel of Figure <ref>, the two estimates of the luminosity at 2500 Å are very consistent with each other. We apply two regression algorithms to check the correlation. The first is the orthogonal distance regression (ODR) algorithm, which treats the two variables symmetrically. The second is the LINMIX algorithm[<https://linmix.readthedocs.io/en/latest/src/linmix.html>] (<cit.>), which uses Markov Chain Monte Carlo (MCMC) sampling and returns a reliable estimate of the intrinsic scatter. Both algorithms take the uncertainties on both variables into account (see <cit.> for more details). The ODR algorithm obtains a slope of 1.0046 ± 0.0044 and an intercept of -0.119 ± 0.136 (blue dashed line), and the LINMIX algorithm finds a slope of 0.998 ± 0.005 and an intercept of 0.094 ± 0.148 (green dashed line). Both are consistent with the X=Y relation (red solid line). The intrinsic scatter is found to be 6.63 ± 0.17 per cent. These results confirm that the photometric and spectral fluxes are all acceptably well calibrated, and that the contamination by nearby emission (or absorption) lines in the photometric flux is not significant. Therefore our SDSS spectra can be used to study the wavelength dependence of the correlations, and the results can then be compared with previous studies based on the photometric flux at 2500 Å.

§.§ Sample Properties and Distributions

The redshift distributions of the luminosities at 2500 Å and 2 keV are shown in the right panel of Figure <ref>. These are similar to those of the parent sample in <cit.>.
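The photometric-versus-spectroscopic regression comparison described in the previous subsection can be sketched in a few lines of Python. The snippet below is illustrative only: it uses mock data in place of the real 902-source subsample, and the LINMIX call follows the public Python port of the <cit.> algorithm, whose exact interface may differ from the version used in this work.

import numpy as np
from scipy import odr
import linmix   # python port of the Kelly (2007) Bayesian regression

# mock data standing in for log L_2500(phot) (x) and log L_2500(spec) (y)
rng = np.random.default_rng(0)
x = rng.uniform(29.5, 32.0, 902)
y = x + rng.normal(0.0, 0.03, x.size)          # ~7 per cent intrinsic scatter
xerr = np.full_like(x, 0.02)
yerr = np.full_like(y, 0.02)

# 1) orthogonal distance regression: treats the two variables symmetrically
model = odr.Model(lambda p, xx: p[0] * xx + p[1])
fit = odr.ODR(odr.RealData(x, y, sx=xerr, sy=yerr), model, beta0=[1.0, 0.0]).run()
print("ODR slope, intercept:", fit.beta)

# 2) LINMIX: MCMC regression that also returns the intrinsic scatter
lm = linmix.LinMix(x, y, xsig=xerr, ysig=yerr)
lm.run_mcmc(silent=True)
print("LINMIX slope    :", lm.chain['beta'].mean())
print("LINMIX intercept:", lm.chain['alpha'].mean())
print("intrinsic scatter (dex):", np.sqrt(lm.chain['sigsqr']).mean())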
Since we applied an S/N cut, some high-redshift sources in the parent sample were excluded due to their low spectral quality, so the redshift range of the current sample is 0.13 – 4.51 with a mean redshift of 1.37 (see the distributions of various parameters in Table <ref> and Figure <ref>a). The mean spectral S/N is 66.8 for the entire sample, confirming that the sample has good SDSS spectral quality. The luminosity at 2 keV covers the range of 42.7 – 46.2 with a mean value of 44.3 (erg s^-1 in logarithm), and the luminosity at 2500 Å has a range of 44.5 – 47.5 with a mean value of 45.8. Then the SDSS DR14 catalogue of <cit.> is used to extract the black hole masses for the entire sample. These are the fiducial single-epoch virial masses measured from various broad emission lines in the optical/UV band. The mass range covered by our sample is 7.04 – 11.09 with a mean value of 8.92 in logarithm. Then, given the optical/UV luminosity and the black hole mass, the mass accretion rate through the outer disc (ṁ=Ṁ/Ṁ_ Edd) can be measured for every source (e.g. <cit.>). This is done by fitting the SDSS spectrum with the AGN broadband spectral energy distribution (SED) model optxagnf (<cit.>), with the black hole mass fixed at the virial mass, the spin fixed at zero and R_cor fixed at the innermost stable circular orbit, i.e. assuming the accretion flow is a standard disc[A detailed SED study of the sample will be published in another paper.]. Note that ṁ is not necessarily equal to the Eddington ratio (L_ bol/L_ Edd), because the bolometric luminosity (L_ bol) also depends on the radiative efficiency of the accretion flow, which in turn depends on the physical properties of the flow and the black hole spin (e.g. <cit.>). The viewing angle may also affect the observed L_ bol (e.g. <cit.>). Hence we prefer to use ṁ as a more robust indicator of the mass accretion state of an AGN. We find that ṁ of the sample covers a range of -2.70 – 1.09 with a mean value of -1.14 in logarithm (i.e. the mean ṁ is 7.2 per cent).

Besides the overall statistical properties presented above, we also check the sample distribution and quality at various wavelengths. Firstly, we examine the sample size at different wavelengths in the rest-frame, as shown in Figure <ref>a. The maximum number (926 quasars) is found at the wavelength of 2672 Å, and the number then decreases towards both sides. There are 903 quasars at 2500 Å. We choose a conservative threshold of n=50 to ensure that the correlation analysis at each wavelength is statistically robust. This threshold corresponds to a wavelength cut at 6360 Å, and so we restrict the following analysis to the rest-frame wavelength range of 1000 – 6360 Å.

Then, we check the redshift distribution and the mean S/N of the sources in each wavelength bin, as shown in Figures <ref>b and <ref>c. The sample's redshift range increases naturally towards short wavelengths. At λ∼ 1000 Å, the redshifts are mostly above 3. The redshift range at 2500 Å is 0.61 – 2.42. For the S/N, the mean value reaches its maximum of 67 at the Mg ii line, and decreases towards both sides. The mean S/N at 2500 Å is 51.5. The wavelength range with the mean S/N ≥ 10 is 1280 – 5550 Å, where the results of the correlation analysis should be more reliable.

Finally, we examine the composite optical/UV spectrum for the entire sample. As described in <cit.>, the composite spectrum is sensitive to the method of assembly.
Since the optical/UV underlying continua of quasars generally have a power-law shape, we choose the geometric mean algorithm, so that the continuum shape of the composite spectrum reflects the mean slope of the input sample. To further suppress the bias due to the sample variation at different wavelengths, we divide the sample into three subsamples for three redshift ranges (0.13 – 0.8, 0.8 – 1.8, 1.8 – 4.51), and produce a composite spectrum in each interval. With these three composite spectra, we use the geometric mean to connect the different spectra in the two wavebands of 2300 – 3000 Å and 1470 – 3000 Å, and finally derive a total composite spectrum. The absolute flux of the composite spectrum is trivial, so the choice of the above wavelength ranges is arbitrary. The resulting composite spectrum[Our new quasar composite spectrum can be downloaded from <https://www.dropbox.com/s/mk3zgwnxe9w66tr/quasar_compspec_Jin2023.txt?dl=0>] is shown in Figure <ref>d, and is compared to the quasar composite spectrum of <cit.>, which contains 2204 quasars covering the redshift range of 0.044 – 4.789. We find that these two composite spectra are very similar. The main difference is that the underlying continuum of our new composite spectrum is slightly steeper, which is likely because the parent sample of this study has been filtered more strongly to exclude sources with significant extinction and host galaxy contamination (<cit.>).

§ THE DIRECT CORRELATION SPECTRUM

The correlation between the optical/UV and X-ray luminosities of quasars has been extensively studied, generally based on their luminosity at 2500 Å. The method of the direct optical-to-X-ray correlation spectrum (OXCS, <cit.>) offers a new model-independent way to investigate the correlation with X-rays at different wavelengths in the optical band. By applying this method to a sample of 51 unobscured AGN with high-quality multi-wavelength data, <cit.> found that the optical continuum is generally well correlated with the X-ray emission, and that some principal emission lines (e.g. the broad Hβ, [OIII] λ5007) correlate better with the X-rays than with the continuum. In this section we extend the OXCS method by applying it to a much larger sample and over a wider wavelength range, including both optical and UV, and then replace the X-ray luminosity with the UV luminosity to explore, for the first time, the optical/UV self-correlation.

§.§ Optical/UV and X-ray Correlation Spectrum (OUXCS)

§.§.§ The First Type: OUXCS-1

Following the OXCS method, we correlate the monochromatic luminosity at each wavelength within 1000 – 6300 Å with the 2 keV luminosity (L_ 2 keV) for the subsample available at that wavelength, and then plot the Pearson correlation coefficient against the wavelength (hereafter: the first type of optical/UV and X-ray correlation spectrum, OUXCS-1). The results are shown in Figure <ref>a (black solid line). The data below 1280 Å and above 5550 Å are affected by the low spectral quality (see Figure <ref>c), so we only consider the results in the range of 1280 – 5550 Å to be reliable.

Firstly, OUXCS-1 confirms the results reported previously by <cit.>, including the strong correlation of the optical continuum and the stronger line correlations for Hβ and [O iii]λ5007. Secondly, OUXCS-1 shows that L_ 2 keV correlates more strongly with the UV continuum below 3500 Å than with the optical range.
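The construction of OUXCS-1 amounts to a per-wavelength Pearson correlation over whichever sources cover that rest-frame bin. A schematic implementation (our own sketch; the array layout is an assumption) is:

import numpy as np
from scipy.stats import pearsonr

def correlation_spectrum(logL_lambda, logL_ref, min_n=50):
    """Pearson correlation coefficient per wavelength bin between the
    monochromatic luminosity and a reference luminosity (e.g. 2 keV).
    logL_lambda: (N_source, N_wave) array, NaN where a source lacks coverage;
    logL_ref:    (N_source,) array of reference luminosities."""
    n_wave = logL_lambda.shape[1]
    r = np.full(n_wave, np.nan)
    for i in range(n_wave):
        ok = np.isfinite(logL_lambda[:, i]) & np.isfinite(logL_ref)
        if ok.sum() >= min_n:                  # same n = 50 threshold as above
            r[i], _ = pearsonr(logL_lambda[ok, i], logL_ref[ok])
    return r

Replacing logL_ref by the photometric 2500 Å luminosity gives the self-correlation spectrum (OUSCS) discussed later in this section.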
In addition, OUXCS-1 shows that the correlations with emission lines such as Mg ii, C iii] and C iv are stronger than with their local continua.

Figures <ref>b and <ref>b show that the subsamples at different wavelengths span different redshift and luminosity ranges, thus the general shape of the observed OUXCS-1 is mainly driven by the accretion disc continuum properties of the subsample at different wavelengths. To quantify this effect, we consider the best-fit disc model of optxagnf for every source (see Section <ref>) and recalculate OUXCS-1 directly from the model. The result is shown in Figure <ref>a (red solid line). Indeed, we find that the shape of the model's OUXCS-1 follows the observed OUXCS-1 well, including the drop in correlation at longer wavelengths. The blue line (right-hand axis) shows the range of X-ray luminosity spanned by the sample at each wavelength. A larger span in luminosity gives a better correlation, as the trend is dominated by the intrinsic spectral changes rather than the dispersion (see Figure <ref>). Therefore, the general shape of the observed OUXCS-1 is mainly due to the properties of the subsample at different wavelengths. There are clearly some correlated emission line features in the observed OUXCS-1, and differences also exist in the correlated continuum below 1500 Å and above 4000 Å, thus the difference between the general shapes of the model and the observed OUXCS-1 in these ranges could be due to additional spectral features (e.g. emission lines, the Fe ii complex) with respect to the accretion disc continuum.

We then examine the possible presence of biases driven by the subsample size and luminosity range, separately. To assess the bias of the subsample size, we first randomly pick 50 sources from the subsample at each wavelength, and then calculate the correlation coefficient again. In this case, the sample size is wavelength-invariant. These random objects are shown with grey points in Figure <ref>a. For visualisation purposes, we rebin the data by using 50 data points per interval (orange points). The rebinned correlation curve is consistent with the original OUXCS-1. We then randomly select subsamples with sizes ranging from 50 to 500 at the wavelength of 2500 Å and calculate their correlation coefficients. We find that the correlation coefficient does not change significantly with the sample size (see Figure <ref>b). Therefore, we conclude that the change of subsample size has a negligible effect on the shape of OUXCS-1.

The change of the subsample's luminosity range (i.e. the difference between the maximum and minimum luminosity of the subsample at each wavelength) may also affect the correlation. The blue dashed line in Figure <ref>a shows the range of 2 keV luminosity of the subsample at each wavelength. It peaks at ∼ 3000 Å and decreases towards both sides, which is similar to the general shape of OUXCS-1. To understand this effect, we plot the correlations at a series of wavelengths in Figure <ref>. It shows that the intrinsic dispersion is similar at these wavelengths (also see Table <ref>), and the main difference is the optical/UV luminosity range. It also shows that as the optical/UV luminosity range decreases, the intrinsic dispersion impacts the correlation more strongly. This can be demonstrated in a more quantitative way with the sample of 903 quasars at 2500 Å. The mean 2500 Å luminosity of this subsample is 30.7 in units of logarithmic erg s^-1 Hz^-1. We then took the subsample within the luminosity range of 30.7 ± x and calculated the Pearson correlation coefficient ρ_ p.
We found that as x decreases from 1.5 dex to 1.0, 0.75, 0.5, 0.25, 0.15 dex, ρ_ p decreases from 0.78 to 0.74, 0.70, 0.60, 0.31, 0.17. Thus we confirm that the shape of OUXCS-1 is affected by the subsample's luminosity range at different wavelengths. However, we also notice that below ∼ 2000 Å the shape of OUXCS-1 differs from that of the luminosity-range curve, implying that not all of the curvature seen in OUXCS-1 can be explained by the change of luminosity range.

§.§.§ The Second Type: OUXCS-2

To minimise the effect of the luminosity range, we develop another method to construct the OUXCS for a wide wavelength range. Firstly, we divide the entire sample into seven subsamples covering seven different wavelength ranges, namely 1000 – 1550 Å, 1200 – 1800 Å, 1500 – 2500 Å, 2200 – 3500 Å, 2900 – 4500 Å, 4400 – 5800 Å and 5500 – 6000 Å. Then we calculate the OUXCS for each one of them. The seven OUXCS are shown by different colours in Figure <ref> Panel-a1. In this case, each OUXCS is built from a single fixed subsample, so its shape is not affected by the change of luminosity range; only the normalization is affected. Then we renormalize the OUXCS of the subsample at ∼ 5000 Å (red line) to match OUXCS-1 within 4900 – 5100 Å, and then renormalize the rest of the OUXCS one by one so that they all join together smoothly. We take the average value within their overlapping wavelength ranges. This joined OUXCS is called the second type of OUXCS (hereafter: OUXCS-2), as shown by the grey line in Figure <ref> Panel-a1.

The selection of the wavelength ranges is mainly based on the width of each range and the size of its subsample. The larger the wavelength range, the fewer sources within that range, and the lower the representativeness of the correlation. Meanwhile, it is also necessary to have some overlap between different wavelength ranges, so as to evaluate the difference in correlation between different subsamples in the overlap region. The specific choice of wavelength ranges has a certain degree of arbitrariness, but since the correlation spectra of different subsamples are basically consistent in the overlap regions, choosing different wavelength ranges should not bring significant differences to the general shape of OUXCS-2. Thus we can conclude that the effect of the luminosity range is minimized in OUXCS-2.

Figure <ref> Panel-a2 compares these two types of OUXCS. OUXCS-2 appears much flatter than OUXCS-1, which is because the bias in luminosity range is minimized in OUXCS-2, and both confirm similarly good correlations across 1280 – 5550 Å. OUXCS-1 and OUXCS-2 also show similar line correlations, including a stronger correlation in the broad emission line component of both Mg ii and Hβ, but a weaker one in the narrow component. The correlations at C iv, C iii] and [O iii]λλ4959/5007 are also enhanced significantly. Interestingly, there is some evidence that the Fe ii complex on both sides of Hβ seems to behave in the opposite way to the other emission lines. In addition, both OUXCS display a stronger correlation with the UV continuum than with the optical, with a correlation parameter that peaks at 2500 – 3800 Å. The statistical error on the OUXCS is ∼ 0.01, so the difference between optical and UV in OUXCS-2 is statistically significant.
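One possible implementation of the joining step used for OUXCS-2 (and for OUSCS-2 below) is sketched here; it is our own schematic version, in which each per-window correlation curve is defined on the common wavelength grid with NaN outside its window, rescaled to the running curve in the overlap and then averaged.

import numpy as np

def stitch(curves):
    """Join per-window correlation curves into a single spectrum."""
    total = curves[0].copy()
    for piece in curves[1:]:
        overlap = np.isfinite(total) & np.isfinite(piece)
        if overlap.any():                      # rescale to match in the overlap
            piece = piece * np.nanmean(total[overlap]) / np.nanmean(piece[overlap])
        total = np.nanmean(np.vstack([total, piece]), axis=0)
    return total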
§.§ Optical/UV Self-Correlation Spectrum (OUSCS)

While the 2 keV luminosity may physically represent the intensity of the corona emission, including both the hot corona and probably some fraction of the warm corona, the 2500 Å luminosity could be dominated by emission from the disc itself (although an extended warm corona may also contribute significantly to the UV emission for some AGN, see <cit.>). Thus, by taking the luminosity at 2500 Å, instead of 2 keV, as the correlation variable, we can examine the self-correlation of the disc emission in the optical/UV, as well as any correlations of various emission lines with the disc continuum.

The photometric luminosity at 2500 Å (L_ 2500,phot) is used because it is available for the entire sample <cit.>. This new version of the correlation spectrum is named the optical/UV self-correlation spectrum (OUSCS). Likewise, we first create the first type of OUSCS, namely OUSCS-1, as shown in Figure <ref>b (black solid line). The self-correlation with L_ 2500,phot approaches unity across the entire waveband, and is much stronger than the correlations with L_ 2 keV; the self-correlation is also stronger in the UV band than in the optical band.

There are many absorption features in OUSCS-1, indicating that the correlation of the emission lines with the UV luminosity is weaker than with their underlying continua. To examine the effect of sample bias, we also calculate the OUSCS-1 from the disc model, as shown by the red solid line in Figure <ref>b. The comparison between the observed and model OUSCS-1 suggests that the overall shape is driven by the disc emission, whilst the small-scale residual fluctuations in the observed OUSCS-1 have a different origin than the disc continuum (e.g. emission lines). The observed OUSCS-1 remains flat below 2000 Å, which is mostly because the input disc model does not provide a good match to the observed spectra in the UV band. This finding is consistent with those published by <cit.>. By analysing a sample of ≃700 SDSS AGN, they found that the data do not match the predictions made by any current accretion flow model. Specifically, they observed that either the disc is completely covered by a warm Comptonisation layer, whose properties change with accretion rate, or the accretion flow structure is different to that of the standard disc models (see their Figure 10 and discussion in their Section 5).

The gradual decrease of the observed OUSCS-1 towards longer wavelengths also seems to follow the decrease of the luminosity range (blue dashed line), so we create the OUSCS within the aforementioned seven different wavebands, each of which is based on the same subsample (see Figure <ref> Panel-b1). These individual short OUSCS are then joined together, and renormalized to OUSCS-1 within 4900 – 5100 Å, to create the second type of OUSCS, namely OUSCS-2.

Figure <ref> Panel-b2 compares the two types of OUSCS. Likewise, OUSCS-2 is flatter than OUSCS-1 after suppressing the bias of the luminosity range. The correlations remain good across the entire optical/UV band, and the UV continuum shows slightly better correlations than the optical. The statistical error of the OUSCS is ∼ 0.001, so the difference between optical and UV in OUSCS-2 is also statistically significant. The peak correlation lies within 2500 – 3500 Å. In addition, OUSCS-2 also shows various absorption-like features. Similar to OUXCS-2, the broad component of Hβ shows a stronger correlation, while the correlation of the narrow component is weaker.
However, C iv, C iii], Mg ii and [O iii]λλ4959/5007 all appear absorption-like, suggesting that their correlations with L_ 2500,phot are weaker than with their underlying continua. The absorption-like features at C iv and [O iii] are particularly strong. This is probably due to their higher ionization energies (> 50 eV), which also lead to their higher correlations with the 2 keV luminosity in the OUXCS. We discuss these in more detail in Section <ref>.

§ WAVELENGTH DEPENDENCES OF THE INTER-BAND REGRESSION PARAMETERS

Since the OUXCS shows consistently strong inter-band correlations between the optical/UV and X-rays, it is instructive to derive the regression parameters at every wavelength and to examine their wavelength dependences. For ease of comparison with previous studies (e.g. <cit.>), we use the following equation and then apply the same LINMIX algorithm to perform a regression analysis at each wavelength, log(L_ 2 keV)=γlog(L_ optuv) + β, where L_ 2 keV and L_ optuv are the X-ray and optical/UV luminosities in units of erg s^-1 Hz^-1. The regression parameters include the slope γ and intercept β, as well as the intrinsic (random) scatter δ around the regression. Figure <ref> plots these parameters as a function of wavelength.

Firstly, for the direct comparison with previous studies, we derive the regression parameters at 2500 Å: γ=0.643±0.017, β=6.92±0.52 and δ=0.229±0.006. These results are statistically consistent with previous studies using the photometric luminosity at 2500 Å (e.g. <cit.>). Secondly, we find that γ is 0.6 – 0.7 and β is 6 – 8 for the entire continuum within 1250 – 5550 Å where the sample-averaged S/N is > 10, suggesting that the slope and intercept do not have strong wavelength dependences. Thirdly, we find that the intrinsic scatter at 2500 Å is small compared with the other wavelengths. These results suggest that 2500 Å is indeed a good, representative choice that can be used to study the optical/UV and X-ray luminosity correlations.

However, Figure <ref> also shows that the regression parameters are sensitive to the presence of emission lines (e.g. C iv, C iii], Mg ii, Hβ and [O iii]λλ4959/5007). Where a spectral line is present, the slope and the intercept change significantly. This result suggests that, compared with the optical/UV continuum, the emission lines have different correlations with the X-rays. Furthermore, we also find that different velocity components of a single emission line can exhibit different correlations. For example, there is a significant blue wing in the composite C iv and C iii] line profiles (see Figure <ref>d and Section <ref>), but this wing component does not emerge in Figure <ref>. This is consistent with the recently reported lack of a correlation between the velocity of these lines and the X-ray luminosity (see Fig. 13 in <cit.>).

To investigate the correlations for the emission lines and their underlying continua separately, we perform local line-profile fitting to separate the line flux from its underlying continuum. The continuum is fitted locally by a straight line, and the emission lines are fitted with multiple Gaussian components. This is performed locally for the C iv, C iii], Mg ii, Hβ and [O iii]λ5007 lines, for the sources whose SDSS spectra exhibit these lines. Examples of the line-fitting results are shown in Figure <ref>. The correlations for individual lines and continua are shown in Figure <ref>, and the regression parameters are listed in Table <ref>.
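The local line-profile fitting can be illustrated with a short scipy sketch; the fitting window, number of components and initial guesses below are placeholders chosen for illustration, not the values adopted in this work.

import numpy as np
from scipy.optimize import curve_fit

def line_model(wave, a, b, *gpars):
    """Straight-line local continuum plus a sum of Gaussian components
    (amplitude, centre, sigma for each component)."""
    model = a + b * (wave - wave.mean())
    for amp, cen, sig in zip(gpars[0::3], gpars[1::3], gpars[2::3]):
        model += amp * np.exp(-0.5 * ((wave - cen) / sig) ** 2)
    return model

# mock Mg ii window (broad + narrow component), purely for illustration
wave = np.linspace(2700.0, 2900.0, 400)
rng = np.random.default_rng(1)
flux = line_model(wave, 1.0, 0.0, 4.0, 2800.0, 25.0, 1.5, 2800.0, 8.0)
flux += rng.normal(0.0, 0.05, wave.size)

p0 = [1.0, 0.0, 3.0, 2800.0, 30.0, 1.0, 2800.0, 5.0]   # continuum + 2 Gaussians
popt, _ = curve_fit(line_model, wave, flux, p0=p0)
line_only = line_model(wave, 0.0, 0.0, *popt[2:])      # continuum-subtracted profile
print("integrated line flux:", np.trapz(line_only, wave))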
Some quasars in our sample also display strong absorption features in the C iv, C iii] lines, which can severely affect the line profile modelling. Other quasars may have incomplete line profiles because the line is located at the edge of the spectral range. We exclude these sources. This is why for some lines (C iv, C iii], Hβ) listed in Table <ref> the number of sources with a line flux is slightly smaller than for the continuum flux.We find that the emission lines and continua are both well correlated with the X-ray luminosity. The underlying continua of all the lines show a regression slope of 0.6 – 0.7 and intercept 6 – 8, fully consistent with other wavelengths. But the parameters found for individual lines are indeed significantly different from the continua (see Table <ref>), confirming that these optical/UV lines have different correlations with the X-ray luminosity.Similar results are obtained when we correlate these lines and their underlying continua with the 2500 Å luminosity (i.e. L_ 2500, phot). The underlying continua show consistent regression parameters with the other wavelengths, while the lines themselves show different parameters. For AGN in general, there is an anti-correlation between the continuum luminosity and the equivalent width of broad emission lines, which is known as the Baldwin effect (). Such an anti-correlation can be found for various optical/UV broad emission lines, although the slope can change significantly with the ionizing potential (). For example, <cit.> used the luminosity at 1450 Å as the continuum luminosity, and then reported the slope to be -0.20 ± 0.03 for C iv, -0.09 ± 0.02 for C iii], -0.09 ± 0.01 for Mg ii and 0.01 ± 0.03 for Hβ. Note that in the Baldwin effect the continuum luminosity is used as the independent variable. For our sample we treat L_ 2500, phot as the independent variable and derive the correlation and slope for individual lines (see Figure <ref>). We find the slope to be -0.14 ± 0.04 for C iv, -0.08 ± 0.03 for C iii], -0.08 ± 0.01 for Mg ii and 0.10 ± 0.02 for Hβ. Therefore, with our large and clean quasar sample, we confirm the slopes of the Baldwin effect for various broad emission lines as reported by <cit.>.Furthermore, we find the slope to be -0.16 ± 0.05 for [O iii] λ5007 line, which was not reported by <cit.> for the high-luminosity range.§ DISCUSSION§.§ Is 2500 Å Representative for the Optical/UV Continuum? Based on the results presented in the previous sections we can addressthe question proposed at the beginning of this paper: whether the conventional choice of 2500 Å is a good choice to represent the optical/UV luminosity and its correlation with the X-rays, for the large quasar sample. As shown by the OUXCS-2 in Figure <ref> Panel-a2 , there is a strong correlation with the X-ray luminosity at every wavelength within 1280 – 5550Å if the spectral quality is high. The correlation coefficients found at other wavelengths are consistent with those at 2500 Å. The weak wavelength-dependence of the regression parameters shown in Figure <ref> also suggest that the slope and intercept found at 2500 Å are similar to those found at the other wavelengths. The regression parameters only change significantly at the wavelengths covering strong emission lines. But after these lines are subtracted, their underlying continua still show similar correlations as seen at other wavelengths. 
Therefore, we conclude that the results of optical/UV and X-ray inter-band correlations found by previous studies using 2500 Å are indeed robust and representative for the entire optical/UV continuum, provided that the quasar sample is composed of unobscured objects with high-quality data. Our results also highlight the efficiency of the filtering steps we applied for selecting unobscured quasar samples, as described in <cit.>. §.§ Various Measurements of L_ 2500 The luminosity at the rest frame 2500 Å (L_ 2500) can be measured in different ways: from the photometric spectral energy distribution (i.e. L_ 2500, phot, thus including the contribution of both the continuum and emission/absorption lines), or directly from the spectrum (i.e. L_ 2500, spec). Note that there are only 902 sources whose SDSS spectra cover 2500 Å in the rest-frame, so only this subsample has L_ 2500, spec measurements. For our sample, the systematic difference between L_ 2500, phot and L_ 2500, spec is only 6.6 per cent (see Figure <ref>), so their regression results are expected to be similar. Indeed, Figures <ref>a and <ref>b show the regression results for the two measurements of L_ 2500. No significant difference is found between the regression parameters, including a similar level of intrinsic dispersion. We note that the flux at 2500 Å contains both the continuum and the blend of Fe ii UV emission lines (), so it is also useful to examine whether the UV Fe ii emission changes the regression results at 2500 Å significantly. For this purpose, we use a series of line-free wavelengths to determine the UV continuum, as shown in Figure <ref>. We then visually inspect each spectrum to ensure the quality. Then L_ 2500 is measured from the continuum (i.e. L_ 2500, cont). In most cases we use the flux at 2230 Å as one continuum point, which results in a reduction in the sample size. Some sources are also excluded due to the poor quality at the blue end of the spectrum. Thus the final subsample with L_ 2500, cont measurements has only 778 sources. We find that L_ 2500, spec is statistically larger than L_ 2500, cont by only 6.4 ± 5.4 per cent. Figure <ref>c shows the regression result for L_ 2500, cont, and the results are still consistent with the other two measurements of L_ 2500, including a similar intrinsic dispersion of 0.231 ± 0.006 dex. Therefore we conclude that the regression results for the L_ 2500 vs. L_ 2 keV relation are not affected by different measurements of L_ 2500 in this study. §.§ The L_ 2 keV – L_ 2500 – ν_ fwhm Plane <cit.> adopted a simplified disc-corona model and assumed that the radius of the broad line region (BLR) is proportional to the square-root of the disc luminosity, and then proposed the theoretical relation: L_ 2keV∝ L_ optuv^4/7 ν_ fwhm^4/7, where ν_ fwhm is the virial velocity of the BLR. This relation was confirmed by their statistical analysis of a sample of 545 optically selected quasars. In that work, the Mg ii line was used to measure ν_ fwhm. Since line-profile fitting has been conducted for all the sources (see Section <ref>), we can perform 2-dimensional (2D) regression in the L_ 2 keV – L_ 2500 – ν_ fwhm plane to test the above theoretical relation. This 2D regression is defined as: log(L_ 2 keV) = γ_1 log(L_ 2500 ) + γ_2 log(ν_ fwhm) + β_1, where γ_1, γ_2 and β_1 are the free parameters to be fitted. Thus γ_1 and γ_2 are both equal to 4/7 for the theoretical relation.
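As a minimal sketch (ours, not the LINMIX-based fit actually used in this paper), the plane regression amounts to a least-squares fit with two predictors; the array names below are illustrative.

```python
import numpy as np

def plane_regression(logL2500, log_vfwhm, logL2keV):
    """Sketch of log(L_2keV) = g1*log(L_2500) + g2*log(v_fwhm) + b1.

    All inputs are 1D arrays over the subsample that has the three measurements.
    Returns the coefficients (g1, g2, b1) and the rms scatter about the plane.
    """
    A = np.column_stack([logL2500, log_vfwhm, np.ones_like(logL2500)])
    coeffs, *_ = np.linalg.lstsq(A, logL2keV, rcond=None)
    g1, g2, b1 = coeffs
    scatter = np.std(logL2keV - A @ coeffs)
    return (g1, g2, b1), scatter
```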
We also define δ_1 as the intrinsic dispersion of this plane, to distinguish it from the intrinsic dispersion (δ) of the L_ 2 keV – L_ 2500 relation. We conducted the 2D regression using the ν_ fwhm of C iv, C iii], Mg ii and Hβ, respectively. The coefficients are listed in Table <ref>. The planes seen edge-on are shown in Figure <ref>. The subsample of Mg ii has 836 quasars, each with measurements of L_ 2 keV, L_ 2500,phot and the Mg ii line width. We find γ_1=0.623±0.017 and γ_2=0.501±0.046, which are statistically consistent with both the theoretical values and the results reported by <cit.> for the same line but a different, larger sample. We also find that, for the same subsample, the dispersion of the L_ 2 keV – L_ 2500 relation decreases by 0.012±0.003 dex when the ν_ fwhm of Mg ii is included in the regression (see Table <ref>). For the Hβ line, the subsample of 209 quasars displays a relatively small dynamic range in luminosity (most of the data lies in the range 27.5 – 28.5, ∼1 dex, see Figure <ref>), and the intrinsic dispersion of the relation appears to dominate the distribution (see Figure <ref>d). We find a similar index of γ_1=0.627±0.045 for L_ 2500, phot. But γ_2 is found to be 0.165±0.162. However, the large uncertainty makes it difficult to draw any firm conclusion about the index of ν_ fwhm. The results for C iv and C iii], however, are significantly different. We find that γ_2 is -0.352±0.067 for C iv and -0.302±0.049 for C iii], both of which are negative, and opposite in sign to the indices found for Mg ii and Hβ and to the theoretical value. The negative γ_2 indicates that larger line-widths of C iv and C iii] correspond to weaker X-ray luminosity relative to UV. Considering the fact that the line-widths of C iv and C iii] do not only include components of virial velocity, but also a significant outflow velocity of the BLR (), the negative γ_2 is actually consistent with the anti-correlation found between the optical-to-X-ray spectral index (α_ ox: ) and the blueshift velocity of C iv (). It is well known that AGN properties exhibit different sets of correlations. The primary set is the so-called `Eigenvector-1' (), which is found to be driven by the Eddington ratio (e.g. ). The blueshift of C iv also belongs to Eigenvector-1, and a higher Eddington ratio corresponds to a larger blueshift velocity of this line (). In addition, it is found that a higher Eddington ratio corresponds to a lower ratio of X-ray emission relative to UV (e.g. ). Therefore, these previously known correlations with the Eddington ratio can naturally result in a negative γ_2 for C iv and C iii]. §.§ Comparing the Direct-Correlation Method with the Line-fitting Method The OUXCS and OUSCS produced by the direct-correlation method show that, compared to the underlying continuum, the primary emission lines have different correlations with the 2 keV and 2500 Å luminosities. This is consistent with the results found by fitting the lines, separating the line flux from the continuum, and then comparing their individual correlations. However, these two methods have fundamentally different characteristics. The line-fitting method can separate the emission lines from the continuum, and test their correlations separately with some other parameters (such as the 2 keV and 2500 Å luminosities), but this method may also introduce the following issues. Firstly, the uncertainty caused by the line profile fitting. There are various complexities in the line profile fitting.
For example, the broad and narrow components are blended; the broad Hβ, Fe ii and [O iii] are also blended; narrow absorption lines can be present with different velocity-shifts in C iv and C iii], although these absorption features are not visible in the mean spectrum in Figure <ref>d. These can all lead to a measurement uncertainty of the line flux. However, the results for individual lines in Section <ref> are not affected by the line decomposition because the total line flux is used.Secondly, the relative error of the line flux. The LINMIX algorithm is sensitive to the relative errors of the data points. Once the lines are separated from the continuum, the relative errors of the line fluxes increase, especially when the lines are weak. This can lead to different regression parameters. Thirdly, different correlations at different velocity shifts of one line. For example, C iv has a significant blue wing, but OUXCS does not show a similarly strong correlation in the blue side of the line. This means that the correlation of the entire line flux will be the mean value of the correlations for components present at all velocity shifts.In comparison, the direct-correlation method examines the correlation with a specific parameter (such as 2 keV luminosity or 2500Å luminosity) at each wavelength. At the wavelength of an emission line, the correlation given is the overall result of the emission line, the underlying continuum and nearby blended lines (such as the Fe ii lines). If the correlation of an emission line shows a similar trend to that of the continuum, but with a statistically stronger correlation, it appears as an `emission line' in the correlation spectrum as well, just like C iv, C iii], Mg ii, Hβ in OUXCS. Conversely, if the correlation of the line is different from or weaker than that of the continuum, it appears as an `absorption line' in the correlation spectrum, just like [O iii], C iv, Mg ii and the narrow Hβ in OUSCS. Therefore, the direct-correlation method can be used to study the difference in correlation between emission lines and the continuum without detailed spectral fitting.The advantage of this method can be reflected by the correlation parameters seen at the optical Fe ii complex in the OUXCS (Figures <ref> Panel-a2 and <ref>). Usually, it is very difficult to explore the correlation of Fe ii lines with other parameters unless they are very strong (e.g. in some super-Eddington NLS1s, ) and not blended with nearby lines (e.g. [O iii] λλ4959/5007). With the direct-correlation method, we can reveal the correlation at the Fe ii complex without line fitting or de-blending. However, the disadvantage is that if different lines in a certain waveband are mixed together, the result of direct correlation will contain the correlations of different lines, making it difficult to distinguish which line has a greater contribution.We can also apply the direct-correlation method to each emission line. Based on the line fitting described in Section <ref>, we first remove the best-fit local continuum and Fe ii lines, and then directly correlate the spectral lines with the 2 keV luminosity and 2500Å luminosity, respectively. Figure <ref> shows the direct correlation results for different emission lines (plotted in black in each panel). Firstly, by comparing with the composite line profile (plotted in orange), we find that the correlation between the narrow line component and 2 keV is weaker than that of the broad line component. 
Secondly, the direct correlation results of the line profiles are very different from the line profiles observed in OUXCS and OUSCS (plotted in light blue), which indicates that the shapes of OUXCS and OUSCS are the overall results of different spectral components. For example, when we look at Panels a1, a2, a3 and a4 separately, we find that the blue wing of C iv and C iii], as well as the narrow component of Mg ii and Hβ, are relatively weaker in OUXCS than those observed in the direct correlation line profiles, indicating that the correlations of these line components are more different from the continuum. Therefore, we can see that the direct correlation method is simple and straightforward without introducing the complexities of line fitting, and it also provides new clues for understanding the profile and origin of different lines which we discuss in more detail in the next section.§.§ Origins of Different Line Species and Line Components The direct-correlation line profiles in Figure <ref> show the correlation of emission line at different velocities. In comparison, the lines observed in OUXCS and OUSCS show if and how significant are their correlations (with 2 keV luminosity or 2500 Å luminosity) different from those of the continuum. We summarize these correlation properties and, whenever possible, explain them in terms of the AGN unified model () and the shape of the spectral energy distribution (SED) of the ionizing photons , thereby inferring the origin of different lines and line components.(1) By comparing the direct-correlation line profiles of 2 keV with 2500 Å in Figure <ref>, we find that the correlations of C iv, C iii], Mg ii, Hβ lines with the 2500 Å luminosity are systematically stronger than those with 2 keV luminosity.These results can be understood from two aspects. Firstly, the ionization potentials of C iv, C iii], [O iii] and Mg ii are 47.9 eV, 24.4 eV, 35.1 eV and 7.6 eV, respectively. These energies correspond to wavelengths of 259.5Å, 509.5Å, 354.2Å and 1635.7Å. Therefore, these lines should be correlated with the far-UV and X-ray emission. The fact that they are all well correlated with the 2500 Å luminosity suggests that the good optical/UV self-correlation seen in OUSCS-2 (Figure <ref> Panel-b2) should extend into the far-UV regime of at least a few hundreds of angstrom, which is likely also dominated by the accretion disc emission. Secondly, X-rays are easily absorbed and only account for a low proportion of the broadband spectral energy distribution (SED). For example, based on the classic flat equatorial distribution of BLR, X-rays may be significantly absorbed by the disc wind or the disc itself before reaching the BLR. Therefore, it is reasonable to expect that these BLR lines should correlate better with the UV continuum than X-ray.(2) By comparing the direct-correlation line profile (black) and composite line profile (orange) in each panel of Figure <ref>, we find that the correlation between the narrow component of the broad emission lines and 2 keV or 2500 Å is generally worse than that of the broad component. 
However, the direct-correlation line profile of Mg ii for 2500 Å is almost the same as the composite line profile, indicating that the narrow component of Mg ii also has a similarly good correlation with the 2500 Å luminosity. These results can be understood as the broad component being mainly produced by ionization by disc photons, while the narrow component may also include contamination from the host galaxy starlight or be affected by other factors (such as the covering factor), so the correlation of the narrow component with 2 keV and 2500 Å is worse. This is also consistent with the results of <cit.>. (3) By comparing the direct-correlation line profiles (black) in Panels a1 to a4 of Figure <ref> with OUXCS (light blue), we find that the correlation on the blue side of C iv is somewhat different from the continuum, and the central narrow components of Mg ii and Hβ show similar behaviours. We discuss these results together with the next point below. (4) By comparing the direct-correlation line profiles (black) and OUXCS (light blue) in Panels b1 to b4 of Figure <ref>, we find that although strong correlations with 2500 Å are observed across the entire wavelength range, the lines and their continua actually behave differently, but only in the cases of the high-ionization lines (i.e. C iv, C iii]) and the narrow components of the low-ionization lines (Hβ, Mg ii). This is also confirmed by the linear regression parameters given in Table <ref>. The OUSCS has shown that the optical/UV continuum is highly self-correlated. Then these strong correlations should be transmitted through the ionizing source to different emission lines. However, points 3 and 4 above indicate that the forms of correlation between different lines (or different line components) and their ionizing sources are not the same. This implies that there is more complexity in the connection between these emission line regions and their respective ionizing sources, such as differences in the covering factor. For example, high-ionization lines such as C iv are considered to be located closer to the ionizing source (e.g. ), and so are more sensitive to changes in the inner disc structure and wind (e.g. ). (5) Compared with the results above, [O iii] is a clear exception. OUXCS shows that its correlation with 2 keV is significantly better than that of the continuum and broad lines. This is consistent with previous studies of the [O iii] vs. X-ray correlation for various AGN samples (e.g. ). Meanwhile, OUSCS shows that the correlation between [O iii] and 2500 Å is significantly worse. However, in Figure <ref> and Table <ref>, the correlation between [O iii] and 2 keV is not significantly better. This is likely due to the additional uncertainties introduced by removing the best-fit Fe ii lines and the underlying continuum. Thus it also demonstrates the advantage of OUXCS, which is that it does not require spectral fitting. Compared with the narrow component of the broad lines, the distribution of the [O iii]-emitting gas may be more spherical, i.e. it may have a significantly larger covering factor with respect to the high-energy emission from the corona. As a result, the [O iii] lines indeed show stronger correlations with the X-rays than C iv, but weaker correlations with the optical/UV continua. (6) The optical Fe ii complex on both sides of Hβ seems to show a different correlation compared with, e.g., the broad Hβ. It is known that the intensity of optical Fe ii is strongly correlated with the Eddington ratio (i.e.
Eigenvector-1, ), so it does not correlate with the optical luminosity alone, but also depends on the black hole mass. Furthermore, the origin of optical Fe ii is still not clear (). It may not simply be due to photoionization, but may also involve collisional excitation in turbulent gas (e.g. ). Thus it is not surprising if the Fe ii lines show a weaker/different correlation with the continuum luminosity. The above explanations of the different line correlations are very preliminary, but still qualitatively consistent with the classic picture of the AGN unified model and the BLR. However, each line requires a more detailed analysis in order to fully understand its correlations and origin; this is beyond the scope of this work. § CONCLUSIONS In this study we assembled a large unobscured quasar sample covering a redshift range of 0.13 – 4.51, and applied the direct-correlation-spectrum method of <cit.> to examine the optical/UV and X-ray luminosity correlations at different optical/UV wavelengths. The main results are summarized below: * We presented two types of correlation spectra (OUXCS and OUSCS), and studied the wavelength dependences of the regression parameters (slope, intercept and dispersion). * We find that the correlations with the 2 keV and 2500 Å luminosities are very significant right across the waveband of 1280 – 5550 Å when the quality of the spectral data is high. The correlations with the UV continuum are stronger than with the optical. * We find that the regression slope of the correlation with the 2 keV luminosity is 0.6 – 0.7 and the intercept is 6 – 8 for the entire optical/UV continuum, which are fully consistent with the values found at 2500 Å. Therefore, we confirm that 2500 Å is indeed a good representative choice to study the optical/UV and X-ray inter-band correlations of quasars. * Our regression results are robust for different measurements of L_ 2500, such as the photometric luminosity, the spectroscopic luminosity and the spectral continuum luminosity. * The primary optical/UV emission lines (C iv, C iii], Mg ii, Hβ, [O iii] λλ4959/5007) all show good correlations with the 2 keV and 2500 Å luminosities, and the correlations with the 2500 Å luminosity are systematically better. The Baldwin effect has also been verified for these lines. These line correlations have different forms, and are different from those of their underlying continua, suggesting various complexities in the line-generation process. We provide preliminary explanations for these results within the standard disc-wind scenario. * We also performed 2D regression of the L_ 2 keV – L_ 2500 – ν_ fwhm plane for the C iv, C iii], Mg ii and Hβ lines separately. The inclusion of ν_ fwhm in the regression slightly reduces the dispersion of the L_ 2 keV – L_ 2500 relation. We find that for Mg ii and Hβ the index γ_2 of ν_ fwhm is positive and statistically consistent with the theoretical value. But the indices for C iv and C iii] are negative, which is consistent with previously known correlations with the Eddington ratio. * Compared with the line-fitting method, we demonstrated the advantages of the direct-correlation method, such as its model independence and its capability to reveal correlations with different velocity components present in an emission line's profile.
However, spectral fitting is still required if we want to separate the line correlation from the underlying continuum, and try to understand which spectral feature contributes the most to the correlation, especially in the wavebands where multiple spectral features are mixed together. The difference of correlation between the observed UV continuum and the standard disc model supports with the recent work of <cit.>. Since our sample contains more quasars with higher redshifts and larger black hole masses, we will present more detailed SED analysis on this sample to further investigate the issue of UV continuum. This will also include the presence of soft excesses in some cases which may influence the luminosities in both UV and X-ray bands. § ACKNOWLEDGEMENTSWe thank the anonymous referee for thorough reading and providing valuable comments and suggestions. CJ acknowledges the National Natural Science Foundation of China through grant 11873054, and the support by the Strategic Pioneer Program on Space Science, Chinese Academy of Sciences through grant XDA15052100. EL acknowledges the support of grant ID: 45780 Fondazione Cassa di Risparmio Firenze. CD acknowledges the Science and Technology Facilities Council (STFC) through grant ST/T000244/1 for support. MJ acknowledges support from a Leverhulme Emeritus Fellowship, EM-2021-064.This work is based on observations conducted by , an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA). This work is also based on observations made by the Chandra X-ray Observatory, as well as data obtained from the Chandra Data Archive. Funding for the SDSS and SDSS-II was provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS was managed by the Astrophysical Research Consortium for the Participating Institutions. § DATA AVAILABILITYThe data underlying this article are publicly available from the High Energy Astrophysics Science Archive Research Center (HEASARC) at <https://heasarc.gsfc.nasa.gov>, theScience Archive (XSA) at <https://www.cosmos.esa.int/web/xmm-newton/xsa>, the Chandra Data Archive at <https://cda.harvard.edu/chaser>, the Sloan Digital Sky Survey (SDSS) at <http://classic.sdss.org/dr7/products/spectra>, and the Strasbourg astronomical Data Center at<https://cds.u-strasbg.fr>.99 [Antonucci1993]Antonucci.1993 Antonucci R., 1993, ARA&A, 31, 473[Baldwin1977]Baldwin.1977 Baldwin J. A., 1977, ApJ, 214, 679[Baldwin et al.2004]Baldwin.2004 Baldwin J. A., Ferland G. J., Korista K. T., Hamann F., LaCluyzé A., 2004, ApJ, 615, 610[Bargiacchi et al.2022]bargiacchi2022 Bargiacchi G., Benetti M., Capozziello S., Lusso E., Risaliti G., Signorini M., 2022, MNRAS, 515, 1795. doi:10.1093/mnras/stac1941[Bisogni et al.2021]Bisogni.2021 Bisogni S., Lusso E., Civano F., Nardini E., Risaliti G., Elvis M., Fabbiano G., 2021, A&A, 655, A109[Boroson & Green1992]Boroson.1992 Boroson T. A., Green R. F., 1992, ApJS, 80, 109[Chiaraluce et al.2018]Chiaraluce.2018 Chiaraluce E., Vagnetti F., Tombesi F., Paolillo M., 2018, A&A, 619, A95[Davis & Laor2011]Davis.2011 Davis S. W., Laor A., 2011, ApJ, 728, 98[Dietrich et al.2002]Dietrich.2002 Dietrich M., Hamann F., Shields J. C., Constantin A., Vestergaard M., Chaffee F., Foltz C. 
B., et al., 2002, ApJ, 581, 912[Done et al.2012]Done.2012 Done C., Davis S. W., Jin C., Blaes O., Ward M., 2012, MNRAS, 420, 1848[Done et al.2013]Done.2013 Done C., Jin C., Middleton M., Ward M., 2013, MNRAS, 434, 1955[Du & Wang2019]Du.2019 Du P., Wang J.-M., 2019, ApJ, 886, 42[Fitzpatrick1999]Fitzpatrick.1999 Fitzpatrick E. L., 1999, PASP, 111, 63[Fitzpatrick & Massa2007]Fitzpatrick.2007 Fitzpatrick E. L., Massa D., 2007, ApJ, 663, 320[Gardner & Done2017]Gardner.2017 Gardner E., Done C., 2017, MNRAS, 470, 3591.[Gaskell1982]Gaskell.1982 Gaskell C. M., 1982, ApJ, 263, 79[Grier et al.2019]Grier.2019 Grier C. J., Shen Y., Horne K., Brandt W. N., Trump J. R., Hall P. B., Kinemuchi K., et al., 2019, ApJ, 887, 38[Grupe2004]Grupe.2004 Grupe D., 2004, AJ, 127, 1799[Grupe et al.2010]Grupe.2010 Grupe D., Komossa S., Leighly K. M., Page K. L., 2010, ApJS, 187, 64[Heckman et al.2005]Heckman.2005 Heckman T. M., Ptak A., Hornschemeier A., Kauffmann G., 2005, ApJ, 634, 161[Hu et al.2015]Hu.2015 Hu C., Du P., Lu K.-X., Li Y.-R., Wang F., Qiu J., Bai J.-M., et al., 2015, ApJ, 804, 138[Jin et al.2012a]Jin.2012a Jin C., Ward M., Done C., Gelbord J., 2012a, MNRAS, 420, 1825[Jin, Ward & Done2012b]Jin.2012b Jin C., Ward M., Done C., 2012b, MNRAS, 422, 3268[Jin, Ward & Done2012c]Jin.2012c Jin C., Ward M., Done C., 2012c, MNRAS, 425, 907[Jin et al.2017]Jin.2017b Jin C., Done C., Ward M., Gardner E., 2017, MNRAS, 471, 706[Jin et al.2022]Jin.2022 Jin C., Done C., Ward M., Panessa F., Liu B., Liu H., 2022, MNRAS, in print, arXiv:2208.06581[Just et al.2007]Just.2007 Just D. W., Brandt W. N., Shemmer O., Steffen A. T., Schneider D. P., Chartas G., Garmire G. P., 2007, ApJ, 665, 1004[Kelly2007]Kelly.2007 Kelly B. C., 2007, ApJ, 665, 1489[Kubota & Done2019]Kubota.2019 Kubota A., Done C., 2019, MNRAS, 489, 524[Lamastra et al.2009]Lamastra.2009 Lamastra A., Bianchi S., Matt G., Perola G. C., Barcons X., Carrera F. J., 2009, A&A, 504, 73[Lawrence et al.1997]Lawrence.1997 Lawrence A., Elvis M., Wilkes B. J., McHardy I., Brandt N., 1997, MNRAS, 285, 879[Leighly2004]Leighly.2004 Leighly K. M., 2004, ApJ, 611, 125[Liu & Qiao2022]Liu.2022 Liu B. F., Qiao E., 2022, iSci, 25, 103544[Lusso et al.2010]Lusso.2010 Lusso E., Comastri A., Vignali C., Zamorani G., Brusa M., Gilli R., Iwasawa K., et al., 2010, A&A, 512, A34[Lusso & Risaliti2016]Lusso.2016 Lusso E., Risaliti G., 2016, ApJ, 819, 154[Lusso & Risaliti2017]Lusso.2017 Lusso E., Risaliti G., 2017, A&A, 602, A79[Lusso et al.2020]Lusso.2020 Lusso E., Risaliti G., Nardini E., Bargiacchi G., Benetti M., Bisogni S., Capozziello S., et al., 2020, A&A, 642, A150[Lusso et al.2021]Lusso.2021 Lusso E., Nardini E., Bisogni S., Risaliti G., Gilli R., Richards G. T., Salvestrini F., et al., 2021, A&A, 653, A158[Marziani et al.2003]Marziani.2003 Marziani P., Zamanov R. K., Sulentic J. W., Calvani M., 2003, MNRAS, 345, 1133[Melia2019]melia2019 Melia F., 2019, MNRAS, 489, 517. doi:10.1093/mnras/stz2120[Milaković et al.2021]milakovic2021 Milaković D., Webb J. K., Lee C.-C., Zavarygin E. O., 2021, A&A, 655, A53. doi:10.1051/0004-6361/202141392[Mitchell et al.2023]Mitchell.2023 Mitchell J. A. J., Done C., Ward M. J., Kynoch D., Hagen S., Lusso E., Landt H., 2023, MNRAS, 524, 1796[Mushotzky, Done & Pounds1993]Mushotzky.1993 Mushotzky R. F., Done C., Pounds K. A., 1993, ARA&A, 31, 717[Novikov & Thorne1973]Novikov.1973 Novikov I. D., Thorne K. S., 1973, blho.conf, 343[Panessa et al.2006]Panessa.2006 Panessa F., Bassani L., Cappi M., Dadina M., Barcons X., Carrera F. J., Ho L. 
C., et al., 2006, A&A, 455, 173[Rakshit, Stalin & Kotilainen2020]Rakshit.2020 Rakshit S., Stalin C. S., Kotilainen J., 2020, ApJS, 249, 17[Risaliti & Lusso2015]Risaliti.2015 Risaliti G., Lusso E., 2015, ApJ, 815, 33[Risaliti & Lusso2019]Risaliti.2019 Risaliti G., Lusso E., 2019, NatAs, 3, 272[Salvestrini et al.2019]Salvestrini.2019 Salvestrini F., Risaliti G., Bisogni S., Lusso E., Vignali C., 2019, A&A, 631, A120[Schlegel, Finkbeiner & Davis1998]Schlegel.1998 Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525[Schmidt1968]Schmidt.1968 Schmidt M., 1968, ApJ, 151, 393[Signorini et al.2023]signorini23 Signorini M., Risaliti G., Lusso E., Nardini E., Bargiacchi G., Sacchi A., Trefoloni B., 2023, arXiv, arXiv:2306.16438. doi:10.48550/arXiv.2306.16438[Shakura & Sunyaev1973]Shakura.1973 Shakura N. I., Sunyaev R. A., 1973, A&A, 500, 33[Sulentic et al.2007]Sulentic.2007 Sulentic J. W., Bachev R., Marziani P., Negrete C. A., Dultzin D., 2007, ApJ, 666, 757[Tananbaum et al.1979]Tananbaum.1979 Tananbaum H., Avni Y., Branduardi G., Elvis M., Fabbiano G., Feigelson E., Giacconi R., et al., 1979, ApJL, 234, L9[Timlin et al.2020]Timlin.2020 Timlin J. D., Brandt W. N., Ni Q., Luo B., Pu X., Schneider D. P., Vivek M., et al., 2020, MNRAS, 492, 719[Ueda et al.2015]Ueda.2015 Ueda Y., Hashimoto Y., Ichikawa K., Ishino Y., Kniazev A. Y., Väisänen P., Ricci C., et al., 2015, ApJ, 815, 1[Vagnetti, Antonucci & Trevese2013]Vagnetti.2013 Vagnetti F., Antonucci M., Trevese D., 2013, A&A, 550, A71[Vanden Berk et al.2001]Vanden.2001 Vanden Berk D. E., Richards G. T., Bauer A., Strauss M. A., Schneider D. P., Heckman T. M., York D. G., et al., 2001, AJ, 122, 549[Verner et al.1999]Verner.1999 Verner E. M., Verner D. A., Korista K. T., Ferguson J. W., Hamann F., Ferland G. J., 1999, ApJS, 120, 101[Vestergaard & Wilkes2001]Vestergaard.2001 Vestergaard M., Wilkes B. J., 2001, ApJS, 134, 1[Vasudevan & Fabian2007]Vasudevan.2007 Vasudevan R. V., Fabian A. C., 2007, MNRAS, 381, 1235[Vignali, Brandt & Schneider2003]Vignali.2003 Vignali C., Brandt W. N., Schneider D. P., 2003, AJ, 125, 433[Wilkes1984]Wilkes.1984 Wilkes B. J., 1984, MNRAS, 207, 73[Wilkes, Elvis & McHardy1987]Wilkes.1987 Wilkes B. J., Elvis M., McHardy I., 1987, ApJL, 321, L23[Wu & Shen2022]ws2022 Wu Q., Shen Y., 2022, ApJS, 263, 42. doi:10.3847/1538-4365/ac9ead[Young, Elvis & Risaliti2010]young2010 Young M., Elvis M., Risaliti G., 2010, ApJ, 708, 1388. doi:10.1088/0004-637X/708/2/1388[Zdziarski, Johnson & Magdziarz1996]Zdziarski.1996 Zdziarski A. A., Johnson W. N., Magdziarz P., 1996, MNRAS, 283, 193[Zdziarski, Lubiński & Smith1999]Zdziarski.1999 Zdziarski A. A., Lubiński P., Smith D. A., 1999, MNRAS, 303, L11[Zheng & Malkan1993]Zheng.1993 Zheng W., Malkan M. A., 1993, ApJ, 415, 517§ SUPPLEMENTARY FIGURES | http://arxiv.org/abs/2310.17866v1 | {
"authors": [
"Chichuan Jin",
"Elisabeta Lusso",
"Martin Ward",
"Chris Done",
"Riccardo Middei"
],
"categories": [
"astro-ph.GA",
"astro-ph.HE"
],
"primary_category": "astro-ph.GA",
"published": "20231027030352",
"title": "Wavelength Dependences of the Optical/UV and X-ray Luminosity Correlations of Quasars"
} |
A contribution to the mathematical theory of diffraction Part II. Recovering the far-field asymptotics of the quarter-plane problem. Raphaël C. Assier^*, Andrey V. Shanin^† and Andrey I. Korolkov^* ^* Department of Mathematics, University of Manchester, Oxford Road, Manchester, M13 9PL, UK ^† Department of Physics (Acoustics Division), Moscow State University, Leninskie Gory, 119992, Moscow, Russia January 14, 2024 We apply the stationary phase method developed in (Assier, Shanin & Korolkov, QJMAM, 76(1), 2022) to the problem of wave diffraction by a quarter-plane. The wave field is written as a double Fourier transform of an unknown spectral function. We make use of the analytical continuation results of (Assier & Shanin, QJMAM, 72(1), 2018) to uncover the singularity structure of this spectral function. This allows us to provide a closed-form far-field asymptotic expansion of the field by estimating the double Fourier integral near some special points of the spectral function. All the known results on the far-field asymptotics of the quarter-plane problem are recovered, and new mathematical expressions are derived for the secondary diffracted waves in the plane of the scatterer. § INTRODUCTION The present article (Part II) is a continuation of <cit.> (Part I), in which a general mathematical framework was developed for the asymptotic evaluation of three-dimensional wave fields u given by double integrals of the type u(x) = ∬_ F(ξ)exp{-i r G(ξ;x̃)}dξ, where x ≡ (x_1,x_2,x_3)∈ℝ^3 and ξ ≡ (ξ_1,ξ_2)∈ℂ^2 represent the physical and spectral variables respectively. The scalar r is defined as r=|x|, the unit `observation direction' vector x̃ is given by x̃=x/r, and dξ is understood to be dξ_1 ∧ dξ_2. An important special case of (<ref>) is that of two-dimensional wave fields given by u(x_1, x_2) = ∬_ F(ξ)exp{-i(x_1 ξ_1+x_2 ξ_2)}dξ. In what follows, we will refer to (<ref>) and (<ref>) as Fourier-type integrals and Fourier integrals respectively. Most of Part I was dedicated to Fourier integrals, but Fourier-type integrals were also considered in some detail in Appendix B[Sections, equation numbers, or theorems written in magenta refer to Part I.]. Except for their singular sets, the functions F and G are assumed to be holomorphic functions of ξ in a neighbourhood of ℝ^2, and assumed to grow at most algebraically at infinity. The surface of integration coincides with ℝ^2 almost everywhere, except near the singularity sets of F and G where it is slightly indented. The indentation procedure is not straightforward in ℂ^2 and is discussed in detail in <cit.>, where the process of surface deformation is described by the bridge and arrow notation (first introduced in <cit.>). The key result of <cit.> is that the asymptotic behaviour of (<ref>) as r→∞ or of (<ref>) as x_1^2 + x_2^2→∞ is determined by local integration in the neighbourhoods of several special points of F or G.
It is proved that outside the neighbourhoods of such points the surface of integration can be deformed in such a way that the integral is exponentially decaying on it as x_1^2 + x_2^2→∞ (or r→∞). The latter is referred to as the locality principle (firstintroduced, for 2D complex analysis, in <cit.>). The leading terms of the asymptotic estimations of u can then be found by computing some simple integrals. In Part II we apply the general results of Part I to the specific example of the problem of diffraction by a quarter-plane. This is a well-known canonical diffraction problem, which was approached by many researchers <cit.>. A good review on the subject can be found in <cit.>. Nevertheless, the task of finding a closed-form solution for the problem of diffraction by a quarter-plane remains open. Innovative two dimensional complex analysis techniques that are developed in <cit.> seem to be promising in that regard. Of specific interest to the present work are <cit.>,<cit.>and <cit.> where the authors tried to find the far-field asymptotics of the diffracted field. The geometrical theory of diffraction was used in <cit.>, aSommerfeld integral approach was used in <cit.>, and ray asymptotics on a sphere with a cut (Smyshlyaev's method) were used in <cit.>. In Part II we recover all the results of these papers, and provide additional formulae for the secondary diffracted field in the plane of the scatterer that, to our knowledge, have never been published before.As is customary, the quarter-plane problem is reduced to a 2D Wiener-Hopf equation involving two unknown spectral functions of two complex variables. As a result, the wavefield can be written as a Fourier-type integral of the form (<ref>).The appropriate function F is analytically continued into certain complex domains using the analytical continuation formulae derived in <cit.>. These formulae provide all the information about the singular set of F needed to asymptotically estimate the wave field using the methods of Part I.The rest of the article is organised as follows. In section <ref>, we provide a mathematical formulation of the problem of diffraction of a plane wave by a quarter-plane with Dirichlet boundary conditions. We indicate that some angles of incidence do not lead to secondary diffracted waves, and some do. We refer to the associated problems as“simple” and “complicated”, respectively. We introduce the 2D Wiener-Hopf equation in section <ref>. We recall some results of <cit.> and write down the analytical continuation formulae in section <ref>. Section <ref> is dedicated to the simple case. We briefly sketch the scheme of the stationary phase method as it was described in Part I (section <ref>), we study the singularities of the relevant spectral functions (section <ref>) and the indentation of the surface of integration (section <ref>), we find the special points (sections <ref>–<ref>) and we build the far-field asymptotics (sections <ref>–<ref>). A similar approach is followed in section <ref> for the complicated case. We recover all the known asymptotic formulae for the quarter-plane problem, and, in addition, we obtain new formulae for the secondary diffracted waves. § FORMULATION, WIENER-HOPF EQUATION, AND ANALYTIC CONTINUATION FORMULAE§.§ Formulation of the diffraction problem. “Simple” and “complicated” cases We consider the problem of wave diffraction by a quarter-plane (QP) subjected to an incident plane wave and Dirichlet boundary conditions. 
We make the time-harmonic hypothesis, with the e^- i ω t convention, and time considerations are henceforth suppressed. The problem reduces to finding the total wave field u^t satisfying the Helmholtz equation and the Dirichlet boundary condition Δ u^t + k^2 u^t = 0, u^t |_QP = 0, u^t = u^in + u, where u will be referred to as the scattered field. The QP and the incident wave u^in are defined by QP = {x∈ℝ^3, x_1 ⩾ 0, x_2 ⩾ 0, x_3 = 0 }, u^in = e^i (k_1 x_1 + k_2 x_2 + k_3 x_3), where x = (x_1 , x_2 , x_3) and k_1 = - k sin (θ_0) cos (φ_0), k_2 = - k sin (θ_0) sin (φ_0), k_3 = - √(k^2 - k_1^2 - k_2^2) = - k cos (θ_0), for real spherical incident angles θ_0 and φ_0. Instead, one can formulate the problem for the scattered field u. The latter satisfies the Helmholtz equation and obeys the inhomogeneous Dirichlet boundary conditions u(x_1, x_2, 0)|_QP = - e^ i (k_1 x_1 + k_2 x_2). One can see from this formulation that u(x) is symmetric with respect to the plane x_3 = 0. For this reason, we will restrict our study to x_3≥0. As it is known (see e.g. <cit.>), the problem formulation should be accompanied by Meixner's conditions at the edges and at the vertex, and by a radiation condition at infinity. Meixner's conditions are formulated as the requirement of local integrability of the energy-like combination |∇ u^t|^2 + |u^t|^2. These conditions prevent the appearance of unphysical sources located at the edges or at the vertex. The radiation condition is more complicated to formulate and it should be discussed in detail. Usually, this condition is formulated in the form of the limiting absorption principle. Assume that the wavenumber parameter is represented as k = k_0 + i ϰ, where k_0 and ϰ are the real and imaginary parts of k, k_0 is positive real, and ϰ is positive but small. The value ϰ corresponds to some absorption in the medium. Fix the value k_0 and indicate the dependence of the solution on ϰ by u^t(x ; ϰ). The limiting absorption principle states that, for ϰ>0, parts of the field decay exponentially as the distance from the vertex tends to infinity. For real k = k_0, one should consider the limit u^t(x) = lim_ϰ→ 0 u^t(x, ϰ) of the solution. This scheme is well justified, but sometimes it is difficult to find which part of the wave field should decay. Note that the incident wave grows exponentially as the observation point moves in the direction of incidence. Let us first consider the simple case by restricting the incident angles to θ_0 ∈ (0, π / 2) and φ_0 ∈ (π, 3 π / 2), ensuring that Re[k_1] > 0 and Re[k_2] > 0. This case was considered in detail in <cit.> and the geometry of the resulting problem is shown in <ref>, left. In this simple case, the total far-field is composed of the incident plane wave, a reflected plane wave, conical waves emanating from the edges of the quarter plane (the so-called primary diffracted waves), and a spherical wave emanating from the vertex (see <ref> for an illustration). There are also penumbral wave fields near the boundaries of the domains occupied by the reflected and the edge diffracted waves. In particular, we note that in the simple case there are no waves that are diffracted by the edges twice (the so-called secondary diffracted waves). Moreover, for ϰ>0, only the incident wave grows at infinity. The edge diffracted waves are decaying, since the projections of the vector (k_1 , k_2 , k_3) onto the edges, i.e.
k_1 and k_2, have positive imaginary parts. The reflected wave is also decaying. Thus, one can formulate the radiation condition as follows: the scattered field u should be exponentially decaying at infinity. Let us now introduce the complicated case corresponding to θ_0 ∈ (0, π / 2) and φ_0 ∈ (0, π / 2), implying that Re[k_1]<0 and Re[k_2]<0. The geometry of this case is shown in <ref>, right. In addition to the far-field waves present in the simple case, we note the presence of secondary diffracted waves in this case (as illustrated in <ref>). Moreover, in this case, for ϰ>0, the reflected wave and the edge diffracted waves also grow exponentially at infinity. Thus, one should extract all these waves (with the penumbral zones) from u^t, and require that the remaining part of the field is exponentially decaying. This is quite complicated and impractical; thus, it would be preferable to have an alternative formulation of the radiation condition. A safe way to formulate the radiation condition for the complicated QP problem is considered in <cit.>. Instead of a plane wave incidence, one should consider a point source located at the point x^s = (x_1^s , x_2^s , x_3^s) = R (sin(θ_0) cos(φ_0) , sin(θ_0) sin(φ_0) , cos(θ_0)) for some large R and choose the strength A of the source to be A = - 4 π R e^- i k R to compensate for the phase shift and the decay of the incident field. Upon denoting by u^t(x ; ϰ , R) the total field resulting from such a source, we require that for each finite R the field u^t(x ; ϰ , R) decays exponentially as |x| →∞. Then u^t(x ; ϰ) for the plane wave incidence is defined as u^t(x ; ϰ) = lim_R →∞ u^t(x ; ϰ , R). This formulation is mathematically correct[This is a non-trivial theorem, since one should prove the existence of the limit.], but inconvenient in practice (the problems with point sources are more complicated than the incident plane wave ones). In this article, we propose another way to formulate the radiation condition for the complicated case. It starts by considering the diffraction problem for the simple case for ϰ>0. The solution depends on the parameters k_1 and k_2 that describe the incident wave. For the simple case, these parameters both have a positive real part and a small positive imaginary part. We claim that the solution depends analytically on the parameters k_1 and k_2, and thus it remains valid after an analytical continuation as k_1 and k_2 are moved along some continuous contours. In order to get the complicated case solution, one should continuously change k_1 and k_2 until they both have negative real parts and small negative imaginary parts. The analytical dependency of the solution u on k_1 and k_2 is a conjecture and, formally, one should prove the agreement between the formulation based on the point source described above and the analytical continuation. We will not focus on this proof in the present work. However, one can think of two indirect confirmations that the formulation based on the analytical continuation should be correct: a. This is a boundary value problem with boundary data (<ref>) that depends analytically on the parameters k_1 and k_2, and one would expect that the solution should also depend analytically on these parameters. b. In what follows, we use this analytic continuation approach to estimate the far-field components for the complicated case. The wave components found in this way agree with the GTD approach <cit.> and Smyshlyaev's method <cit.>, and no physically prohibited components appear.
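As a quick illustration of the case classification used throughout (our own summary of the sign conditions above, not part of the original text), the character of the incidence can be read off directly from the signs of k_1 and k_2:

```python
import numpy as np

def incidence_case(theta0, phi0, k=1.0):
    """Classify the incidence from k1 = -k sin(theta0) cos(phi0), k2 = -k sin(theta0) sin(phi0).

    k1 > 0 and k2 > 0  -> 'simple'       (phi0 in (pi, 3*pi/2)),
    k1 < 0 and k2 < 0  -> 'complicated'  (phi0 in (0, pi/2)),
    otherwise          -> 'intermediate' (only one edge produces a secondary wave).
    """
    k1 = -k * np.sin(theta0) * np.cos(phi0)
    k2 = -k * np.sin(theta0) * np.sin(phi0)
    if k1 > 0 and k2 > 0:
        return "simple"
    if k1 < 0 and k2 < 0:
        return "complicated"
    return "intermediate"

# For instance, theta0 = pi/4, phi0 = 5*pi/4 falls in the simple case.
```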
Besides the simple and complicated cases, one can study an intermediate case, say θ_0 ∈ (0, π / 2) and φ_0 ∈ (π / 2 , π), for which Re[k_1]>0 and Re[k_2]<0, and only one edge leads to a secondary diffracted wave. We will not focus on this, since the methods developed for the complicated case can easily be applied to the intermediate case. §.§ Spectral functions and the 2D Wiener–Hopf equation In this section we reproduce briefly the double Fourier transform manipulations described in <cit.>. Let us consider the simple case (<ref>) and assume that ϰ > 0. We introduce the double Fourier transform operator 𝔉 defined for any function ϕ by 𝔉 [ϕ (x_1, x_2)] (ξ) = ∬_ℝ^2 ϕ (x_1 , x_2) e^i(x_1 ξ_1 + x_2 ξ_2) dx_1 dx_2, ξ≡ (ξ_1 , ξ_2), and the spectral functions U(ξ) and W(ξ) defined by U(ξ)=𝔉 [u (x_1 , x_2 , 0^+; ϰ)] (ξ), W(ξ)=𝔉 [∂_x_3 u (x_1 , x_2 , 0^+; ϰ) ] (ξ). The dependence of U and W on ϰ is implied but not indicated. Note that the integrals (<ref>) and (<ref>) converge due to the limiting absorption principle for the simple case. Upon introducing the kernel K (ξ) = 1/√(k^2 - ξ_1^2 - ξ_2^2), and assuming that U and W are known, the wave field u(x ; ϰ) can be reconstructed as u(x ; ϰ) = 1/4 π^2 ∬_ℝ^2 U(ξ) e^-i ( ξ_1 x_1 + ξ_2 x_2 - |x_3| / K(ξ)) dξ. The square root in K is chosen to have a positive imaginary part and (<ref>) can be interpreted as a plane wave decomposition in the domains x_3 > 0 and x_3 < 0. The differential dξ denotes simply dξ_1 dξ_2 while ξ belongs to the real plane ℝ^2, and becomes dξ = dξ_1 ∧ dξ_2 when we start to consider complex integration surfaces. The representation (<ref>) enables one to link U and W by the functional equation: K (ξ) W (ξ) = i U (ξ), and to write down an alternative representation of u(x , ϰ): u(x ; ϰ) = -i/4 π^2 ∬_ℝ^2 K(ξ) W(ξ) e^-i ( ξ_1 x_1 + ξ_2 x_2 - |x_3| / K(ξ)) dξ. Similarly, the normal derivative of the field on the plane x_3 = 0 can be written: ∂_x_3 u(x_1 , x_2 , 0^+ ; ϰ) = 1/4 π^2 ∬_ℝ^2 W(ξ) e^-i ( ξ_1 x_1 + ξ_2 x_2 ) dξ. The relation (<ref>) plays an important role and is actually a 2D Wiener–Hopf functional equation. Indeed, the function u(x_1 , x_2 , 0^+; ϰ) is known on the quarter-plane and is unknown on the remaining 3/4-plane. Conversely, due to the symmetry of u, the function ∂_x_3 u(x_1 , x_2 , 0^+ ; ϰ) is unknown on the quarter-plane, and is zero on the remaining 3/4-plane. Thus, (<ref>) links the Fourier transforms of functions that are unknown on non-overlapping domains whose union is the whole x_3=0 plane. §.§ Analytical continuation formulae A Wiener–Hopf formulation of the type (<ref>) may potentially lead to a solution of the diffraction problem. Unfortunately, to date, no rigorous solution of this 2D Wiener–Hopf problem is known <cit.>. Instead, some useful properties for the Wiener–Hopf problem have been derived in <cit.>. They are the so-called analytical continuation formulae[Note that the quarter-plane is not the only problem for which such formulae can be derived. Similar formulae were also found for the case of a no-contrast right-angled penetrable wedge <cit.>.]. These formulae are integral representations of the unknown function W(ξ) defining it in certain complex domains. We base our further consideration on these representations. Here we list the main results of <cit.> that are used in the current paper.
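Before doing so, it may help to record a short computation of our own (a consistency check rather than part of the original derivation), which shows where the elementary polar factors appearing below come from. The quarter-range Fourier transform of the known Dirichlet data (<ref>) is elementary: ∬_x_1>0, x_2>0 - e^i (k_1 x_1 + k_2 x_2) e^i(x_1 ξ_1 + x_2 ξ_2) dx_1 dx_2 = - (i/(ξ_1 + k_1)) (i/(ξ_2 + k_2)) = 1/((ξ_1 + k_1)(ξ_2 + k_2)), the integral converging for Im[ξ_1 + k_1] > 0 and Im[ξ_2 + k_2] > 0. This is the part of U(ξ) that is known a priori, and it is consistent with the poles at ξ_1 = -k_1 and ξ_2 = -k_2 that appear in the formulae and in the singularity analysis below.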
Before moving further, we need to note that the kernel K admits some factorisations given by K (ξ_1, ξ_2) = 1/(γ (ξ_1, ξ_2) γ (ξ_1, - ξ_2)) = 1/(γ (ξ_2, ξ_1) γ (ξ_2, - ξ_1)), where the function γ is given by γ (ξ_1, ξ_2) = √(√(k^2 - ξ_1^2) + ξ_2). It is also useful to define the sets H^±, Ĥ^± and P as follows. H^+ and H^- are domains of a single complex variable, defined as the upper and lower half-planes cut along the curves h^+ and h^- defined as follows (see <ref>, left): h^± = {ξ∈ℂ s.t. ξ = ±√(k^2 - τ^2) for τ∈ℝ}. Ĥ^+ is the upper half-plane, which is not cut along h^+, and Ĥ^- is the lower half-plane, which is not cut along h^-. These domains are all open, i.e. the boundary is not included. Denote the contour along h^- as P (see <ref>, right). The boundary of H^- is therefore ∂H^- = ℝ∪ P, and the boundary of H^+ is ∂H^+ = ℝ∪ (-P). Consider the formula (<ref>). Initially it defines the function W(ξ) on the real plane: ξ∈ℝ^2. However, since the function ∂_x_3 u(x_1 , x_2 , 0^+; ϰ) being transformed is non-zero only in the first quadrant of the (x_1 , x_2) plane, the integral converges for any ξ with Im[ξ_1] > -ϵ and Im[ξ_2] > -ϵ, where ϵ can be chosen according to 0 < ϵ < ϰ cos(θ_0) min (cos(φ_0) , sin (φ_0)). This is based on a rough estimation of the decay of the wave field in the plane x_3 = 0 due to the losses. In other words, the integral (<ref>) converges in Ĥ^+ ×Ĥ^+ and in some neighbourhood of the boundary of the real plane[In the notation Ĥ^+ ×Ĥ^+, the set before “×” is related to the ξ_1 complex plane, and the set after “×” is related to the ξ_2 complex plane.]. In <cit.> the authors pursued the aim of continuing W to the remaining parts of the complex space of ξ. We formulate the main results of <cit.> in the form of two propositions. The following integral representations for W are valid: W(ξ_1 , ξ_2) = i γ(ξ_1, ξ_2) γ(ξ_1 , k_2)/((ξ_1 + k_1)(ξ_2 + k_2)) + γ(ξ_1 , ξ_2)/4π^2 ∫_-∞^∞ dξ_2' ∫_-∞^∞ dξ_1' γ(ξ_1 , - ξ_2') K(ξ_1' , ξ_2') W(ξ_1' , ξ_2') / ((ξ_1' - ξ_1)(ξ_2' - ξ_2)), W(ξ_1 , ξ_2) = i γ(ξ_2, ξ_1) γ(ξ_2 , k_1)/((ξ_1 + k_1)(ξ_2 + k_2)) + γ(ξ_2 , ξ_1)/4π^2 ∫_-∞^∞ dξ_1' ∫_-∞^∞ dξ_2' γ(ξ_2 , - ξ_1') K(ξ_1' , ξ_2') W(ξ_1' , ξ_2') / ((ξ_1' - ξ_1)(ξ_2' - ξ_2)). The formula (<ref>) defines W analytically in the domain (H^- ∖{ - k_1}) ×Ĥ^+, while the formula (<ref>) defines W analytically in the domain Ĥ^+ × (H^- ∖{ - k_2}). The following integral representations (<ref>) and (<ref>) for W are valid in Ĥ^- × (Ĥ^+ ∪ H^-) and (Ĥ^+ ∪ H^-) ×Ĥ^-, respectively: W(ξ_1 , ξ_2) = i γ(ξ_1 , ξ_2)γ(ξ_1 , k_2)γ(k_2 , k_1) / ((ξ_1 + k_1)(ξ_2 + k_2) γ(k_2 , -ξ_1)) + γ(ξ_1 , ξ_2)/4π^2 J_1(ξ_1 , ξ_2), J_1 (ξ_1 , ξ_2) ≡∫_P dξ_2' ∫_-∞^∞ dξ_1' γ(ξ_1 , -ξ_2') K(ξ_1', ξ_2') W(ξ_1' , ξ_2') / ((ξ_1' - ξ_1)(ξ_2' - ξ_2)), W(ξ_1 , ξ_2) = i γ(ξ_2 , ξ_1)γ(ξ_2 , k_1)γ(k_1 , k_2) / ((ξ_1 + k_1)(ξ_2 + k_2) γ(k_1 , -ξ_2)) + γ(ξ_2 , ξ_1)/4π^2 J_2(ξ_1 , ξ_2), J_2 (ξ_1 , ξ_2) ≡∫_P dξ_1' ∫_-∞^∞ dξ_2' γ(ξ_2 , -ξ_1') K(ξ_1', ξ_2') W(ξ_1' , ξ_2') / ((ξ_1' - ξ_1)(ξ_2' - ξ_2)). Note that to obtain the formulae of Proposition <ref>, we need to use Proposition <ref>, since the integrals J_1 and J_2 require that W is defined on ℝ× P and P ×ℝ. The formulae (<ref>) and (<ref>) will be extremely important for the rest of the article, so let us analyse them briefly. They each have an integral term and a non-integral term on the right-hand side (RHS). The non-integral terms are easy to deal with, since they are products of elementary functions. They are defined everywhere as analytic functions with branch and polar singularities. Due to their simplicity, it is possible to extract their local behaviour near those singularities.
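For instance, the non-integral term of the first of these representations can be written down and evaluated directly. The short Python sketch below is our own illustration; numpy's principal square roots merely stand in for the precise branch choices discussed in the next paragraphs, so it is indicative rather than definitive.

```python
import numpy as np

def gamma_fn(a, b, k):
    """gamma(a, b) = sqrt(sqrt(k^2 - a^2) + b), using numpy's principal branches.

    The paper fixes the branch of sqrt(k^2 - a^2) so that, for real a, its values are
    close to positive real or positive imaginary as the absorption vanishes; the
    principal branch used here is only a stand-in for that choice.
    """
    return np.sqrt(np.sqrt(k**2 - a**2 + 0j) + b)

def W_nonintegral(xi1, xi2, k, k1, k2):
    """Elementary (non-integral) part of the continuation formula valid in H^-hat x (H^+hat U H^-):

        i * gamma(xi1, xi2) * gamma(xi1, k2) * gamma(k2, k1)
        / ((xi1 + k1) * (xi2 + k2) * gamma(k2, -xi1)).

    Its poles at xi1 = -k1, xi2 = -k2 and its branch sets can be read off from the
    elementary factors, which is what the far-field analysis below exploits.
    """
    g = gamma_fn
    return (1j * g(xi1, xi2, k) * g(xi1, k2, k) * g(k2, k1, k)
            / ((xi1 + k1) * (xi2 + k2) * g(k2, -xi1, k)))
```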
Conversely, the integral terms contain the unknown function W(ξ), so we can only make general conclusions about them. However, below, we show that most of the far-field asymptotic wave components of u (all components, except the spherical wave diffracted by the vertex of the QP) arise solely due to the non-integral terms of (<ref>) and (<ref>). Let us start by studying the function J_1(ξ) defined by (<ref>). The variables ξ_1 and ξ_2 can take values in the domains whose boundaries are the contours of integration, i.e. ξ_1 belongs to Ĥ^-, and ξ_2 belongs to Ĥ^+ ∪ H^- (this is the whole plane cut along h^-). The integral (<ref>) defines a two-sheeted function on this domain, having a branch set at ξ_1 = -k. This branch set is a complex line with real dimension 2 and comes from the inner square root of the factor γ(ξ_1 , - ξ_2'). The two sheets of J_1 will be referred to as the physical and unphysical sheets. The resulting physical sheet of W is the one used in the formulae (<ref>) and (<ref>). It corresponds to the following choice of the square root √(k^2 - ξ_1^2) in γ: when ξ_1 is real, the values of the square root should be close to positive real or to positive imaginary values as ϰ→ 0. Note that J_1 is regular at all points of the domain Ĥ^- × (Ĥ^+ ∪ H^-) such that ξ_1 ≠ - k (on both sheets). A similar analysis can be made for the function J_2. It is a two-sheeted function in the domain (Ĥ^+ ∪ H^-) ×Ĥ^- with a branch set at ξ_2 = - k. A sketch of the two continuations provided by (<ref>) and (<ref>) is shown in <ref>. The sketch is made in the coordinates (Im[ξ_1] , Im[ξ_2]). The domain I is the domain of definition of W through (<ref>), the domain II corresponds to (<ref>), and the domain III to (<ref>). One can see that the domain Im[ξ_1] < 0, Im[ξ_2] < 0 is covered by both formulae (<ref>) and (<ref>). This is a useful feature. Consider a neighbourhood of some point ξ^⋆ = (ξ^⋆_1 , ξ^⋆_2) with ξ^⋆_2 ∈ P, Im[ξ^⋆_1] < 0. The formula (<ref>) provides a continuation into this neighbourhood, but the neighbourhood is cut into two isolated parts by the surface ξ_2 ∈ h^-. Thus, it is impossible to say whether ξ^⋆ is a regular point of W. Let us assume that ξ^⋆_1 ∉ h^-. Then one can use the formula (<ref>). This formula provides two sheets of continuation of J_2 (or W) into two samples (on two sheets) of the neighbourhood of ξ^⋆, and one can study singularities in this neighbourhood. One can also consider the points ξ^⋆ such that ξ_1^⋆∈ P and ξ_2^⋆∈ P. Such points are problematic both for (<ref>) and for (<ref>). The consideration of these points is as follows. The set P × P has real dimension 2, and it is not an analytical set, i.e. it cannot be defined as g (ξ) = 0 for some holomorphic function g. Thus, by a well-known theorem of multidimensional complex analysis (Hartogs' theorem, see <cit.> p. 226) this set, or any 2D neighbourhood on it, cannot be a singularity of W. § FAR-FIELD ASYMPTOTICS IN THE SIMPLE CASE Our aim is to find the far-field asymptotics for the real wavenumber quarter-plane problem in the simple case. This means that the incident angles will be restricted by (<ref>), implying that k_1>0 and k_2>0. To do this, we will apply the machinery of <cit.>, with the help of the analytical continuation formulae of Section <ref>. §.§ The scheme of the stationary phase method Our aim is to estimate the integrals (<ref>) and (<ref>) in the far field based on our knowledge of the singularities of W. Recall that r and the unit observation direction vector x̃ are defined as r=|x| and x̃ = (x̃_1 , x̃_2 , x̃_3) = x / r.
Our aim is to get an asymptotic estimation of the field as x̃ remains constant,and r →∞. We consider the limit ϰ→ 0 and only studythe non-vanishingterms, i.e. the components of the field that are not exponential decaying as r →∞.Upon introducing the functionsF (ξ) = K(ξ)W(ξ), and G(ξ ; x̃) = x̃_1 ξ_1 + x̃_2 ξ_2 - |x̃_3| / K(ξ_1 , ξ_2),the integral (<ref>) can be rewritten as u(x; ϰ) = - i/4 π^2∬_ℝ^2 F (ξ) exp{ - i r G(ξ ; x̃) } ξ,which is a Fourier-type integral of the form (<ref>). The integral (<ref>) is a simpler Fourier integral of the form (<ref>), and (<ref>) also reduces to such a Fourier integral when x_3=0, since rG(;) reduces to ξ_1 x_1+ξ_2 x_2 in that case. We now list some results about stationary phase method of <cit.>. ∙ The method is applicable to Fourier (<ref>) and Fourier-type (<ref>) integrals provided that the functions F and G(·;) are holomorphic in some neighbourhood of the real plane except for their singularities σ_j (polar or branch sets).These singularities are 2D analytic sets in ℂ^2. They should have the real property: their real traces σ'_j=ℝ^2 ∩σ_j should be curves in the real plane (rather than points).∙ The non-vanishing field components can be obtained by estimating the integrals (<ref>) or (<ref>) near real special points ξ^⋆ of two types: saddles on singularities (SoS) and crossings ofsingularities.The SoS are points ξ^⋆ at which the vector (x̃_1 , x̃_2)for (<ref>) or ∇ G ≡( G/ξ_1 , G/ξ_2)for (<ref>)is orthogonal to some singularity trace σ'_j. Note that with the definition (<ref>), ∇ G → (x̃_1 , x̃_2) as x̃_3 → 0. ∙ Each such special point may provide or not provide a non-vanishing term.This can be established by studying the mutual orientation of the vector ∇ G (for (<ref>)) or (x̃_1 , x̃_2) (for (<ref>)) at the special point, and the indentation of the integration surface with respect to the singularities (see magentaSection 4 of <cit.> for details).The special points that provide non-vanishing terms are referred to asactive.∙ For Fourier integrals (<ref>), some crossings of singularities do not provide non-vanishing terms for any . This is the case for additive crossings. A crossing ξ^⋆ between two real traces of some singularitiesσ_1' and σ_2' is said to be additive if the function F can be written near ξ^⋆ as F(ξ) = F_1(ξ) + F_2(ξ),where F_1 is singular only at σ_1, and F_2 is singular only at σ_2.The same is true for Fourier-type integrals (<ref>) provided that ^⋆ is not also a crossing for the singularities of G. ∙Tangential touch between two real traces do not provide a non-vanishing term unless they are simultaneously a SoS.The following statements are only relevant for Fourier-type integrals (<ref>):∙ For the specific choice of G given in (<ref>), only the special points located inside the circle σ_c'={∈ℝ^2,ξ_1^2+ξ_2^2=k^2} can providenon-vanishing terms of the field. This is because theterm |x_3| / K in the combination r G provides an exponential decay outside σ_c'.∙ Beside the SoS and crossings of singularities, there is one more typeof special points: 2D saddle points ^⋆ defined such that ∇ G(^⋆) = (0,0),which can also give non-vanishing far-field components.In what follows, we will consider the singularities of the integrands of(<ref>) and (<ref>), find the special points, and apply the corresponding asymptotic formulae obtained in <cit.>.§.§ Analysis of the singularities of W near the real planeThe formulae of analytical continuation (<ref>) and (<ref>) are all we need from <cit.>. 
Theyprovide the required information about the singularities of W near the (ξ_1 , ξ_2) real plane.We are specifically interested in the singularities whose intersection with this real plane has dimension 1 for ϰ = 0, and who belong to the physical sheet defined above. As we show in <cit.>, the study of these singularities enables one to estimate non-vanishing components of thewave field u, say, by using the representation (<ref>).We denote the singularities by the symbol σ with some indexes. Corresponding real traces ofsingularities will be denoted by σ' with indexes. Namely, σ'is σ∩ℝ^2 taken for ϰ = 0.We will now list these singularities for W and comment on some of their important features.∙ The set σ_p_1 = {ξ∈ℂ^2, ξ_1 = -k_1 }is a polar set of W. This follows from the non-integral term of (<ref>). The integral term of (<ref>) is not singular at σ_p_1.One can define the residue at σ_p_1 as a function res[W, σ_p_1](ξ_2) by theLeray method <cit.>. Since σ_p_1 is of the form ξ_1 =const, this can be found asa usual 1D residue in the ξ_1 plane for some fixed ξ_2. Namely, res[W, σ_p_1](ξ_2) =i γ (k_1, ξ_2) γ(k_1 , k_2)/ξ_2 + k_2· The same residue can be obtained from the non-integral term of (<ref>). Note that the residue has a pole at ξ_2 = - k_2 and a branch point atξ_2 = - √(k^2 - k_1^2). The real trace of σ_p_1 isσ'_p_1 ={ξ∈ℝ^2, ξ_1 = - lim_ϰ→ 0 k_1 }.∙ Similarly, the set σ_p_2 = {ξ∈ℂ^2, ξ_2 = - k_2 }is also a polar set of W with residue res[W, σ_p_2](ξ_1) =i γ (k_2, ξ_1) γ(k_2 , k_1)/ξ_1 + k_1· This residue has a pole at ξ_1 = - k_1 and a branch point atξ_1 = - √(k^2 - k_2^2). The real trace of σ_p_2 isσ'_p_2 ={ξ∈ℝ^2, ξ_2 = - lim_ϰ→ 0k_2 }.∙ The set σ_b_1 = {ξ∈ℂ^2, ξ_1 = - k}is a branch set of W of order 2. It is a branch set for the factors γ(ξ_1 , ξ_2), γ(ξ_1 , k_2), γ(ξ_1 , ξ_2) in (<ref>). Near σ_b_1, the behaviour of W is given byW(ξ) = C_1 (ξ_2) + C_2(ξ_2) √(ξ_1 + k) + O(ξ_1 + k).The real trace of σ_b_1 isσ'_b_1 ={ξ∈ℝ^2, ξ_1 = - k_0. }.Note that the complex line ξ_1 = k is not a branch set of W. A subset of this line belongs tozone I in <ref>, and W should be regular there.The rest of this set belongs to zone III, thus it is described by formula(<ref>). The latter has no singularity at ξ_1 = k. ∙ Similarly, the complex lineσ_b_2 = {ξ∈ℂ^2, ξ_2 = - k}is a branch set of order 2 for W. Its real trace isσ'_b_2 ={ξ∈ℝ^2, ξ_2 = - k_0 }. The function W is regular on the complex line ξ_2 = k. ∙ The analysis of the set σ_c = {ξ∈ℂ^2, ξ_1^2 + ξ_2^2 = k^2}is more complicated. Its real trace σ'_c={∈ℝ^2, ξ_1^2+ξ_2^2=k_0^2} is the circle of radius k_0. As we will now see, W is only singular on a portion of this circle. To start with, it follows directly from the definition (<ref>) and the properties of quarter-range Fourier transforms, that the function W is regular on σ_c ∩ (Ĥ^+ ×Ĥ^+). Let us now consider the domain 𝔸 = H^- × (Ĥ^+ ∪Ĥ^-) and rewrite (<ref>) asW(ξ) =γ(ξ_1 , ξ_2) (i γ(ξ_1 , k_2) γ(k_2 , k_1)/(ξ_1+ k_1)(ξ_2 + k_2) γ(k_2 , - ξ_1) +J_1(ξ_1 , ξ_2)/4 π^2).The terms within parentheses do not have singularities in 𝔸∩σ_c. Thus, the behaviour of W on σ_c is determined by that of the factor γ(ξ_1 , ξ_2). Being made of usual functions, γ(ξ_1 , ξ_2) is easy to analyse, but its Riemann manifold has a non-trivial structure. Indeed, this function is branching at ξ_1 = ± k, and on σ_c, but not everywhere: it is possible for√(k^2 - ξ_1^2) + ξ_2 not to be zero on σ_c. 
Focusing on the real trace, one can see that, on the physical sheet, the fragment of σ'_c having Re[ξ_2] < 0bears branching, while the points of σ'_c with Re[ξ_2] > 0 are regular points of W. A similar consideration of (<ref>) leads to the conclusion that the part of σ'_c bearing branching of W is the one with Re[ξ_1] < 0.We can therefore conclude that the part of σ_c' where W is singular is given by σ_cc' ={ξ∈ℝ^2, ξ∈σ_c' , ξ_1 < 0 , ξ_2 < 0 }. Moreover, since the branching of W is resulting from either γ(ξ_1 , ξ_2) or γ(ξ_2 , ξ_1), we can conclude that, nearσ'_cc, the function W behaves like ϕ(ξ)/K(),for some function ϕ(ξ) regular on σ'_cc. Thus, if ξ bypasses σ'_cc, W(ξ)changes to -W(ξ).A similar reasoning leads to the fact that the product F()=K(ξ) W(ξ) arising in(<ref>) is singular on σ'_c ∖σ'_cc and regular on σ'_cc.According to the formulae of analytic continuation, this list contains allthe real traces of singularities on the physical sheet. They are sketched in Figure <ref>. §.§ Indentation of the integration surfaceAs ϰ→ 0, the singularities listed above hit the real plane at their real traces. As we demonstrated above, and as depicted in <ref>, the real traces for F=K W and W are made up of 5 components given by σ'_p_1,σ'_p_2,σ'_b_1, σ'_b_2, and σ'_c.Thus, one cannot use ℝ^2 as the integration surface in (<ref>) or (<ref>). However for ϰ>0, using the 2D Cauchy theorem <cit.>, it is possible to slightly deform the integration surface in ℂ^2 without changing the value of the integral, as long as no singularities are hit in the process. This is done in a way that when taking the limit ϰ→0, the singularities do not hit the integration surface anymore. We call this deformation an indentationof the integration surface around the singularities. The resulting surface of integration (denoted ) coincides with ℝ^2 everywhere except in some neighbourhood of the real traces of the singularities.For each singularity, there are only two possible types of indentation. This issue is not simple,and has been discussed in details in <cit.>. The choice of indentation is given by the bridge and arrownotations. The correct choice for the problem at hand, dictated by he limiting process ϰ→ 0,is shown in <ref>. Applying the technique described in magentaSection 3.4 of <cit.>, it is reasonably straightforward to find the bridge and arrow configuration of σ_b_1, 2' and σ_p_1, 2', while the bridge configuration of σ_c' is determined by the tangential touch compatibility with σ_b_1, 2'. §.§ Special points for the Fourier integrals (<ref>)|_x_3= 0 and (<ref>)As discussed in section <ref>, to study the far-field behaviour of these Fourier integrals, it is enough to look at special points. Those are the SoS and the intersections of real traces as depicted in <ref>.We will first focus on the special points that will not contribute to the far-field asymptotics. Let us start with the intersections. The points corresponding to σ'_b_1∩σ'_c and σ'_b_2∩σ'_c intersect tangentially, and, hence, by magentaTheorem 4.9 of <cit.>, they do not contribute to the asymptotic expansion of u or ∂_x_3u. The points σ'_p_1∩σ'_b_2andσ'_p_2∩σ'_b_1 are transverse crossings with the additive crossing property.This can be proven as follows. Consider the polar line σ_p_1.The residue on this line is given by (<ref>). One can see that this residue has nobranching at the crossing point σ'_p_1∩σ'_b_2. 
This means that the pole can be additively separated from the branching near the crossing point. Thus, the points σ'_p_1∩σ'_b_2 and σ'_p_2∩σ'_b_1 do not contribute to the asymptotic expansion of u(x_1, x_2, 0^+) and ∂_x_3 u(x_1, x_2, 0^+). In Appendix <ref> we also show that the point σ'_b_1∩σ'_b_2 is an additive crossing and, therefore, does not contribute to the asymptotic expansions. Let us now consider the potential SoS. In principle, the vector (x̃_1, x̃_2) can be orthogonal to any of the real traces. However, we note that, except for σ'_c, all real traces are straight lines. If (x̃_1, x̃_2) is orthogonal to one of these straight traces at one point, it is orthogonal to it everywhere. Such a pathological case was excluded from the method developed in <cit.>, so we have to exclude such directions from our consideration. Physically, such directions are likely to belong to the penumbral zones, and mathematically, they would necessitate a non-local approach. Therefore, the relevant special points are the other crossings and the SoS on σ'_c. For a given direction (x̃_1, x̃_2), there are two possible SoS where (x̃_1, x̃_2) is orthogonal to σ'_c. Obviously, such a point first needs to be on a singular part of σ'_c. Moreover, for it to count (to be active in the terms of <cit.>), given the bridge and arrow configuration, the vector (x̃_1, x̃_2) attached to this point would need to point towards the origin (see <cit.> for more detail). An active SoS can hence only be of the form ξ_SW = -k(x̃_1, x̃_2). All the relevant special points are therefore ξ_RW = (-k_1, -k_2), ξ_PD1^(F) = (-k_1, k_2'), ξ_PD2^(F) = (k_1', -k_2), ξ_SW = -k(x̃_1, x̃_2), ξ_PD1^(W) = (-k_1, -k_2'), ξ_PD2^(W) = (-k_1', -k_2), where k_2' ≡√(k^2 - k_1^2) and k_1' ≡√(k^2 - k_2^2) are both strictly positive. The position of the SoS ξ_SW depends solely on the observation direction (x̃_1, x̃_2), while the positions of all the other special points (crossings) depend solely on the incident angles. Anticipating our findings, the subscripts RW, PD 1, PD 2, SW stand for reflected wave, primary diffracted waves 1 and 2, and spherical wave. §.§ Special points for the Fourier-type integral (<ref>)|_x_3 > 0 As discussed in section <ref> and in more detail in Appendix B of <cit.>, the situation becomes slightly different for x_3 > 0. In particular, potential special points lying on σ'_c stop providing non-vanishing contributions or become inactive. Moreover, since we have a Fourier-type integral, we also need to consider potential 2D saddle points. For the specific case under investigation, the following is observed. The point ξ_SW that was a SoS on σ'_c for x_3 = 0 becomes a 2D saddle and is given by the same formula. However, since x̃_3 > 0, this point now lies strictly inside the circle σ'_c. The transverse crossings ξ_PD1,2^(F) migrate to SoS on σ'_p_1,2, again strictly inside σ'_c. Note that now a SoS is defined as a point where ∇ G is perpendicular to a real trace. Since this vector also depends on x̃, it is now possible to have an isolated SoS on a straight real trace. The crossing ξ_RW remains contributing, without a change in type or location. The special points to consider and their type (displayed in <ref>) are hence given by (Crossing): ξ_RW = (-k_1, -k_2), (2D saddle): ξ_SW = -k(x̃_1, x̃_2), (SoS): ξ_PD1 = (-k_1, -x̃_2 k_2'/√(x̃_2^2 + x̃_3^2)), (SoS): ξ_PD2 = (-x̃_1 k_1'/√(x̃_1^2 + x̃_3^2), -k_2). In the next section, to each of these special points, we will associate a far-field wave component by using the results in Section 5 of <cit.>.
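To make the geometry of these special points concrete, the following NumPy sketch is an illustration of our own (the numerical values of k, k_1, k_2 and of the observation direction are arbitrary choices satisfying the simple-case restrictions k_1, k_2 > 0 and x̃_3 > 0). It computes the special points for x_3 > 0 and checks numerically that ξ_SW is a stationary point of G, that G(ξ_SW) = -k, and that k^2 - ξ_1^2 - ξ_2^2 = (k x̃_3)^2 there, i.e. K(ξ_SW) = (k x̃_3)^{-1}, a value used in the spherical-wave estimate further below.

import numpy as np

k = 1.0                        # wavenumber
k1, k2 = 0.5, 0.3              # simple case: k1 > 0, k2 > 0 (toy values)
k1p = np.sqrt(k**2 - k2**2)    # k_1'
k2p = np.sqrt(k**2 - k1**2)    # k_2'
xt = np.array([0.4, 0.5, np.sqrt(1 - 0.4**2 - 0.5**2)])   # unit observation direction, xt3 > 0

def G(xi1, xi2):
    # G(xi; xt) = xt1*xi1 + xt2*xi2 - |xt3| * sqrt(k^2 - xi1^2 - xi2^2)
    return xt[0]*xi1 + xt[1]*xi2 - abs(xt[2])*np.sqrt(k**2 - xi1**2 - xi2**2)

# special points for x_3 > 0
xi_RW  = (-k1, -k2)                                   # transverse crossing
xi_SW  = (-k*xt[0], -k*xt[1])                         # 2D saddle
xi_PD1 = (-k1, -xt[1]*k2p/np.hypot(xt[1], xt[2]))     # SoS on sigma'_p1
xi_PD2 = (-xt[0]*k1p/np.hypot(xt[0], xt[2]), -k2)     # SoS on sigma'_p2

h = 1e-6                       # central finite differences for grad G at the saddle
gx = (G(xi_SW[0]+h, xi_SW[1]) - G(xi_SW[0]-h, xi_SW[1])) / (2*h)
gy = (G(xi_SW[0], xi_SW[1]+h) - G(xi_SW[0], xi_SW[1]-h)) / (2*h)
assert abs(gx) < 1e-6 and abs(gy) < 1e-6              # grad G vanishes at xi_SW
assert np.isclose(G(*xi_SW), -k)                      # G(xi_SW) = -k
assert np.isclose(np.sqrt(k**2 - xi_SW[0]**2 - xi_SW[1]**2), k*xt[2])   # K(xi_SW) = 1/(k*xt3)
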
For brevity, we will only do it for the field u, that is for the integral (<ref>). However, we have given all the necessary results for the method to also be applied to get the far-field asymptotics for ∂_x_3u(x_1,x_2,0^+) using the integral (<ref>).§.§ Field estimation near the special points for (<ref>)|_x_3 > 0§.§.§ The reflected waveConsider the vicinity ofξ^⋆ =ξ_RW, which is a transverse crossing point for the real trace components σ'_1 = σ'_p_1 and σ_2' = σ'_p_2. We can approximate Fnear this point using formula (<ref>)to obtain the local approximationsF (ξ) ξ→ξ_RW≈i/ (ξ_1 + k_1) (ξ_2 + k_2),Then, let us suppose that ∇ G≠(0,0) near ^⋆, i.e. there is no stationary point nearby, therefore,G can be approximated as follows: G()ξ→ξ_RW≈ G(ξ^⋆) +(ξ-ξ^⋆)·∇ G(ξ^⋆). Thus, the associated wave component is given by the following integral: u_RW (x) = A∬_exp{ -ir(-^⋆)·∇ G(^⋆)}/ (ξ_1 + k_1) (ξ_2 + k_2)dξ,where A = 14π^2e^-irG(ξ^⋆),G(^⋆)=-k_1 x̃_1-k_2 x̃_2+k_3 x̃_3,∇ G(^⋆)=(x̃_1+x̃_3 k_1k_3,x̃_2+x̃_3 k_2k_3).The surface of integrationis chosen to coincide with ℝ^2 everywhere except for the two singular lines σ'_p_1,2, where it is slightly indented according to bridge notation shown in <ref>. Hence, we can use the results of magentaSection 5.2 and magentaAppendix B of <cit.> to get:u_RW (x)= -exp{i(k_1x_1+k_2x_2-k_3x_3)}ℋ(x_1+x_3k_1/k_3)ℋ(x_2+x_3k_2/k_3),whereℋ is the Heaviside function. Note that the Heaviside functions in (<ref>)define the region of activity of the crossing ξ_RW. One can also notice that (<ref>) is just a composition of two one-dimensional polar integrals so there are other, simpler, means to evaluate (<ref>). This is exactly the result for the reflected wave that one would expect from Geometrical Optics considerations.§.§.§ Primary diffracted waves Consider the vicinity of ξ^⋆ =ξ_PD 1, that corresponds to a SOS onσ_1' = σ_p_1'.Using (<ref>) we can approximate F near ξ^⋆ as follows F (ξ) ξ→ξ_PD1≈i√(k_2'+k_2)(x̃_2^2+x̃_3^2)^3/4/√(k_2')(k_2 √(x̃_2^2+x̃_3^2)-k_2' x̃_2)√(√(x̃_2^2+x̃_3^2)+x̃_2)×1/ξ_1+k_1.Since ^⋆ is a SOS, i.e. ∇ G ⊥σ_1' at ^⋆, following magentaSection 5.1 and magentaAppendix B of <cit.>, and noting that G/ξ_2(ξ^⋆)=0,we only need to consider the following approximation for G:G()ξ→ξ_PD1≈ G(ξ^⋆) +(ξ_1 - ξ^⋆_1) G/ξ_1(ξ^⋆) + (ξ_2 - ξ^⋆_2)^2/2^2 G/ξ_2^2(ξ^⋆).Terms in 𝒪((ξ_1 - ξ^⋆_1)^2), 𝒪((ξ_1 - ξ^⋆_1)(ξ_2 - ξ^⋆_2)) and higher orders can be neglected. Using thatG(^⋆)=-k_1 x̃_1-k_2'√(x̃_2^2+x̃_3^2), ∂ G/∂ξ_1(^⋆)=x̃_1-k_1/k_2'√(x̃_2^2+x̃_3^2), ∂^2 G/∂ξ_1^2(^⋆)=(x̃_2^2+x̃_3^2)^3/2/k_2'x̃_3^2,we obtain the asymptotic component u_PD 1 (x) associated to the special point _PD1:u_PD 1 (x)= e^-i3π/4x_3√(k'_2 +k_2)exp{ik_1x_1 + ik_2'√(x_2^2 + x_3^2)}√(2π)(k_2√(x_2^2+x_3^2)-x_2k_2')√(√(x_2^2+x_3^2)+x_2)ℋ(x_1-k_1/k_2'√(x_2^2+x_3^2)).A very similar approach with _PD2 leads to the asymptotic component u_PD 2():u_PD 2 (x)= e^-i3π/4x_3√(k'_1 +k_1)exp{ik_2x_2 + ik_1'√(x_1^2 + x_3^2)}√(2π)(k_1√(x_1^2+x_3^2)-x_1k_1')√(√(x_1^2+x_3^2)+x_1)ℋ(x_2-k_2/k_1'√(x_1^2+x_3^2)),which can also be recovered by just swapping the indices 1 and 2 in (<ref>). §.§.§ The spherical waveConsider the vicinity of the 2D saddle pointξ^⋆ =ξ_SW. Apart from some pathological cases (penumbral zones),W(ξ) is known to be regular at ξ^⋆. 
Therefore, using the fact that G(^⋆)=-k, K(^⋆)=(kx̃_3)^-1, (H_G(^⋆))=(kx̃_3)^-2, where H_G is the Hessian matrix of G, the classical 2D saddle point method leads to the asymptotic component u_SW:u_SW(x) = -kW(ξ^⋆)/2πe^ikr/kr.Note that, unlike theasymptotic formulae derived previously, this one is not completely in closed-form. Indeed, it depends on the quantity W (ξ^⋆), which is not known a priori. Means of finding its value were discussed at length in <cit.>.§.§ On the consistency with the asymptotics of (<ref>)|_x_3 = 0As we mentioned above, the Fourier integral (<ref>)|_x_3 = 0 and the Fourier-type integral (<ref>)|_x_3 > 0 need to be treated separately because the special points of (<ref>) change their type as x_3→ 0. However, physically, such limit should not be pathological and we expect the resulting asymptotics to be consistent. As x_3=0, we can apply the results of <cit.>ad-hoc without the need of magentaAppendix B. §.§.§ The reflected waveNeither the type (transverse crossing) nor the position of the special point ξ^⋆ =ξ_RW depend on x_3, so we do not expect any changes. Indeed, using magentaSection 5.2 of <cit.>, we find that its contribution is given byu_RW(x_1,x_2,0) = -exp{i(k_1x_1+k_2x_2)}ℋ(x_1)ℋ(x_2), which is consistent with (<ref>).§.§.§ Primary diffracted wavesFor x_3=0, the special point ξ^⋆ =ξ^(F)_PD1 is a transverse crossing, and we can hence apply magentaSection 5.2 of <cit.>. Using (<ref>), F can be shown to behave as:F (ξ) ξ→ξ^(F)_PD1≈i√(2k'_2)/√(k'_2+k_2)×1/(ξ_1+k_1)√(k^2-ξ_1^2-ξ_2^2),leading to the far-field wave componentu_PD1(x_1,x_2,0) = e^-i3π/4exp{i(k_1x_1-k'_2x_2)}/√(k'_2+k_2)√(-π x_2)ℋ(-x_2)ℋ(k'_2x_1+k_1x_2).Considering the limit x̃_3→0 in Section <ref>, we observe the following. If x̃_2<0, as illustrated in figure <ref> (right) then _PD1→_PD1^(F) and the contribution (<ref>) → (<ref>). If however x̃_2>0, then _PD1→_PD1^(W), which is not a special point of (<ref>)|_x_3 = 0, and the contribution tends to zero. This behaviour illustrates the perfect consistency between this section and the results of Section <ref> as x_3→0. Contribution from the transverse crossing ξ^(F)_PD2 and the consistency with Section <ref> can be obtained in a similar way, or by swapping the indices 1 and 2 in (<ref>).§.§.§ The spherical waveWhen x_3=0, the special point ξ^⋆ =ξ_SW is a SOS. Taking into account that W is regular at ξ^⋆, and using the definition (<ref>, left), F is approximated as follows:F (ξ) ξ→ξ_SW≈W(ξ^⋆)1/√(k^2-ξ_1^2-ξ_2^2).Then, using the results of magentaSection 5.1 of <cit.> the resulting far-field component is:u_SW(x_1,x_2,0) = -kW(ξ^⋆)/2πe^ik√(x_1^2+x_2^2)/k√(x_1^2+x_2^2)ℋ(ℝ^2\QP),where ℋ(ℝ^2\QP) is equal to 1 if (x_1,x_2)∈ℝ^2\QP and 0 otherwise. Remembering that F is regular on σ_cc', one can see that the latter is indeed consistent with (<ref>). §.§ General asymptotic expansion of uWe have now considered all the contributing points and we can therefore reconstruct the far-field asymptotic approximation of u as follows:u (x)| x | →∞≈u_RW (x) + u_PD 1(x) + u_PD 2 (x) + u_SW (x),where the components are given by equations (<ref>),(<ref>), (<ref>), (<ref>). These expansions agree with former results given for example in <cit.>, <cit.>, <cit.>, <cit.>, obtained via the GTD, Sommerfeld integrals and the random walk method. A schematic illustration of these wave components is given in <ref>. § FAR-FIELD ASYMPTOTICS IN THE COMPLICATED CASE We now consider the complicated case for the real wavenumber quarter-plane problem. 
This means that the incident angles are restricted by (<ref>), implying that k_1<0 and k_2<0. As in the previous section, our aim is to find the resulting far-field asymptotics.The main reason why this case is labelled complicated is because, on top of the wave fields discussed in the previous section, it is also known to give rise to two secondary diffracted waves. These are notoriously difficult to analyse. Moreover, all the known published formulae for these secondary waves <cit.>, though completely valid for x_3>0, have been obtained in such a way that they blow up on the plane x_3 = 0.In what follows we will show that our method leads to these same formulae for x_3>0, but also provides a proper secondary diffracted wave fields on the x_3 = 0 plane, hence providing formulae that, to our knowledge, have never been obtained previously.§.§ Special points for the Fourier integrals (<ref>)|_x_3 = 0 and (<ref>) Assume temporarily that the wavenumber k has a small positive imaginary part ϰ, and treat Re[k_1] and Re[k_2] as parameters that can move continuously from a state were they are both positive, to a state where they are both negative. The free terms of the analytical continuation formulae exhibit new singular sets when Re[k_1, 2] change sign. Indeed, the setσ_sb_1 = {ξ∈ℂ^2, ξ_1 = - k_1' },where k_1' is defined as in (<ref>), is a branch line of W of order 2; the subscript _sb stands for secondary branch. This is due to the factor γ(ξ_1,k_2) in the non-integral term of (<ref>).The function W behaves near σ_sb_1 as W(ξ) = Ĉ_1(ξ_2) + Ĉ_2(ξ_2)√(ξ_1 + k_1') + 𝒪(ξ_1 + k_1').The real trace of σ_sb_1 is σ_sb_1' = {ξ∈ℝ^2, ξ_1 = - lim_ϰ→0k_1'}.Note that the factor γ(k_2,-ξ_1) in (<ref>) has a branch point at k_1', but it does belongs to the upper half-plane, where nothing is known about the analyticity propertiesof the integral term J_1. Similarly, due to the term γ(ξ_2,k_1) in (<ref>), the line σ_sb_2 = {ξ∈ℂ^2, ξ_2 = - k_2'},where k_2' is defined in (<ref>), is a branch line of W of order 2. The real trace of σ_sb_2 is σ_sb_2' = {ξ∈ℝ^2, ξ_2 = - lim_ϰ→0k_2'}.One has to be very careful to make sure that the bridge configuration of the singularities is preserved in this continuous process. This is why, in a somewhat counter intuitive manner, the bridge configuration of the singularity σ_p_1, 2' needs to remain the same, even though, for ϰ > 0, the singularity changes half-plane. This is to make sure that the singularities do not intersect the surface of integration in the process of k_1 and k_2 changing sign. A schematic view of the real traces of the singularity sets of F and W, together with the associated bridge configurations, is given in <ref>. As for the simple case, and as indicated on this same figure, a few potential special points can be discarded due to either additive or tangential crossing. The additivity of the crossing σ_sb_1∩σ_sb_2 follows directly from the structure of the non-integral terms of (<ref>) or (<ref>), the additivity of crossings σ_b_2∩σ_sb_1 and σ_b_1∩σ_sb_2 follows from the non-integral terms of (<ref>) and (<ref>), correspondingly. The special points that need to be considered are: ξ_RW= (- k_1, - k_2), ξ_PD 1^(F) = (- k_1, k_2'),ξ_PD 2^(F) = (k_1', - k_2), ξ_SW= -k(x̃_1, x̃_2), ξ_PD 1^(W) = (- k_1, - k_2'),ξ_PD 2^(W) = (- k_1', - k_2), ξ_SD 1^(F)= (- k_1', - k_2), ξ_SD 2^(F) = (- k_1, - k_2'), ξ_SD 1^(W) = (- k_1', k_2), ξ_SD 2^(W) = (k_1, - k_2'). Apart from the SoS _SW, they are all transverse non-additive crossings. 
The points ξ_SD 1^(F) and ξ_SD 2^(F) are remarkable as they correspond to triple transverse crossings. §.§ Special points for the Fourier-type integral (<ref>)|_x_3 > 0As for the simple case, the nature of the special points change when x_3>0. In particular, all the contributing points are now strictly within the circle σ_c' and 2D saddles can appear. In this case, the following special points are found: ξ_RW= (- k_1, - k_2), ξ_PD 1 = (- k_1, -x̃_2 k'_2/√(x̃_2^2+x̃_3^2)), ξ_PD 2 = (-x̃_1 k'_1/√(x̃_1^2+x̃_3^2), - k_2), ξ_SW= -k(x̃_1, x̃_2),ξ_SD 1 = (- k_1', k_2x̃_2/√(x̃_2^2+x̃_3^2)), ξ_SD 2 = (k_1x̃_1/√(x̃_1^2+x̃_3^2), - k_2').Apart from ξ_RW that remains a transverse crossing and ξ_SW that is now a 2D saddle point, all the other special points listed above are now SoS. They are illustrated in <ref> (left).Moreover, as illustrated in <ref> (right), we note thatξ_PD 1x̃_3 →0⟶ { [ ξ_PD 1^(F) if x̃_2 < 0; ξ_SD 2^(F) if x̃_2 > 0 ] ., ξ_PD 2 x̃_3 →0⟶ { [ ξ_PD 2^(F) if x̃_1 < 0; ξ_SD 1^(F) if x̃_1 > 0 ] .,ξ_SD 1x̃_3 →0⟶ { [ ξ_SD 1^(F) if x̃_2 < 0; ξ_SD 1^(W) if x̃_2 > 0 ] .,ξ_SD 2 x̃_3 →0⟶ { [ ξ_SD 2^(F) if x̃_1 < 0; ξ_SD 2^(W) if x̃_1 > 0 ] .. §.§ Reflected, primary diffracted and spherical wavesThe points _RW, _SW and _PD1,2 are given by the same formulae as in the simple case (see (<ref>)) and are of the same type. Therefore the whole procedure of obtaining the asymptotic components for the corresponding waves remains unchanged and can be carried out ad-hoc in the exact same manner as in the simple case of Section <ref>, leading to the exact same asymptotic formulae (<ref>), (<ref>), (<ref>) and (<ref>). The same conclusions also hold for these waves when it comes to the consistency between the x_3=0 and the x_3>0. The only difference is that, as x̃_2,1>0, _PD1,2→_SD2,1^(F) as x̃_3→0, which this time is a special point (this was not the case in the simple case). However, the asymptotic contribution of _PD1,2 still tends to zero in that case. More will be said below about the reason for this seeming contradiction. §.§ The secondary diffracted waves for (<ref>)|_x_3 > 0Consider the vicinity ofξ^⋆ =ξ_SD 1, a SOS on σ_1' = σ_sb_1'. Using (<ref>) we can approximate F near ξ^⋆ as follows:F(ξ) ≈i√(k_1'+k_1)(x̃_2^2+x̃_3^2)^3/4/√(2)(k_1'-k_1)k_2^2(x̃_2+√(x̃_2^2+x̃_3^2))^3/2×√(ξ_1 + k_1').Then, using the expansion (<ref>) with G(^⋆)=-k_1'x̃_1+k_2√(x̃_2^2+x̃_3^2),∂G∂ξ_1(^⋆)=x̃_1+k_1'k_2√(x̃_2^2+x̃_3^2),∂^2G∂ξ_2^2(^⋆)=- (x̃_2^2+x̃_3^2)^3/2k_2x̃_3^2 , together with the results of magentaSection 5.1 and magentaAppendix B of <cit.>, we obtain:u_SD 1 (x)=-x_3 √(k'_1+k_1)exp{i(k'_1x_1-k_2√(x_2^2+x_3^2))}4π(k'_1-k_1)(x_2 + √(x_2^2+x_3^2))^3/2(-k_2x_1-k'_1√(x_2^2+x_3^2))^3/2ℋ(-k_2x_1 - k'_1√(x_2^2+x_3^2)).This formula agrees with previously published work <cit.>. It is interesting to see what happens to this expression as x_3→0. Note first that due to the Heaviside function in (<ref>), this wave component is zero if x_1<0. We hence only need to consider the case x_1>0.If we also have x_2>0, the expression (<ref>) tends to zero as x_3→0. This is hardly surprising since, in that case, according to (<ref>), _SD1→_SD1^(W), which is not a special point of (<ref>)|_x_3=0. It is also a confirmation that u_SD1 satisfies the Dirichlet boundary conditions on QP.If instead we have x_2<0, the expression blows up in the limit x_3→0, which is more surprising and somewhat unphysical. This is something that was not realised in <cit.>. 
Within the framework of the present article, the mathematical reason for this behaviour is due to the fact that as x_1>0, x_2<0 and x_3→0, two special SOS points (ξ_SD 1 and ξ_PD 2) merge into the triple crossing ξ_SD 1^(F). This arbitrary proximity between these two SoS means that their respective contribution cannot be computed independently as we did and one should treat them together to obtain an accurate picture. We can also give a physical interpretation to this phenomena. To obtain the secondary diffracted wave emanating from the x_1 edge, one can approximate, locally, the primary diffracted wave coming from the x_2 edge as a plane wave hitting the x_1 edge with a grazing incidence along the plate. As can be understood by considering a simpler 2D (half-plane) problem, the region x_3=0 outside the plate corresponds to a penumbral zone for this problem. In the next section we will address this issue by deriving a formula valid on the plane x_3=0. A similar reasoning about ξ^⋆ =ξ_SD 1 , leads to u_SD 2, with similar remarks as x_3→0: u_SD 2 (x) =-x_3√(k'_2+k_2)exp{i(k'_2x_2-k_1√(x_1^2+x_3^2))}4π(k'_2-k_2)(x_1 + √(x_1^2+x_3^2))^3/2(-k_1x_2-k'_2√(x_1^2+x_3^2))^3/2ℋ(-k_1x_2 - k'_2√(x_1^2+x_3^2)). §.§ The secondary diffracted waves for (<ref>)|_x_3 = 0Consider the vicinity of ξ^⋆ =ξ_SD 1^(F) = (- k_1', - k_2)=(ξ_1^⋆,ξ_2^⋆), that corresponds to the transverse intersection of the three singular traces σ_1' = σ_sb_1', σ_2' = σ_c' and σ_3'=σ'_p_2.The case of a triple transverse crossing was not considered in <cit.> and hence the asymptoticscannot be obtained by a direct application of this paper. We will however adapt our method to obtain the asymptotics in this specific case. Using(<ref>), the leading singular behaviour of F near ξ^⋆ can be found to be:F (ξ)ξ→ξ_SD 1^(F)≈i√(k_1' + k_1)/ (- k_1' + k_1)×(ξ_1 + k_1')^1 /2/(ξ_2 + k_2) (k^2 - ξ_1^2 - ξ_2^2)^1 / 2·Upon introducing the local change of variables Ψ : (ξ_1, ξ_2) → (ρ, ψ) defined byξ_1-ξ_1^⋆ = ρcos (ψ)and ξ_2-ξ_2^⋆ = ρsin (ψ),there is an equivalence between →^⋆ and ρ→0, and ξ↔ρρ∧ψ. Because k_2^2+(k_1')^2=k^2, we can introduce the angle ϑ_1 defined by k_2=-kcos(ϑ_1) and k_1'=ksin(ϑ_1).We can hence write2 k^2 - ξ_1^2 - ξ_2^2ξ→ξ^⋆= 2 k_1' ρcos (ψ) + 2 k_2 ρsin (ψ) + 𝒪 (ρ^2)=2ksin(ϑ_1-ψ)+𝒪(ρ^2).Moreover, using the polar coordinates (x_1,x_2)=(r cos (φ), r sin (φ)), we find that x_1 ξ_1+x_2 ξ_2 = x_1 ξ_1^⋆+x_2 ξ_2^⋆+ r ρcos(ψ-φ),and that, therefore, the leading asymptotic contribution of _SD1^(F) is given by u_SD 1(x_1,x_2,0^+)=𝒜e^- i(x_1 ξ_1^⋆+x_2 ξ_2^⋆)∬_Ψ (Γ)(cos (ψ))^1 / 2 e^- i ρ r cos (ψ - φ)/(sin(ϑ_1-ψ))^1 / 2sin (ψ)ρ∧ψ,where we introduced the notation 𝒜 = √(k'_1 + k_1)/4π^2√(2k)(-k_1'+k_1).Our aim is hence to evaluate this double integral. It can be shown that the surface of integration Ψ (Γ) can be can be continuously deformed (without hitting any singularities) to the surface { (ρ, ψ) ∈ℂ^2, ψ∈Υρ∈λ_ψ}. The contour Υ, illustrated in <ref> (left), is close to the real segment [0, 2 π]. The way that Υ bypasses the singular points in the ψ plane provides the required compatibility with the bridge configuration around the triple crossing.The λ_ψ form a continuous family of contours in ℂ that depend on Re[ψ] and vary between 0 and ∞, as shown in <ref> (right). It is chosen to ensure exponential attenuation of the integrand as ρ tends to ∞.Note that when Re[ψ] is equal to φ+π/2 or φ+3π/2, the contour λ_ψ has to be purely real. However, in that case, the exponential decay is provided by the imaginary part of ψ. 
This is why, even though the points φ+π/2 and φ+3π/2 are not singular, they are chosen to be bypassed by Υ as in <ref> (left). The integral can hence be rewritten in a sequential form as follows:u_SD 1(x_1,x_2,+0) ≈ 𝒜e^- i(x_1 ξ_1^⋆+x_2 ξ_2^⋆)∫_ψ∈Υ(cos (ψ))^1 / 2/(sin(ϑ_1-ψ))^1 / 2sin (ψ)( ∫_ρ∈λ_ψ e^- i ρ r cos (ψ - φ)ρ) ψ .Given the properties of λ_ψ, the inner integral can be evaluated directly to∫_ρ∈λ_ψ e^- i ρ r cos (ψ - φ)ρ=- i/r cos (ψ - φ),and u_SD 1 is now given by a single integralu_SD 1(x_1,x_2,0^+) ≈ - i𝒜e^- i(x_1 ξ_1^⋆+x_2 ξ_2^⋆)/r∫_Υ(cos (ψ))^1 / 2/(sin(ϑ_1-ψ))^1 / 2sin (ψ) cos (ψ - φ)ψ .Fortunately, this integral can be evaluated exactly (see Appendix <ref> for details): ∫_Υ(cos (ψ))^1 / 2/(sin (ϑ_1 - ψ))^1 / 2sin (ψ) cos (ψ - φ)ψ=4π√(-sin(φ))/cos(φ)√(cos(ϑ_1-φ))ℋ(-sin(φ))ℋ(cos(ϑ_1-φ)).It leads to the following expression for the secondary diffracted field at x_3 = 0:u_SD 1 (x_1,x_2,0^+) =-i√(k'_1+k_1)√(-x_2)e^i(k'_1x_1 + k_2x_2)/√(2)π(k_1-k_1')x_1√(k'_1x_2-k_2x_1)ℋ (-x_2) ℋ(k'_1x_2-k_2x_1).The asymptotic contribution resulting from the other triple crossing ξ_SD 2^(F) can be obtained similarly, or just by swapping the subscripts 1 and 2, and readsu_SD 2 (x_1,x_2,0^+) =-i√(k'_2+k_2)√(-x_1)e^i(k_1x_1+k'_2x_2 )/√(2)π(k_2-k_2')x_2√(k'_2x_1-k_1x_2)ℋ (-x_1) ℋ(k'_2x_1-k_1x_2). While doing this work, with the aim of double checking our result, we found a way of recovering this formula by using the formalism of <cit.>. §.§ General asymptotic expansion of u We have now considered all the contributing points and we can therefore reconstruct the far-field asymptotic approximation of u as follows:u (x)| x | →∞≈u_RW (x) + u_PD 1(x) + u_PD 2 (x) + u_SW (x) + u_SD 1 (x) + u_SD 2 (x).It agrees, in principle, with former results given for the field in <cit.>. This expansion is valid for x_3>0 and for x_3=0. When x_3>0, the components are given by equations (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),(<ref>). The first four components are continuous as x_3→0, however, more work is required for the last two secondary diffracted waves, for which the formulae(<ref>) and (<ref>) should be used when x_3=0. A schematic illustration of these wave components is given in <ref>. § CONCLUSION We have shown that it was possible to apply the methodology developed in <cit.> to the challenging problem of wave diffraction by a quarter-plane. We used this methodology, together with the analytical continuation formulae derived in <cit.> to recover the far-field asymptotics of the problem at hand with only a modest amount of algebra. We recovered all the known results on the far-field asymptotics of the wave field in two cases, the simple and the complicate case. All the far-field wave components were obtained by a direct application of the results of <cit.>. The only formulae that could not be obtained in such a way concern the approximation of the secondary diffracted waves (only occurring in the complicated case) on the plane x_3=0. Those were found to correspond to a triple crossing of singularities. We however managed to adapt our argument and recovered a far-field component, leading to new formulae that had not been obtained before. Though we have chosen, for brevity, not to deal in details with the normal derivative of the field, its asymptotics can be obtained very similarly by studying (<ref>) instead of (<ref>)|_x_3 = 0. In that case, as illustrated in <ref> (right), we would not have to deal with triple crossings, so the components can be obtained directly by applying the methodology of <cit.>. 
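Before the appendices, we record one more elementary cross-check related to the secondary-wave computation above: the inner ρ-integral was evaluated there as ∫_{λ_ψ} e^{-iρ r cos(ψ-φ)} dρ = -i/(r cos(ψ-φ)). This is the ε→0 limit of a damped half-line integral, the damping playing the role of the losses ϰ. The sketch below is our own illustration (the value of a, standing for r cos(ψ-φ) > 0, and the damping values are arbitrary) and simply confirms that limit numerically.

import numpy as np
from scipy.integrate import quad

a = 2.3                              # stands for r*cos(psi - phi) > 0

for eps in (0.5, 0.2, 0.1):          # small positive damping, mimicking the losses
    f_re = lambda r: np.exp(-eps*r) * np.cos(a*r)
    f_im = lambda r: -np.exp(-eps*r) * np.sin(a*r)
    R = 60.0 / eps                   # the integrand is negligible beyond this point
    val = quad(f_re, 0, R, limit=2000)[0] + 1j*quad(f_im, 0, R, limit=2000)[0]
    assert np.isclose(val, 1/(1j*(a - 1j*eps)), atol=1e-6)   # exact damped value

print(1/(1j*(a - 1j*0.1)), -1j/a)    # already close; the limit eps -> 0 gives -i/a
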
unsrt§ PROOF OF ADDITIVE CROSSINGThough we have concluded in <cit.> that the crossing (-k,-k) must be additive, we provide here an additional proof of this fact based solely on the analytic continuation formulae. Consider the formula (<ref>) and take ξ_1∈ H^- and ξ_2 ∈ h^-, with ξ_2 being on the left of h^-, as illustrated in <ref>. Since ξ_2 is on P, the integral along P is understood in the sense of principle value. Let us also introduce the path δ_1' (resp. δ_2'),which is a loop starting and ending at ξ_1 (resp. ξ_2) bypassing -k anticlockwise (see <ref>). Rewrite (<ref>) asW (ξ_1 , ξ_2) = J̃_1(ξ_1 , ξ_2)/4 π^2 +i γ(ξ_1 , ξ_2) γ(ξ_1 , k_2) γ(k_2 , k_1)/(ξ_1 + k_1) (ξ_2 + k_2) γ(k_2 , - ξ_1) , J̃_1(ξ_1 , ξ_2) ≡γ(ξ_1 , ξ_2) ∫_-∞ - iϵ^∞ - iϵ( ∫_Pγ(ξ_1 , - ξ_2') K (ξ_1' , ξ_2') W(ξ_1' , ξ_2') / (ξ_2' - ξ_2) dξ_2' ) dξ_1'/(ξ_1' - ξ_1)where ϵ is a positive value such that 0 < ϵ < - Im[ξ_1]. Note that we can make this deformation due to the position of P. We introduce four values of W (ξ_1 , ξ_2), they are W (ξ_1 , ξ_2), W_δ_1' (ξ_1, ξ_2), W_δ_2' (ξ_1, ξ_2), W_δ_1' δ_2' (ξ_1, ξ_2),where W (ξ_1 , ξ_2) is the “initial” value, W_δ_1' (ξ_1 , ξ_2) is obtained fromW (ξ_1 , ξ_2)by continuation along δ_1' in the ξ_1-plane,W_δ_2' (ξ_1 , ξ_2) is obtained fromW (ξ_1 , ξ_2)by continuation along δ_2' in the ξ_2-plane,W_δ_1'δ_2' (ξ_1ξ_2) is obtained fromW_δ_1' (ξ_1 , ξ_2) by continuation along δ_2' in the ξ_2-plane. To prove that the crossing ^⋆=(-k,-k) is additive for W, it is enough to show that W (ξ_1 , ξ_2) +W_δ_1' δ_2' (ξ_1 , ξ_2) =W_δ_1' (ξ_1 , ξ_2) + W_δ_2' (ξ_1 , ξ_2),as ξ_1 approaches h^-. Indeed, if (<ref>) is satisfied then W can be presented in the form (<ref>). This is due to Proposition 4.12 of <cit.> (which was first proved in Section 4.2 of <cit.>).Because it only contains usual functions, one can check directly that the second term of (<ref>) possesses the additive crossing property for ^⋆=(-k,-k) by showing that it satisfies a relationship akin to (<ref>). Thus, we only have to prove it for the term J̃_1. The integral (<ref>) cannot be understood literally, since the point ξ_2 sits exactly on P. Rewriting the inner integral in (<ref>) as an integral along the left part of h^- only, and using the Sokhotski-Plemelj formula, we can rewrite J̃_1 as:J̃_1(ξ_1 , ξ_2)≡γ(ξ_1 , ξ_2)∫_-∞ - iϵ^∞ - iϵ dξ_1' V.P.∫_P dξ_2'γ(ξ_1 , - ξ_2') K (ξ_1' , ξ_2') W(ξ_1' , ξ_2') / (ξ_1' - ξ_1)(ξ_2' - ξ_2)+π i / K (ξ_1 , ξ_2)∫_-∞ - iϵ^∞ - iϵ dξ_1' K (ξ_1' , ξ_2)[W(ξ_1' , ξ_2) - W_δ_2'(ξ_1' , ξ_2)] /ξ_1' - ξ_1 ,where V.P. denotes the principal value of a singular integral, defined in a usual way,i.e. we consider separately small arcs bypassing the point ξ_2' = ξ_2 (see <ref>). 
From this we can then obtain the following expressions for J̃_1,δ_1', J̃_1,δ_2', J̃_1,δ_1' δ_2' (the indices are assigned as they were above for W):J̃_1,δ_1' (ξ_1 , ξ_2)≡γ' (ξ_1 , ξ_2) ∫_-∞ - iϵ^∞ - iϵ dξ_1' V.P.∫_P dξ_2'γ'(ξ_1 , - ξ_2') K (ξ_1' , ξ_2') W(ξ_1' , ξ_2') / (ξ_1' - ξ_1)(ξ_2' - ξ_2)+π i / K (ξ_1 , ξ_2)∫_-∞ - iϵ^∞ - iϵ dξ_1' K (ξ_1' , ξ_2)[W(ξ_1' , ξ_2) - W_δ_2'(ξ_1' , ξ_2)] /ξ_1' - ξ_1, J̃_1,δ_2' (ξ_1 , ξ_2)≡γ(ξ_1 , ξ_2)∫_-∞ - iϵ^∞ - iϵ dξ_1' V.P.∫_P dξ_2'γ(ξ_1 , - ξ_2') K (ξ_1' , ξ_2') W(ξ_1' , ξ_2') / (ξ_1' - ξ_1)(ξ_2' - ξ_2)-π i / K (ξ_1 , ξ_2)∫_-∞ - iϵ^∞ - iϵ dξ_1' K (ξ_1' , ξ_2)[W(ξ_1' , ξ_2) - W_δ_2'(ξ_1' , ξ_2)] /ξ_1' - ξ_1, J̃_1,δ_1' δ_2' (ξ_1 , ξ_2)≡γ'(ξ_1, ξ_2) ∫_-∞ - iϵ^∞ - iϵ dξ_1' V.P.∫_P dξ_2'γ'(ξ_1 , -ξ_2') K (ξ_1' , ξ_2') W(ξ_1' , ξ_2') / (ξ_1' - ξ_1)(ξ_2' - ξ_2)-π i / K (ξ_1 , ξ_2)∫_-∞ - iϵ^∞ - iϵ dξ_1' K (ξ_1' , ξ_2)[W(ξ_1' , ξ_2) - W_δ_2'(ξ_1' , ξ_2)] /ξ_1' - ξ_1,where we have definedγ'(ξ_1 , ξ_2) ≡√(- √(k^2 - ξ_1^2) + ξ_2).The additive crossing relation J̃_1 (ξ_1 , ξ_2) + J̃_1,δ_1' δ_2' (ξ_1 , ξ_2) =J̃_1,δ_1' (ξ_1 , ξ_2) + J̃_1,δ_2' (ξ_1 , ξ_2)can now be checked directly. Note that the principal value part of J̃_1 does not change after the bypass δ_2', while the residue part does not change after the bypass δ_1'. We have therefore proven that (<ref>) is true on H^- × h^-. Then, by analytical continuation in the ξ_1 plane, it is true on h^- × h^-, as required, and the crossing is additive. § EVALUATION OF(<REF>)Let us evaluate the integral:I= ∫_ΥInt(ψ) ψ,where Int(ψ)=(cos (ψ))^1 / 2/(sin (ϑ_1 - ψ))^1 / 2sin (ψ) cos (ψ - φ)·To do this, we will consider three cases, depending on the location of φ+π/2:Case 1:0<φ+π/2<θ_1,Case 2: θ_1<φ+π/2<π/2, Case 3: π/2<φ+π/2<π.The contour Υ is shown for each case in <ref> (top row). On this figure, the dotted lines represent branch cuts of the integrand Int(ψ). In each case, we start by deforming the contour Υ as described in <ref> (middle row). We note that in the Cases 2 & 3, two poles are picked up in the process. It can be shown directly that the integrandInt(ψ)→ 0 exponentially as Im[ψ]→±∞. Hence, we can push the deformation process further and discard the horizontal parts of the contours. The resulting contours for each case are displayed in <ref> (bottom row). They consist of two infinite vertical lines with opposite orientation, and, in Cases 2 & 3, some encircled poles. The two lines are at a distance π from each other and therefore Int(ψ) takes the same values on each line. The part of the integral corresponding to these lines can hence be discarded.We are left with the following result:I ={[0in Case 1; - 2 i πRes[ Int(ψ), ψ = φ + π/2] + 2 i πRes[ Int(ψ), ψ = φ + 3 π/2] in Cases 2 & 3 ].Using the fact that in Case 2 we have sin(φ)<0 and cos(θ_1-φ)>0, while in Case 3 we have sin(φ)>0 and cos(θ_1-φ)>0, we find that Res [ Int(ψ), ψ= φ+ π/2 ]= - (- sin(φ))^1 / 2/(- cos(ϑ_1 - φ))^1 / 2 cos(φ) ={ [ i √(- sin(φ))/cos(φ)√(cos(ϑ_1 - φ)) in Case 2;- √(sin(φ))/ cos(φ)√(cos(ϑ_1 - φ)) in Case 3 ] . , Res [ Int(ψ), ψ= φ+ 3 π/2 ]= - (sin(φ))^1 / 2/(cos(ϑ_1 - φ))^1 / 2 cos(φ)={ [ - i √(- sin(φ))/cos(φ)√(cos(ϑ_1 - φ)) in Case 2; - √(sin(φ))/cos(φ)√(cos(ϑ_1 - φ)) in Case 3 ] . .Inputting this into (<ref>), we find that I ={[ 0in Cases 1 & 3; 4 π√(- sin (φ))/√(cos (ϑ_1 - φ))cos (φ) in Case 2 ]. 
,which can be summarised by I=4π√(-sin(φ))/cos(φ)√(cos(ϑ_1-φ))ℋ(-sin(φ))ℋ(cos(ϑ_1-φ)),as required, since, in Case 1, we have cos(ϑ_1-φ)<0.§ SECONDARY DIFFRACTED WAVES ASYMPTOTICS WITH THE DIFFRACTION SERIES ON A SPHERE The aim of this appendix is to find the secondary diffracted wave at x_3=0 using the technique described in <cit.>. We start by recalling the main ideas of the technique. Introduce the incident angular direction _0=(θ_0,φ_0) and an observation angular direction = (θ,φ), so that any observation direction can be described by the spherical coordinates (r,). Using this, the incident wave can be rewritten in the following way:u^ in = exp{-ikrcosϑ_i(,_0)},where cosϑ_i(,_0) = cosθcosθ_0 + sinθsinθ_0cos(φ-φ_0).One can treat the angular directions (,_0) as a pair of points on the unit sphere, and ϑ_i as the angular distance betweenand _0.The Laplace operator allows separation of variables in spherical coordinates, and as it was shown in <cit.> the solution can be found using the so-called Smyshlyaev's formula:u^t = 2e^3π i/4√(2π/kr)∫_γJ_ν(kr)e^-iπν/2g(,_0,ν)ν dν,where J_ν is the Bessel function. The contour γ is shown in <ref>, left. The function g(,_0,ν) is the Green's function on the sphere for the Laplace-Beltrami equation:(Δ̃+ ν^2 - 1/4)g = δ(-_0),where Δ̃ is the Laplace-Beltrami operator on a sphere acting on the variable . The function g satisfies the Dirichlet boundary condition on the arc θ = π/2, φ∈ [0,π/2]. The relative position of the scatterer, observation pointand the source point _0 is shown in <ref>, right. Let us define the angle ϑ_1 as in (<ref>), and the angles ϑ_2 and ϕ_1 such thattanϑ_2 = -x_2/x_1, and cosϕ_1 = k_1/k_1'. To simplify Smyshlyaev's formula function g is approximated by a diffraction series on a sphere:g(,_0) ∼∑_m g_m(,_0,ν) = ∑_m e^ik(ν)ϑ_mG_m(ν), k(ν) = ( Im[ν])ν,where the summation is taken over all difraction trajectories going from _0 to , ϑ_m is the length of the mth trajectory, and the G_m(ν) are slowly varying function of ν. The integral (<ref>) can be asymptotically evaluated. The result is given by formula (50) of <cit.>: u ≈∑_m u_m,u_m = 2 e^3π i/4√(2π k r)e^-ikrcosϑ_m - iνϑ_m g^+_m(ω,ω_0,ν)sinϑ_m,where g^+_m are terms of diffraction series (<ref>) taken in the upper half-plane of ν.To build an asymptotic for the secondary diffracted wave we need to find the term g^+_m corresponding to the wave going from thesource _0 to the edge x_2 of the scatterer, then going along the scatterer, and thenleaving the scattererat the edge x_1 and going the distance ϑ_2 to the observation point. Unfortunately, in <cit.> there is no adequate description of this wave. The reason is that the second diffraction act produces a penumbra (on the sphere),and the observation point belongs to this penumbra. So our aim is to fill this gap in <cit.> and build the corresponding expression. Consider diffraction acts on the spherical surface one-by-one, as they are shown in <ref>.The field u_1 is the primary field falling onto the edge x_2;the field u_2 is the single diffracted field going along the scatterer from x_2 to x_1;the field u_3 is the doubly diffracted wave. Build an auxiliary line (shown by dashed line)as a geodesic orthogonal to the scatterer at the edge x_1. Let us perform the computation in two steps.On the first step, we build the field u_2 on the auxiliary line. On the second step, we build thefield u_3 at the observation point ω using the Kirchhoff approximationusing the auxiliary line as the place for secondary sources. 
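Step 1 below linearises the spherical edge-diffraction coefficient T_D around grazing incidence ϕ ≈ π, using ∂_ϕ T_D(ϕ_1, ϕ)|_{ϕ=π} = cos(ϕ_1/2)/sin^2(ϕ_1/2). The following SymPy sketch is our own addition (not part of the original derivation) and checks this derivative identity numerically at a few test angles.

import sympy as sp

phi, phi1 = sp.symbols('phi phi1', positive=True)

# spherical edge diffraction coefficient used in Step 1 below
T_D = 1/sp.cos((phi1 - phi)/2) + 1/sp.cos((phi1 + phi)/2)

dTD = sp.diff(T_D, phi).subs(phi, sp.pi)
target = sp.cos(phi1/2) / sp.sin(phi1/2)**2

for test in (0.3, 0.7, 1.1):                     # arbitrary test values of phi1
    assert abs(sp.N((dTD - target).subs(phi1, test))) < 1e-12
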
Step 1.Note that the scatterer is of the Dirichlet type.Thus, the field is equal to zero on it, but grows linearly in the normal direction.Consider the fields u_2^+ and u_2^- on the sides of the scatterer (see <ref>).The angles ϕ_3 and ϕ_4 are the angles between the rays going from x_2 along the geodesics and the scatterer.According to the formulae (20)–(24) of <cit.>, u_2^+ = g_0(ϑ_1 , ν) g_0(ϑ , ν) T_D(ϕ_1 , π - ϕ_3), u_2^- = g_0(ϑ_1 , ν) g_0(ϑ , ν) T_D(ϕ_1 , ϕ_4 - π),where g_0 (θ, ν) = - exp{i νϑ + i π / 4}/2 √(2 πνsinϑ), T_D (ϕ_1, ϕ) =1/cos((ϕ_1 - ϕ)/2) +1/cos((ϕ_1 + ϕ)/2).For small ϕ_3 one can use the Taylor series with respect to ϕ_3: u_2^+ ≈- g_0(ϑ_1, ν) g_0(ϑ, ν) _ϕ. T_D(ϕ_1 , ϕ) |_ϕ = πϕ_3 =- g_0(ϑ_1, ν) g_0(ϑ, ν) cos(ϕ_1 / 2)/sin^2(ϕ_1 / 2)ϕ_3.Similarly,u_2^- ≈- g_0(ϑ_1, ν) g_0(ϑ, ν) cos(ϕ_1 / 2)/sin^2(ϕ_1 / 2)ϕ_4. Introduce the coordinate τ along the auxiliary line pointing upward (see Fig. <ref>).Note that for small τ > 0τ = ϕ_3sinΦ + O(τ^3),thus, on the auxiliary lineu_2 (τ) ≈ -g_0(ϑ_1, ν) g_0(π/2, ν) cos(ϕ_1 / 2)/sin^2(ϕ_1 / 2) |τ|. 6ptStep 2.Now let us compute the field u_3.For this, use the Kirchhoff approximationu_3 = 2 i ν∫ u_2(τ) g_0 (ϑ(τ)) dτ,where ϑ (τ) is defined in Fig. <ref>.The distance ϑ (τ)can be found using a simple formula: cosϑ(τ) = cosϑ_2cosτ .One can see that this yields for small τϑ(τ) ≈ϑ_2 + τ^2/2ϑ_2.Now we can compute the integral (<ref>): u_3 =exp{ i ν (ϑ_1 + ϑ_2 + π/2) - i π/4}/4 √(2)π^3/2ν^3/2√(sinϑ_2)/√(sinϑ_1)cosϑ_2cos (ϕ_1/2)/sin^2 (ϕ_1 /2)and this is the solution of the corresponding spherical problem, i. e. this is g_m^+. Indeed, ϑ_m = ϑ_1 + ϑ_2 + π / 2.Now let us apply formula (<ref>):u(x_1 , x_2 , 0) =i exp{ i k ρsin(ϑ_1 + ϑ_2) }√(sinϑ_2)/ 2 π k ρ√(cos(ϑ_1 + ϑ_2))√(sinϑ_1)cosϑ_2cos (ϕ_1/2)/sin^2 (ϕ_1 /2) .To compare (<ref>) with (<ref>), note that k'_1 x_1 + k_2 x_2 = k ρsin (ϑ_1 + ϑ_2), γ (k_2 , k_1) = √(2 k )√(sinϑ_1)cos (ϕ_1 / 2), k_1 - k_1' = - 2 k sinϑ_1sin^2 (ϕ_1 / 2).Using the latter together with (<ref>-<ref>), we obtain (<ref>). | http://arxiv.org/abs/2310.18031v1 | {
"authors": [
"Raphael C. Assier",
"Andrey V. Shanin",
"Andrey I. Korolkov"
],
"categories": [
"math.AP"
],
"primary_category": "math.AP",
"published": "20231027101325",
"title": "A contribution to the mathematical theory of diffraction. Part II: Recovering the far-field asymptotics of the quarter-plane problem"
} |
On the difference of mean subtree orders under edge contraction Ruoyu Wang January 14, 2024 ===============================================================empty emptyAt modern warehouses, mobile robots transport packages and drop them into collection bins/chutes based on shipping destinations grouped by, e.g., the ZIP code. System throughput, measured as the number of packages sorted per unit of time, determines the efficiency of the warehouse. This research develops a scalable, high-throughput multi-robot parcel sorting () solution, decomposing the task into two related processes, bin assignment and offline/online multi-robot path planning, and optimizing both.Bin assignment matches collection bins with package types to minimize traveling costs. Subsequently, robots are assigned to pick up and drop packages to assigned bins.Multiple highly effective bin assignment algorithms are proposed that can work with an arbitrary planning algorithm.We propose a decentralized path planning routine using only local information to route the robots over a carefully constructed directed road network for multi-robot path planning.Our decentralized planner, provably probabilistically deadlock-free, consistently delivers near-optimal results on par with some top-performing centralized planners while significantly reducing computation times by orders of magnitude.Extensive simulations show that our overall framework delivers promising performances.Upon the publication of the manuscript, source code and data will be released at <https://github.com/arc-l/mrps> § INTRODUCTION At autonomous sortation centers, a layout of which is illustrated in Fig. <ref>, robots move to assigned stations to pick up packages and then to the destination sorting bins/chutes to drop these packages. A given bin usually collects parcels going to shipping addresses grouped, e.g., by the ZIP code. After a robot successfully delivers a parcel, it is assigned another task. We denote this two-phase, dynamic planning problem as multi-robot parcel sorting ().Throughput, the average number of parcels delivered per timestep over a period of time, is a common criterion for evaluating the performance of systems.Assuming the average traveling cost for sorting one parcel is d, the throughput for n robots equals n/d.Therefore, reducing d increases the throughput, which can be achieved through improving two sub-processes.First, we strategically select the bins assigned to robots to reduce robot travel costs.This helps especially when the distribution of each type of parcel is known to be imbalanced.This forms the bin assignment problem.Following bin assignment, multi-robot path planning () must be carefully performed, which is NP-hard to optimize <cit.>.Due to the challenge, computing optimal routing solutions for large-scale problems is generally impractical.Instead, a fast sub-optimal planning algorithm with good optimality is generally preferred.Committing to the two-phase approach, we propose efficient methods for addressing each. Together, these methods result in an efficient algorithmic framework for . The main results and contributions of our work are:* Algorithmically, we model the bin assignment task as an optimal assignment problem matching bins with specific parcel types. 
Optimal/sub-optimal algorithms, including a high-performance genetic algorithm, are proposed to solve the resulting assignment problem, resulting in much-improved system throughput when combined with an off-the-shelf multi-robot path planner.* Leveraging the regularity of grid-like warehouses, a directed network is imposed that comes with multiple throughput-enhancing properties. An efficient decentralized dynamic path planner running over the network directly prioritizes based on the inherent assignment order. The solution, provably probabilistically deadlock-free, greatly speeds up the planning process while simultaneously realizing high levels of solution optimality compared to SOTA centralized algorithms.* We benchmark on multiple popular SOTA centralized and decentralized algorithms, facilitating future theoretical and algorithmic studies of the problem.Ralated Work. Multi-robot path planning () has been extensively studied. In static/one-shot settings <cit.>,given a (graph) environment and many robots, with each robot having unique start and goal vertices,the task is to find collision-free paths for routing all the robots.Solving one-shot optimally in terms of minimizing either makespan or sum-of-cost is NP-complete <cit.>.Subsequently, multi-robot parcel sorting (), an online variant of where the robot would be assigned a new goal after reaching their current goal, is also NP-hard to optimally solve.[This can be readily proven by reducing viewing as a special type of . I.e.,is a restricted .] solvers can be centralized and decentralized.Centralized solvers assume robots' paths can be calculated in a central computation node and subsequently executed without coordination error among the robots. Decentralized solvers assume each robot has significant autonomy and calculates the path independently, with necessary coordination among the robots to facilitate decision-making.Centralized solvers either reduce to other well-studied problems <cit.> or use search algorithms to search the joint space to find the solution <cit.>. Recently, polynomial time 1.x-optimal algorithms potentially suitable for large-scale parcel sorting have also been proposed <cit.>.Many researchers also looked into machine learning methods to directly learn decentralized policies for <cit.>.A common approach to solving “stitches” one-shot instances together by using a (usually complete, bounded sub-optimal) planner to recompute paths at each timestep at least one robot is assigned a new goal <cit.>.Replanning can be time-consuming as resources are wasted in redundant path computations. Han et. al<cit.> use a pre-computed database to resolve collisions scalably. However, as it only uses local information, the solution quality worsens in dense environments. Some planners plan new paths only for robots with new goals, which also lacks global oversight <cit.>. Another promising method plans paths within a finite window, leading to better scalability <cit.> at the cost of completeness. This phenomenon worsens when the planning window size is small compared to the average distance to the goal, as robots cannot predict the situation outside the planning window and might plan greedy short-term paths, resulting in unsolvable scenarios.Organization. In Section <ref>, the environment setting and problem definition are given.Section <ref> outlines the three proposed bin assignment algorithms. 
Section <ref> describes a near-optimal multi-robot planning algorithm, probabilistically deadlock-free, that can run in a decentralized manner. In Section <ref>, we provide extensive evaluation results of the proposedalgorithms. We conclude in Section <ref>. § MULTI-ROBOT PARCEL SORTING PROBLEMMulti-robot parcel sorting () consists of bin assignment and multi-robot path planning in an intelligent sorting warehouse.A warehouse is defined as agrid world 𝒢(𝒱,ℰ) with n mobile robots, n_bbins, and n_p pickup stations where robots may receive packages to be dropped off in bins. To make the problem more concrete, in this work, it is assumed that 𝒢 is composed of a grid of 3× 3 cells with a bin at each cell center, plus a one-cell wide border.The position of each pickup station and each bin is known and fixed. The warehouse sorts n_c types of parcels (n_c≤ n_b), and each type is associated with a set of shipping addresses.Denote ℛ={r_1,...,r_n} as the set of robots, ℬ={b_1,...,b_n_b} as the set of bins, 𝒫={p_1,...,p_n_p}as the set of pickup stations, and 𝒞={1,...,n_c} as the types. As a no-load robot arrives at a pickup station, a random package temporarily stored at the station is loaded on the robot. The distribution of parcel types varies for each station; it is assumed that at station p_k, parcels of type j arrive with probability m_kj, ∑_jm_kj=1.The probability is fixed a priori knowledge and can be estimated from historical statistics and continuously updated. In the bin assignment problem, each bin is associated with a parcel type. A surjectionf:ℬ→𝒞 must be computed to reduce the average traveling cost. There can also be multiple bins for a given type of parcel when n_b>n_c.Proper bin allocation reduces the traveling cost of robots, increasing the throughput. As robots continuously pick up and deliver packages, the path-planning process is online and dynamic.Over the grid-like environment, in each time step, a robot can move up, down, left, and right or wait at its current position.Collisions must be avoided, specifically: (i) two robots cannot be in the same location at the same time (meet collision), (ii) two robots cannot traverse the same edge in opposite directions at the same time (head-on collision), and(iii) a robot cannot occupy any bin vertex (it will fall into the bin).We do not consider the time of loading parcels and dropping parcels.When a robot finishes a task, it goes to the nearest station to retrieve a new parcel.The path planning sub-problem in is also known as life-long multi-agent path finding <cit.>.The general goal is to let the robots sort the parcels as quickly as possible.We consider throughput as the criteria: the average number of parcels sorted per time step. § BIN ASSIGNMENT We solve in two phases, starting with bin assignment that matches each bin with one package type such that each of the n_c types is assigned to at least one bin. If n_b > n_c, multiple bins can get the same type. In this phase, we do not consider inter-robot collisions in assigning robots with packages to proper collection bins.Case 1: n_b=n_c. Each bin has a unique type; the problem can be unnaturally cast as a linear assignment problem <cit.>.The cost of assigning bin b_i to type j denoted as w_ij isw_ij=∑_km_kjdist(p_k,b_i),wheredist(p_k,b_i) is the shortest distance from pickup station p_k to bin b_i.This can be solved optimally by applying the Hungarian algorithm <cit.> that runs in O(n_b^3) time, sufficiently fast for hundreds of bins.Case 2: n_b>n_c. 
In this case, multiple bins may be assigned for taking in the same parcel type, leading to increased sorting throughput and making the problem more complex.The problem may be modeled as a generalized assignment problem, which is NP-hard in general <cit.>. We propose an optimal (but slow) method and two fast, near-optimal methods for solving the assignment problem. The optimal algorithm is based on mixed integer programming, and sub-optimal ones are greedy and genetic algorithms <cit.>.§.§ Optimal Allocation via Mixed Integer ProgrammingWe propose an optimal mixed integer programming model for bin assignments as:Minimize1/n_p∑_j=1^c∑_k=1^n_p m_kjd_kj,subject to: d_kj=min_i{1/x_ijdist(p_k,b_i)}∑_jx_ij=1for each bin b_i∑_ix_ij≥ 1for each type j x_ij= 0 if b_i is not used to sort type j 1if b_i is used to sort type jEq. (<ref>) seeks a feasible assignment minimizing the average traveling cost required to sort the parcels, ignoring collisions.The variable d_kj in Eq. (<ref>) is the minimum traveling cost to sort parcels type j, the distance from pickup station p_k to its nearest sorting bin of type j.We let 1/x_ij=+∞ ifx_ij=0, i.e., bin i is not reachable as it is not a bin to sort type j parcels.Eq. (<ref>)and Eq. (<ref>) ensures bin-type surjectivity.We use Gurobi <cit.> as the solver; we introduce new variables y_ij=1/x_ij+εdist(p_k,b_i), where ε is a very small positive constant, to avoid divide-by-zero issues.§.§ Greedy Bin AllocationThe above optimal solution scales poorly. Toward speeding up computation, we outline a fast greedy method (Algo. <ref>). First, n_c bins are selected by solving a min-cost maximum matching problem (Lines 2-4).Each assigned bin of type c has cost w_c.Then, we choose bin b' with the highest cost w_b, assuming this bin is for sorting parcels of type c_b.We can choose an unassigned bin to share b''s load by iterating over all unassigned bins. For each bin b”, we calculate the new traveling cost for sorting type c if adding this bin b” for sorting c_b (Line 9).Let 𝐁=𝐛_𝐜+b” be the bins used to sort type c_b, the cost is given by𝒞_b=∑_k=1^n_pm_kc_bd_kc_b, where d_kc_b=min_b_i∈𝐁dist(p_k,b_i). The bin with the smallest new traveling cost is assigned for type c.Random assignments are made if the smallest traveling cost exceeds the original cost. The process repeats until all bins are assigned.§.§ Bin Allocation via Genetic AlgorithmGenetic algorithms (GA) <cit.> can be effective in solving NP-hard optimization problems <cit.> and turns out to work nicely on . In GA, a chromosome represents a potential solution, corresponding to bin assignment in our case. Specifically, a chromosome is an integer array of length n_b. The genome at index i is an integer in [1,n_c] representing the type of the bin b_i is assigned to.A valid chromosome should contain each type (n_c in total) at least once. The fitness function is defined as the multiplicative inverse of the average shortest distance of sorting each parcel: F(C)=(∑_k=1^n_p∑_j=1^n_cm_kj d_kj)^-1,where d_kj is the shortest traveling distance required from pickup station p_k to its nearest bin of type j,d_kj=min_b_i.type=jdist(p_k,b_i). The initial population of is generated randomly (80%) and by a greedy algorithm (20%).A new population is generated by selection and crossover (partially mapped crossover [18]), and mutation. At least one best individual is copied without changes to the new population (elitism selection), preventing the loss of the current best solution. Rest follows standard GA procedures. 
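As a concrete illustration of the ingredients above, the sketch below is a toy example of our own (the station/bin layout, the distances and the parcel-type distribution are made-up data, and types are indexed from zero): it builds the cost matrix w_ij = ∑_k m_kj dist(p_k, b_i), solves the one-bin-per-type case optimally with the Hungarian method, and evaluates the GA fitness F(C) of an arbitrary valid chromosome.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n_p, n_b, n_c = 3, 6, 4                        # stations, bins, parcel types (toy sizes)
dist = rng.integers(2, 30, size=(n_p, n_b))    # dist[k, i] = shortest distance from p_k to b_i
m = rng.random((n_p, n_c))
m /= m.sum(axis=1, keepdims=True)              # m[k, j] = probability of type j at station p_k

# Case n_b = n_c: use the first n_c bins to illustrate the square assignment problem
w = dist[:, :n_c].T @ m                        # w[i, j] = sum_k m[k, j] * dist[k, i]
rows, cols = linear_sum_assignment(w)
print("one-bin-per-type assignment (bin -> type):", dict(zip(rows.tolist(), cols.tolist())))

def fitness(chromosome):
    # GA fitness: inverse of the expected travel distance to the nearest bin of each type
    total = 0.0
    for k in range(n_p):
        for j in range(n_c):
            bins_j = [i for i, t in enumerate(chromosome) if t == j]
            total += m[k, j] * min(dist[k, i] for i in bins_j)
    return 1.0 / total

chromosome = [0, 1, 2, 3, 0, 2]                # every type appears at least once
print("fitness:", fitness(chromosome))
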
We note that the bin assignment process is semi-static in that it remains fixed for a given batch of packages to be sorted but can be updated for each batch. § DISTRIBUTED MULTI-ROBOT PATH PLANNING After bin assignment, robots must be routed to pick up and deliver packages. We present a decentralized method to accomplish this, requiring only that the observable area of a robot is a 3× 3 square centered at its current location; it is assumed that a robot can detect potential collisions and communicate with the robots located within this area. §.§ Directed Road Network Design By Robbins' Theorem <cit.>, an undirected graph G(V,E) can be oriented to yield a strongly connected directed graph if and only if G is 2-edge-connected, meaning the removal of any single edge from G does not disconnect it. This fundamental insight implies the existence of a strong graph orientation within the context of our warehouse environment. To facilitate decentralized robot routing, we first impose an orientation on the underlying grid-like environment, transforming it into a strongly connected directed graph that largely preserves shortest paths. Employing linear-time depth-first search <cit.>, a strong graph orientation can be identified. However, not all orientations are good. Determining the optimal strong graph orientation, which minimizes the average pairwise distance, is known to be NP-hard <cit.>. For scenarios resembling city-street grid maps, the optimal graph orientation is investigated in <cit.>. We adopt a strategy similar to <cit.> to convert the graph into a digraph in which the parallel “streets” alternate in direction (see Fig. <ref>). The directed network provides several crucial advantages. Each edge of the directed network only permits movement in a single direction. Although solving the path planning problem optimally remains NP-hard on digraphs <cit.>, the orientation eliminates head-on collisions entirely, which greatly reduces the computational burden of conflict resolution and shrinks the size of the search space. Furthermore, our design, which embeds pairs of adjacent "highways" with opposing directions (one of numerous possible configurations), does not significantly compromise path optimality. Consider any pair of vertices u and v, and let d_u(u,v) denote the shortest distance between them in the undirected graph and d_d(u,v) the shortest directed distance. Then d_d(u,v) ≤ d_u(u,v)+5. Our graph orientation method is not limited to warehouse maps; its applicability to a given map can be verified using Robbins' Theorem <cit.>. §.§ Prioritized Recursive Yielding Planner We present a decentralized planner over the directed network in which a robot's observation area is limited to a 3× 3 square around it. The robot can detect possible collisions and communicate with other robots in its observation area. In our case, we only need to address potential meet collisions. Similar to <cit.>, our algorithm first uses A* to plan initial paths to goals, and these paths may still contain conflicts. Robots follow the paths and resolve the conflicts locally by letting robots yield to higher-priority robots. We call the planner the prioritized recursive yielding planner (Algo. <ref>).
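To make the first stage concrete, the sketch below shows the single-robot A* search over the directed network that each robot runs whenever it receives a new goal. It is an illustrative fragment rather than the authors' implementation; it assumes a neighbors(v) function that returns only the cells reachable from v along the directed (alternating-street) edges, together with an admissible heuristic h such as the Manhattan distance to the goal.

import heapq

def astar(start, goal, neighbors, h):
    frontier = [(h(start), 0, start)]
    parent, best_g = {start: None}, {start: 0}
    while frontier:
        f, g, v = heapq.heappop(frontier)
        if v == goal:                          # reconstruct the path back to start
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for w in neighbors(v):                 # only edges allowed by the orientation
            if g + 1 < best_g.get(w, float("inf")):
                best_g[w], parent[w] = g + 1, v
                heapq.heappush(frontier, (g + 1 + h(w), g + 1, w))
    return None                                # goal unreachable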
The planner generates initial paths using A*. Robots then execute the paths and resolve conflicts locally as they arise. If a robot reaches its current goal, it receives a new goal and replans its path using A*. Specifically, if it arrives at a station p_k, a new parcel is randomly generated according to the given distribution 𝐦_𝐤. The robot then plans a path to the nearest bin of the corresponding type. If the robot delivers the parcel, it plans a path to the nearest station to get a new parcel. To resolve potential collisions (Algo. <ref>), robots first try to figure out whether they form a cycle. To do so, each robot r_i sends a message to its neighbor (if there is one) in the direction it is going, and the message propagates. If robot r_i receives its message back, it is in a cycle. Robots that form a cycle have the highest priorities and will move to their next positions (see Fig. <ref>(a)). If a robot r_k has no neighbor in the direction it is going, r_k checks with its diagonal neighbors to see if there is a potential meet collision. For example, in Fig. <ref>(b), robot 1 finds a possible meet collision with robot 3; they communicate to decide who should go first, based on the following policy: if a robot has not delivered its parcel, it has higher priority; otherwise, if a robot has more steps remaining to its current goal location, it has higher priority. If a tie remains, priority is assigned randomly. In Fig. <ref>(b), suppose robot 1 has higher priority than robot 3; robot 3 yields to robot 1. Robot 1 sends a message informing robot 2 that it will move to the next location, while robot 3 sends a message to tell robot 4 that it has to wait. For a robot r_k, if the robot located at its next position is not going to wait at the next step, r_k communicates with the conflicting neighbor and they decide who should go first following the above policy (see Fig. <ref>(c)). §.§ Path Diversification In our planner, whenever a meet collision occurs, one of the robots must wait, leading to increased cost. Therefore, the solution quality is strongly correlated with the number of meet collisions among the planned paths. Meet collisions can be reduced using a randomized diversification heuristic <cit.>, in which a random path is selected among all candidate paths of the same shortest length. Alternatively, we may let a robot access partial global information, in which case focal search <cit.> may be applied, using the number of conflicts as a heuristic to reduce the number of conflicts that need to be resolved. We refer to the planner with path diversification as the enhanced prioritized recursive yielding planner. §.§ Probabilistic Deadlock Prevention Guarantees Both prioritized planning and decentralized planning may suffer from deadlocks. A deadlock occurs when a robot cannot reach its designated destination within finite time. When robots are continuously assigned new tasks, as is the case in our setting, the only potential deadlock arises if a robot attempts to move to a vertex on a cycle that is fully occupied by robots (Fig. <ref>). Robots moving on a cycle have the highest priority; if such a cyclic pattern persisted indefinitely, robots needing to pass through vertices on the cycle could never make progress. However, such an infinite cycle can only form if robots keep moving between a fixed pair consisting of a pickup station and its matched delivery bin. This should never happen, because packages that always travel from one station to one fixed bin would not require sorting in the first place.
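The cycle check underlying this argument can be rendered concretely. The fragment below is an illustrative, centralized version of the distributed message-passing test (data-structure names are ours): starting from one robot, it follows the chain of intended next cells through their current occupants and reports a cycle only if the chain comes back to the starting robot.

def in_cycle(robot, next_cell, occupant):
    # next_cell[r]: the cell robot r intends to move into next.
    # occupant[c]: the robot currently standing at cell c, or None if c is free.
    seen = set()
    r = robot
    while True:
        blocker = occupant.get(next_cell[r])
        if blocker is None:        # the chain reaches a free cell: no blocking cycle
            return False
        if blocker == robot:       # the chain closes on the starting robot: cycle
            return True
        if blocker in seen:        # the chain closes elsewhere: this robot only trails a cycle
            return False
        seen.add(blocker)
        r = blocker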
We have the following conclusion. The prioritized recursive yielding planner and its enhanced variant are probabilistically deadlock-free. § EVALUATIONS We evaluate the proposed bin assignment algorithms and the overall performance of our solvers with various planners, including a bounded-suboptimal centralized planner (w=1.5) <cit.> with horizon cut techniques <cit.>, the planner of <cit.>, a discrete version of <cit.>, and the two proposed decentralized planners. All experiments are performed on an Intel Core i7-6900K CPU at 3.2GHz. Unless otherwise stated, each data point averages over 20 runs on randomly generated instances. Assignment algorithms are implemented in Cython, and planners are implemented in C++. §.§ Comparison of Bin Assignment Algorithms We evaluate the different assignment algorithms on the sorting warehouse setup shown in Fig. <ref>, which has n_b = 36 bins and n_p = 12 pickup stations. The number of types n_c varies (but cannot exceed 36). The probabilities of the parcel types at each station are drawn as a random vector that adds up to 1. We use Gurobi <cit.> to solve the mixed integer programming (MIP) model with a 5-minute time limit. For the genetic algorithm, the maximum number of iterations is set to 800, the population size to 100, and the mutation rate to 0.08. The results are presented in Fig. <ref>. Compared to random assignment, the greedy, genetic, and MIP algorithms reduce the average distance by about 10%, 25%, and 30%, respectively. MIP, while optimal, is slow. The genetic algorithm (GA) runs much faster and achieves nearly identical optimality. The computation time gap between MIP and GA can be expected to become even bigger as the number of bins increases. The results confirm that using multiple bins per type can reduce the average distance. When the number of bins equals the number of package types (n_b = n_c = 36), the assignment is optimal with the Hungarian algorithm, with faster computation time than greedy, GA, and MIP. §.§ Impact of Assignment Algorithms on Planners Fixing the number of bins at n_b = 36, the number of pickup stations at n_p = 12, and the number of package types at n_c = 20, and letting the number of robots vary, we examine the impact of combining the different bin assignment algorithms with different planners. The performance of the planners under the different bin assignment strategies is compared; one additional baseline is a variant of the centralized planner that uses highway heuristics and treats the graph as a directed graph. Where applicable, the planning window and the execution window are set to 5. The throughput results are shown in Fig. <ref>. We omit the computation time comparison, which is not the focus of this experiment. We observe that the GA, MIP, and greedy approaches all improve the throughput of the centralized planners by about 10%-20% compared to random assignment. GA and MIP also clearly outperform greedy assignment. Note that the throughput equals n/d, where d is the average traveling cost required to deliver and sort a parcel. As the centralized planners are bounded suboptimal, their solution costs are close to the traveling distance. Combined with the bin assignment algorithms, their throughput scales nearly linearly with the number of robots within this range (but we will see that they are more computationally demanding than the decentralized planning methods).
The decentralized planners resolve conflicts locally, leading to improved throughput at lower robot densities. At higher robot densities, the large number of conflicts to be resolved causes d to grow with the number of robots as well. Consequently, these algorithms exhibit a peak throughput, attained at some intermediate number of robots. Compared with the other decentralized baselines, the two proposed planners achieve much better throughput. §.§ Comparisons of Solvers on Larger Maps Lastly, we present evaluation results (Fig. <ref>-Fig. <ref>) on the full solvers using GA for bin assignment. Specifically, we evaluate scalability, computation time, and the achieved throughput. For evaluating scalability, three maps are used. We let each planner run 500 steps for each setting and report the computation time and the average throughput. For the smallest map (Fig. <ref>), the centralized planners have the best throughput since they plan the paths and resolve the conflicts centrally. On the other hand, these methods take the most time, rendering them impractical as the map size and the number of robots increase (Fig. <ref> and Fig. <ref>). For example, on the largest map (Fig. <ref>) with 300 robots, it takes over 0.5 seconds to compute a solution for routing each robot, which is prohibitively expensive for online applications. The decentralized planners are much more scalable, making them more suitable for online applications such as parcel sorting. While the decentralized baselines have fairly sub-optimal throughput, the two proposed planners deliver throughput directly comparable to the centralized methods while remaining far more scalable. The enhanced planner uses information about the planned paths of other robots and applies focal search to reduce the number of conflicts to resolve; it therefore achieves better throughput in dense cases than the basic planner. All in all, the two proposed planners are the most suitable for solving such parcel-sorting tasks. § CONCLUSION In this research, we tackle multi-robot parcel sorting, partitioning it into two phases: bin assignment and multi-robot routing. We propose several effective algorithms for bin assignment that significantly reduce the average traveling distance, leading to increased throughput. These algorithms can be combined with any multi-robot path planning routine. For the multi-robot routing phase, we propose a prioritized, probabilistically deadlock-free algorithm over a directed network. The decentralized approach, integrated into a complete solver, achieves excellent overall performance in terms of throughput and scalability compared to other advanced solvers. | http://arxiv.org/abs/2310.17753v1 | {
"authors": [
"Teng Guo",
"Jingjin Yu"
],
"categories": [
"cs.RO"
],
"primary_category": "cs.RO",
"published": "20231026194749",
"title": "Bin Assignment and Decentralized Path Planning for Multi-Robot Parcel Sorting"
} |
Monic abelian trace-one cubics]On monic abelian trace-one cubic polynomials0.5 Department of MathematicsUniversity of British ColumbiaVancouver, BC Canada [email protected] 0.5 Department of MathematicsPrinceton UniversityPrinceton, NJ [email protected] We compute the asymptotic numberof monic trace-one integral polynomials withGalois group C_3 and bounded height.For such polynomialswe compute a height functioncoming from toric geometryand introduce a parametrization usingthe quadratic cyclotomic field (√(-3)).We also give a formula forthe number of polynomials of the formt^3 -t^2 + at + b ∈[t] with Galois group C_3 for a fixed integer a.[ Andrew O'Desky October 15, 2023 ====================§ INTRODUCTIONLet F denote the set of polynomialsof the formt^3 -t^2 + at + b ∈[t]which have Galois group C_3,the cyclic group of order three.The primary aim of this paper is to provethe following asymptotic formula.Let ε >0.The number of polynomialst^3 -t^2 + at + b ∈ Fwith max(|a|^1/2,|b|^1/3)≤ His equal to CH^2 log H + ( Clog√(3)+D -π/3√(3))H^2 + O_ε(H^1+ε)as H →∞, where C= 4π^2/81∏_q≡23(1-1/q^2) ∏_p≡13(1-3/p^2+2/p^3) and D/C =2 γ + log(2π)- 3 log(Γ(1/3)/Γ(2/3)) + 9/8log 3 + 9/4∑_q≡23log q/q^2-1 + 27/4∑_p≡13(p+1)log p/p^3-3p+2 .This may be qualitatively compared with<cit.>which asserts that the number N(H)of monic integral cubic polynomialst^3 +at^2 + bt + cwith Galois group C_3and max(|a|,|b|,|c|) ≤ Hsatisfies2H ≤ N(H) ≪ H (log H)^2,however their height function is inequivalentto the height inTheorem <ref>and there is no trace-one condition. We also prove a formula of sortsfor the number of f ∈ F withspecified nonconstant coefficients.For any H ≥ 1let E_H ⊂^2 be the ellipse defined by E_H :x^2+y^2+xy-x-y = 13(H^2-1). If t^3 -t^2 + at + b ∈ F then a ≤ 0.Fix a ∈_≤ 0.The number of polynomials of the formt^3 -t^2 + at + b ∈ Ffor any b ∈ is equal to 1/2∑_d|(1-3a) 3^ω(P_1(d)) (-1)^Ω(P_2(d)) -1/6#E_√(1-3a)() whereP_j(d) denotes the largest divisor of donly divisible by primes ≡ j3,and ω(n) (resp. Ω(n))denotes the number of prime factors of a positive integer ncounted without (resp. with) multiplicity.§.§ An integral Diophantine problemTo prove these theorems we relatethe polynomial counting problem to anintegral Diophantine problemon a certain singular toric surface Sand then solve the Diophantine problem.Let Å^3 = Spec [X,Y,Z]and ℙ_2 = (Å^3)= Proj [X,Y,Z]be equipped with the regular action of C_3.Consider the quotient surface S = _2/C_3. Let T ⊂ S denote the image of the unit groupin the group algebra Å^3 of C_3under Å^3-{0}→_2 → S.One can show that T is a rank-two torus and S isa toric compactification of T. The set of rational points S() is thus equipped witha family of toric height functionsH(-,s)constructed in <cit.>,where s is a parameter in the complexified Picard groupPic(S) ⊗.The surface S has Picard rank one <cit.>,so we may regard s asa complex number where s = 3corresponds to the ample generator.Let D_0 be the divisor{ε X+Y+Z = 0}⊂ S.A rational point P of S - D_0 isD_0-integralif every regular function in𝒪(S_ - D_0)= [X/ε,Y/ε]^C_3is -valued on P. 
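For very small H, the counts in the theorems above can be checked directly by brute force, using the standard criterion that a monic cubic with rational coefficients has Galois group C_3 exactly when it is irreducible over ℚ and its discriminant is a nonzero perfect square. The following sketch (assuming SymPy, and only feasible for tiny H) is meant purely as a sanity check; it enumerates a ≤ 0, |a| ≤ H^2 and |b| ≤ H^3, matching the height condition max(|a|^1/2,|b|^1/3) ≤ H.

from sympy import symbols, Poly, discriminant
from math import isqrt

t = symbols('t')

def is_C3(a, b):
    # t^3 - t^2 + a t + b is abelian (Galois group C_3) iff it is irreducible over Q
    # and its discriminant is a positive perfect square.
    f = Poly(t**3 - t**2 + a*t + b, t)
    if not f.is_irreducible:
        return False
    d = int(discriminant(f))
    return d > 0 and isqrt(d) ** 2 == d

def count_C3(H):
    # a <= 0 may be assumed, by the theorem above
    return sum(is_C3(a, b)
               for a in range(-H * H, 1)
               for b in range(-H ** 3, H ** 3 + 1))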
Our third result is an explicit formulafor the height zeta function for D_0-integralrational points on the torus T ⊂ S.∑_P ∈ T(ℚ), D_0-integral H(P,s)^-1 = (1-1/3^z)^2 ζ_(√(-3))(z)^2 ∏_q≡23(1-1/q^2z) ∏_p≡13(1-3/p^2z+2/p^3z) where z = s2 andζ_(√(-3)) is the Dedekind zeta functionof (√(-3)).This height zeta functioncan be meromorphically continued to the half-plane Re(s)>1and its only pole in this region is at s = 2 with order 2.If n ∈_≥ 1 is not divisible by 3,then the number of D_0-integral rational points on Twith toric height √(n)is equal to ∑_d|n 3^ω(P_1(d)) (-1)^Ω(P_2(d))§.§ Relation between the problemsIn <cit.>it was shown that the torus Tis the moduli space for C_3-algebraswith a given trace-one normal element.In particular, T() ≅ {(K/C_3-algebra, xtrace-one normal)} where a C_3-algebra K/ isa -algebra equipped with an action of C_3for which there is a C_3-linear -algebra isomorphismfrom K to either a cubic abelian number field orthe split algebra ^3,and an element x ∈ K is normal ifits Galois conjugatesare linearly independent over .Using this bijection we consider the function T() ⟶{t^3 -t^2 + at + b ∈[t]} taking a rational point (K/,x)to the characteristic polynomial of x.We prove that the image of this function isthe subset of polynomialswhich either have Galois group C_3or split into three linear factors over with at most two being the same,and if f is such a polynomial, thenthe number of rational points of Twith characteristic polynomial f is given by w_f=1if f has a double root,2otherwise. Moreover we show thata rational point P of T is D_0-integralif and only ifthe associated characteristic polynomialt^3 -t^2 + at + b is integral,and we also prove that H(P,1) = √(1-3a) for D_0-integral points.This toric height is equivalent to the height used inTheorem <ref>.§.§ Further remarksThe restriction to trace-one normal elementswas made out of conveniencein <cit.>and should not be essential for the method.In place of S, there is a three-foldwith a similar constructionand an open subset which parametrizesall normal elements of C_3-algebras.In forthcoming work <cit.>the method presented here will be extendedto count monic integral polynomialswith bounded height and any given abelian Galois group.§.§ AcknowledgementsA.O. is very grateful toTimothy Browning,Vesselin Dimitrov,Jef Laga,Peter Sarnak,Sameera Vemulapalli,Victor Wang,and Shou-Wu Zhangfor helpful discussionsand comments on an earlier draft.A.O. would also like to thank Alexandra Pevznerfor pointing out the reference <cit.>.A.O. was supported by NSF grant DMS-2103361. § THE ORBIT PARAMETRIZATIONIn this section we recall some facts from <cit.>and describe the orbit parametrization. Let σ be a generator of C_3.Let Δ =3 X Y Z-X^3-Y^3-Z^3, the determinant ofmultiplication by an elementXe+Yσ+Zσ^2 of the group algebra.We set 𝒢 = ℙ_2[Δ^-1] and T = 𝒢/C_3. Then 𝒢 is an algebraic torus over which may be identifiedwith the units of the group algebra of C_3 with augmentation one,i.e. 𝒢 = {(x,y,z) ∈Å^3 :Δ(x,y,z) ∈_mandx+y+z = 1}. Since C_3 is abelian, the homogeneous space T=𝒢/C_3is itself an algebraic torus over .The action of 𝒢 on the regular representationinduces an action of T on S extendingthe regular action of T on itself.Let Å^2= Spec ℚ[X/ε,Y/ε]denote the open affine plane in _2 wherethe augmentation map ε=X+Y+Z is nonvanishing.A rational (or adelic) point P of Å^2/C_3 isD_0-integralif every regular function in𝒪(Å_^2/C_3)= [X/ε,Y/ε]^C_3is -valued (resp. ×-valued) on P. 
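Before turning to the moduli-space description, we note that the divisor sum appearing in the corollary of the introduction is straightforward to evaluate numerically. The short sketch below (SymPy assumed, helper names ours) computes it for any n prime to 3; for instance it returns 0 for n = 2 and 4 for n = 7, consistent with the Euler product in the theorem.

from sympy import factorint, divisors

def prime_part(d, j):
    # largest divisor of d supported on primes congruent to j mod 3
    out = 1
    for p, e in factorint(d).items():
        if p % 3 == j:
            out *= p ** e
    return out

def d_n(n):
    # number of D_0-integral rational points of T with toric height sqrt(n), for 3 not dividing n
    assert n % 3 != 0
    total = 0
    for d in divisors(n):
        p1, p2 = prime_part(d, 1), prime_part(d, 2)
        omega = len(factorint(p1))            # distinct prime factors congruent to 1 mod 3
        Omega = sum(factorint(p2).values())   # prime factors with multiplicity, congruent to 2 mod 3
        total += 3 ** omega * (-1) ** Omega
    return total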
§.§ T as a moduli spaceLet K/ℚ be a separable ℚ-algebraequipped with the action ofa finite group G of ℚ-algebra automorphisms of K.We say that K/ℚ regarded with its G-action is a(Galois) G-algebra ifthe subset of K fixed by G is equal to ℚ.Geometrically, a G-algebra is the ring of functionson a principal G-bundle,equipped with its natural G-action. Since we regard the G-action as part of the data of a G-algebra,a G-algebrais not generally determined by the isolated data ofthe underlying -algebra K andthe abstract finite group G.The G-action on a G-algebramay be twisted by any outer automorphism of G,and the twisted G-algebra will not generallybe isomorphic to the original G-algebra. Two pairs (K/ℚ,x),(K'/ℚ,x') are regarded as equivalent ifthere is a G-equivariant ℚ-algebra isomorphismK → K' sending x to x'.We make use of the following modular interpretation for T.The homogeneous variety T is the moduli space forC_3-algebras with a given trace-one normal element.In particular,there is a bijectionbetween rational points of Tand equivalence classes ofC_3-algebras K/ℚ equipped witha trace one normal element x ∈ K.Let K be a cubic abelian number field.Then K, equipped with its canonical Galois action,is a C_3-algebra.The twist K' of the C_3-algebra K bythe outer automorphism g ↦ g^-1 of C_3(with twisted action g ∗ x = g^-1x)is not isomorphic to K as a C_3-algebra.[In terms of Galois cohomology,the non-cohomologous 1-cocycles in H^1(,C_3)corresponding to the C_3-algebras K and K'have the same image under the canonical mapH^1(,C_3) → H^1(,S_3) because the outer automorphismof C_3 is realized by S_3-conjugation.] Let K_spl = ^3, the split cubic algebra.Then C_3⊂ S_3 = Aut_-alg(K_spl)and K_spl, equipped with its canonical C_3-action,is a C_3-algebra.Any transposition gives an isomorphism of C_3-algebrasfrom K_spl to its twist K_spl'. An element x of the split C_3-algebra K_splis normalif and only ifx either has distinct coordinates orexactly two identical coordinates. The pairs (K_spl,x) and (K_spl',x)are equivalentif and only ifx has exactly two identical coordinates(swapping the identical coordinatesgives the required isomorphism);in particular, if x has distinct coordinatesthen (K_spl,x) and (K_spl',x)determine different rational points of 𝒢/C_3,even though K_spl and K_spl'are isomorphic as C_3-algebras.§.§ T as a torusHere we describe some of the toric data associatedwith T which will be needed later.For more details see e.g. <cit.>.Let E = ℚ(ζ)where ζ is a primitive cube root of unity,and let γ denote the generator ofthe Galois group Γ of E over .Let Pl_E denote the set of places of E.The group of units U in the group algebrais a three-dimensional algebraic torus defined over ℚwhich canonically factors as U = _m ×𝒢. 
The characters and cocharacters of T may be described as follows.The larger torus U is diagonalized over Eby the three elementary idempotents in the group algebra: v_0' =13(1+σ+σ^2), v_1' =13(1+ζ^2 σ+ζσ^2),v_2' =13(1+ζσ+ζ^2σ^2).Each idempotent is associated witha characterχ_iU(E) → E^×for i = 0,1,2determined by u v_i' = χ_i(u)v_i',corresponding to the action of U onthe ith irreducible representation of C_3.The character χ_0 is trivial on 𝒢, sothe lattice of characters of 𝒢_E is generatedby χ_1 and χ_2.We denote this lattice by M_E'and let N_E' denote the dual lattice to M_E'.To describe the fansit is more symmetric to work with the isomorphic imageof N_E' in the quotient of C_3 by the line spanned byv_0'+v_1'+v_2', and we write v_i for the image of v_i' (i = 0,1,2).The Galois group Γ of E acts on M_E'by swapping χ_1 and χ_2,and on N_E' via the dual action. To pass from 𝒢 to T, consider the element ω=13(2v_1+v_2) ∈ N_E,' and set N_E = N_E' + ωand M_E = N_E^∨= {m ∈ M_E,' :m(n) ∈ for all n ∈ N_E}. The character lattice (resp. cocharacter lattice)of T_E is M_E (resp. N_E).The cocharacters ω and γω span N_Eso the dual basis (a',b') =(ω,γω)^∨spans M_E. The fan Σ of S is the same as the fan for _2and has three generators Σ(1) = {v_0,v_1,v_2}. We also make use of the following formulas forthe characters of 𝒢.Let (v_1^∨,v_2^∨) ∈ M_E' bethe dual basis to (v_1,v_2) ∈ N_E'.The characters of 𝒢 associated tov_1^∨ and v_2^∨ are given on E-points of U by χ^v_1^∨(uv_0' + vv_1'+wv_2') = v/uandχ^v_2^∨(uv_0' + vv_1'+wv_2') = w/u.This explicit description of the character latticesleads to an (unexpected) isomorphism between𝒢 and its quotient T = 𝒢/C_3.On character latticesit is given by the Γ-equivariant isomorphism N_E=⟨ω,γω⟩→ N_E'=⟨ v_1,v_2⟩ taking ω to v_1 and γω to v_2. This implies that the multiplicative group ofthe cyclotomic field (√(-3))naturally parametrizes cubic trace-one polynomials.The tori T and 𝒢 = R^E__mare isomorphic as algebraic groups over .Every rational point (K/,x) of T therebydetermines an element of (√(-3))^×which is canonically determined up to the action ofAut(𝒢).The toric height H(f)√(1-3a)on T() is identified withthe square-root of the norm on (√(-3))^×.Let ζ be a primitive cube root of unity.If u+vζ∈(√(-3))^×has norm N and trace T,then the characteristic polynomial ofthe corresponding rational point (K/,x) isf=t^3-t^2+13(1 - N)t+127(1+N(T-3)) ∈[t]. Such a polynomial either has Galois group C_3 orsplits into three linear factors over ,with at most two linear factors being the same.Conversely, a monic trace-one polynomial f=t^3-t^2+at+b ∈[t]which either has Galois group C_3 orsplits into three linear factors over ,with at most two linear factors being the same,can be expressed in this wayfor precisely two rational points of Tif f has no repeated roots, orfor precisely one rational point of Tif f has a double root which is not a triple root.The elements u+vζ∈(√(-3))^×corresponding to fwill be the roots of the quadratic polynomial g = t^2 - (3-1-27b/1-3a)t + 1-3a ∈[t]. The polynomial f will have integral coefficientsif and only if u^2+v^2-uv ∈ 1+3and (u^2+v^2-uv)(3-2u+v) ∈ 1+27 . The character lattice of a torus over as a Galois representation determinesthe torus as an algebraic group up to isomorphism,cf. e.g. <cit.>.Equation (<ref>)below identifies the toric heightwith the square-root of the norm.The formulas for a and b follow from expressinga and b in terms of characters of Tand then using (<ref>)to reexpress these using characters on 𝒢. 
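As a small numerical illustration of this parametrization, the sketch below computes, for a point u+vζ with u,v ∈ ℤ, the norm N and trace T, checks the two integrality congruences stated above, and reads off the linear coefficient a = (1-N)/3 together with the toric height √(1-3a) = √N. It is an illustrative fragment with names of our choosing, not the authors' code.

def point_data(u, v):
    # u + v*zeta, with zeta a primitive cube root of unity
    N = u * u + v * v - u * v                              # norm of u + v*zeta
    T = 2 * u - v                                          # trace of u + v*zeta
    integral = (N % 3 == 1) and (N * (3 - T) % 27 == 1)    # congruences from the proposition
    a = (1 - N) // 3 if N % 3 == 1 else None               # linear coefficient of the cubic
    return N, T, a, integral

# Example: u, v = 1, 3 gives N = 7 and T = -1; both congruences hold, a = -2,
# and the corresponding integral cubic has toric height sqrt(1 - 3a) = sqrt(7).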
§ TORIC HEIGHTSIn this section we show that the toric height H(-,1)of a D_0-integral point (K,x) of Tin the sense of <cit.>is equal to H(f) = √(1-3a)where f is the characteristic polynomial of x. Let w be a place of E.For any x ∈ T(E_w)the functionχ↦ord_w(χ(x))on characters χ∈ X^∗(T_E_w)determines an element of X_∗(T_E_w)_. Letn_w(x) ∈ X_∗(T_E)_ be the cocharacter corresponding to this elementunder the canonical isomorphismX_∗(T_E_w)_≅ X_∗(T_E)_induced by base change of the split torus T_Ealong E → E_w.For any place v of _v let K_v denotethe maximal compact subgroup of T(_v).Evaluating characters of T_E on _v-pointsgives a canonical bijection T(_v)=Hom_Γ(w/v)(M_E,E_w^×) where w is any place of E over v.When v is finite, K_v may be identified withthe subset of O_w^×-valued homomorphismsK_v = Hom_Γ(w/v)(M_E,O_w^×) ⊂T(_v). Let w be a place of E lying over a place v of .There is an exact sequence 1 ⟶ K_v⟶ T(_v)n_w X_∗(T_E)_^Γ(w/v). If w is infinite then n_w is surjective, andif w is finite then the image of n_w isthe lattice X_∗(T_E)^Γ(w/v). <cit.> nearly proves the claimbut at the ramified place w over v = 3only ensures that the image of n_wis a finite index subgroup ofX_∗(T_E)^Γ(w/v).To see that the image of n_w is all ofX_∗(T_E)^Γ(w/v)recall that the cocharacter latticesof T_E and 𝒢_E are isomorphicas Galois representations via (<ref>). Since T(_v) and K_v are determined bythe dual modules M_E and M_E',it suffices to show that n_w is surjectivewhen defined relative to 𝒢;in more detail, there is a diagram 1[r]Hom_Γ(w/v)(M_E',O_w^×) [r][d]Hom_Γ(w/v)(M_E',E_w^×) [r,"n_w"][d]Hom_Γ(w/v)(M_E',) [d] 1[r]Hom_Γ(w/v)(M_E,O_w^×) [r]Hom_Γ(w/v)(M_E,E_w^×) [r,"n_w"]Hom_Γ(w/v)(M_E,) where the vertical arrows are isomorphisms of abelian groupsinduced by the transpose ofthe Γ-isomorphism N_E → N_E', and the homomorphisms n_wcorrespond to post-composing with ord_w.The diagram commutesso surjectivity of the upper n_w implies surjectivity ofthe lower n_w. To see that the upper n_w is surjective,observe that the upper row of the diagramis the Γ(w/v)-invariants ofthe short exact sequence of Γ(w/v)-modules 1[r]Hom(M_E',O_w^×) [r]Hom(M_E',E_w^×) [r]Hom(M_E',) [r]0 (here the exactness on the right follows fromExt^1(M_E',O_w^×) = 0 since M_E' is free);thus the upper row of the diagramcontinues to the first cohomology groupH^1(Γ(w/v),Hom(M_E',O_w^×)).Now recall that the group of unitsU in the group algebra of C_3 is_m × R^E_𝔾_mwhere the first projection is the augmentation character,so the torus 𝒢 is isomorphic toR^E_𝔾_m.This implies that M_E' is a free Γ(w/v)-module,Hom(M_E',O_w^×) is coinduced,and thereforeH^1(Γ(w/v),Hom(M_E',O_w^×)) = 0so n_w is surjective.The toric variety S has at worst cyclic quotient singularitiessince its fan is simplicialso every Weil divisor on S is -Cartier.The toric heightwith respect to a Weil divisor D for which nD is Cartieris defined as H(-,𝒪(D)) as H(-,𝒪(nD))^1/n.Let D_0,D_1,D_2 bethe three irreducible T-stable divisorscorresponding respectivelyto the three generators v_0,v_1,v_2in Σof the fan of S (cf. 
<cit.>).We call any formal ℂ-linear combinations_0D_0+s_1D_1+s_2D_2a toric divisor of S.A support function isa continuous Γ-invariant functionφ N_E,→whose restriction to any cone of Σ is linear.Support functions and Γ-invariant toric divisorsare in bijection under φ↔(s_0,s_1,s_2)=(-φ(v_0),-φ(v_1),-φ(v_2)) where s_1 = s_2 to ensure Γ-invariance.Any Cartier toric divisor ∑_es_eD_e corresponds toa T_E-linearized line bundle𝒪(∑_es_eD_e)whose corresponding support function φsatisfiesφ(e) = -s_e for each e ∈Σ(1).For x = (x_w)_w ∈ T(Å_E)and φ a support function let H(x,φ) =∏_w ∈ Pl_E(q_w^-φ(n_w(x_w)))^1/[E:] where φ(n_w(x_w)) is evaluated usingthe canonical isomorphism X_∗(T_E) ≅ X_∗(T_E_w).The following simplified form is often useful.If x = (x_v)_v ∈ T(Å), embedded diagonally in T(Å_E), thenthe quantity φ(n_w(x_v)) is independentof the choice of w over v, and H(x,φ) = ∏_v ∈ M_q_v^-1/e_vφ(n_w(x_v)) where e_v is the ramification index of any prime of Elying over v (1 by definition if v = ∞).§.§ Computing the local toric heightLet L be a globally generated line bundle on Sand let {v_1,…,v_N}⊂ H^0(S_E,L)be a generating set of global sections.The standard height function on S associated to Land the generating set {v_1,…,v_N} is H(x,L,(v_i)_i=1^N) =∏_w ∈ Pl_Emax( |v_1(x)/s(x)|_w, …,|v_N(x)/s(x)|_w )^1/[E:](x ∈ S(E)). where s is any local nonvanishing section at x,and |·|_w = q_w^-ord_w(·)if w is nonarchimedean and|·|_w=|·|^d_w otherwise.The quantity H(x,L,(v_i)_i=1^N) does not depend onthe local section sor the choice of splitting field.If the line bundle L is linearized by the open torus T of Sin the sense of <cit.>,then the space of sections of L on any T-stableopen subset of S carries a linear action of Tand may therefore be diagonalized.The toric height on S associated toa T-line bundle Lis the standard height function on Sdefined using a basis of weight vectors for H^0(S,L).The advantage of this height is thatits local height functionsare amenable to harmonic analysis — namely their Fourier transformshave a simple form. The next lemma computes the weight vectors we needto express the toric height relativeto the toric divisor D_0.Let 1 denote the canonical nowhere-vanishingglobal section in H^0(S_E,𝒪(3D_0)).The space H^0(S_E,𝒪(3D_0)) is spanned over E by the following four weight vectors: 1, (1-3e_2e_1^-2)1,(e_1^3- 92 e_1 e_2+ 272 e_3+ √(-27)2√(disc)) e_1^-31,(e_1^3- 92 e_1 e_2+ 272 e_3- √(-27)2√(disc)) e_1^-31 where √(disc) = (X-Z)(Y-X)(Z-Y).The associated characters of T_E are, respectively, 1,χ^a'+b',χ^2a'+b',χ^2b'+a' where a' = 2v_1^∨ - v_2^∨ and b' = 2v_2^∨ - v_1^∨in the character lattice M_E = X^∗ T_Eand (v_1^∨,v_2^∨)is the dual basis to (v_1,v_2). Let φ_0 be the support function corresponding to -D_0.On S_E = S ⊗ E we havethe weight decomposition<cit.> H^0(S_E,𝒪(3D_0)) ≅⊕_u ∈ P ∩ M_E E χ^u where P is the polyhedron in M_E,ℝdefined by P = {u ∈ M_E,ℝ: u ≥ 3φ_0on N_E,ℝ}. Let (v_1^∨,v_2^∨) ∈ M_E,be the dual basis to (v_1,v_2) ∈ N_E.Write u = u_1v_1^∨ + u_2v_2^∨∈ M_E.The polyhedron P is cut out by the inequalities u ≥ 0 on σ_12u ≥ 3v_2^∨on σ_10u ≥ 3v_1^∨on σ_20. Figure <ref> depicts the polyhedron Pwhen N_E is identified withthe lattice in ℝ^2 generated byω=(1,0) andγω = (1/2,√(3)/2). Then the character lattice M_E is generated bya'=(1,-√(3)/3) and b'=(0,2√(3)/3).We have that u_1 = 3/2 x - √(3)/2y andu_2 = √(3) ywhere x,y are the standard coordinates on ^2,and the polyhedron Pis cut out by the inequalities 3/2 x - √(3)/2y ≥ 0 √(3)y ≥ 03≥3/2 x + √(3)/2y.We concludethat h^0(S_E,𝒪(3D_0)) = 4. 
The global section 1 is clearlythe weight vector in H^0(S_E,𝒪(3D_0))with trivial T_E-action.We may find the other three weight vectors inH^0(S_E,𝒪(3D_0))by twisting 1 bythe three nontrivial characters in P.Using the formulas from <ref>,one finds that χ^a'+b'(uv_0' + vv_1'+wv_2') = vw/u^2 =(X+ζ Y+ζ^2 Z)(X+ζ^2 Y+ζ Z)/(X+Y+Z)^2 = e_1^2-3e_2/e_1^2 with associated weight vector χ^a'+b'1.Similarly, χ^2b'+a'(uv_0' + vv_1'+wv_2')=w^3/u^3 = e_1^3- 92 e_1 e_2+ 272 e_3- √(-27)2√(disc)/e_1^3 and χ^2a'+b' = γχ^2b'+a'is the conjugate character. §.§ Completing the orbit parametrization Consider the function T(ℚ)→ℙ_3(E)(K/ℚ,x)↦ [w_1:w_2:w_3:w_4] where w_1,…,w_4 are the weight vectors inH^0(S_E,𝒪(3D_0)) given by (<ref>).The characteristic polynomial f = t^3-t^2+at+b ∈[t]of a rational point(K/ℚ,x) ∈ T(ℚ)has integer coefficients if and only if(K/ℚ,x) is D_0-integral.For any D_0-integral rational point (K/,x) on T,H((K/ℚ,x),𝒪(D_0)) =H(f)= √(1-3a). First we verify thatthe C_3-invariant functionse_1,e_2,e_3,√(disc) of X,Y,Zappearing in the formulas (<ref>)for the weight vectors are polynomial functionsof the coefficients of the characteristic polynomial of x.By <cit.>the unit u =∑_g ∈ C_3 g(x)[g^-1] ∈𝒢(K) maps to (K/ℚ,x) under𝒢→𝒢/C_3.Thus the three rational functionsX/ε,Y/ε,Z/εon ℙ evaluate on uto the Galois conjugates of x,and therefore any C_3-invariant polynomialin X/ε,Y/ε,Z/εis a polynomial function in thecoefficients of the characteristic polynomial of x.This proves the `if' direction of the first assertion,since a and bare the values at (K/,x) ofthe C_3-invariant polynomialse_2(X/ε,Y/ε,1-X/ε-Y/ε) and-e_3(X/ε,Y/ε,1-X/ε-Y/ε)in [X/ε,Y/ε]^C_3.For the `only if' direction,first we use that [X,Y,Z]^C_3 = [e_1,e_2,e_3,X^2Y+Y^2Z+Z^2X] (see e.g. <cit.>).For any integer d ≥ 1,dehomogenizing with respect to εinduces an isomorphism of C_3-modules[X,Y,Z]_d ≅[X/ε,Y/ε]_≤ dwhere (-)_d (resp. (-)_≤ d)denotes the submodule of homogeneousdegree d elements(resp. degree ≤ d elements).In particular, [X,Y,Z]_d^C_3≅[X/ε,Y/ε]_≤ d^C_3, and so (K/,x) is D_0-integralif and only if the four generators of [X,Y,Z]^C_3are integral on (K/,x).In fact, it already suffices for e_2 and e_3 to be integral:if e_2 and e_3 evaluate to integers on (K/,x),then X^2Y will evaluate to an integral element of Kand its trace will be an integer,equal to the value of the last generator.This proves the first assertion. To compute the toric height,we use <cit.>to express the support function φ_0associated to D_0using the weight vectors in H^0(S_E,𝒪(3D_0))found in Lemma <ref>.The local toric height H_v with respect to 𝒪(3D_0)of any point (K,x) ∈ T() is max( |w_1(x)/1(x)|_w, …, |w_4(x)/1(x)|_w )^1/[E:]=max( 1, |1-3e_2|_w, | 1-92 e_2+ 272 e_3- √(-27)2√(disc)|_w,| 1-92 e_2+ 272 e_3+ √(-27)2√(disc)|_w )^1/[E:] where |·|_w = q_w^-ord_w(·)if w is nonarchimedean and |·|_w=|·|^d_w otherwise.When (K,x) is D_0-integral,the only contributionto the height is the local contributionfrom the complex place w of E at infinity,which is max( 1, |1-3e_2|^2, |1- 92 e_2+ 272 e_3+ √(-27)2√(disc)|^2 )^1/2. A short computation shows that |1- 92 e_2+ 272 e_3+ √(-27)2√(disc)|^2 = (1-3e_2)^3. Thus 1-3e_2 > 0 and(1-3e_2)^3 ≥ (1-3e_2)^2which shows that H((K,x),𝒪(D_0)) =H((K,x),𝒪(3D_0))^1/3= √(1-3e_2) = H(f). 
As a function of characteristic polynomialst^3-t^2+at+b of rational points on T,the quotient √(3)max(|a|^1/2,|b|^1/3)/√(1-3a) is bounded and tends to 1 as a,b →∞.This shows that the toric height is equivalentto the “root height” inTheorem <ref>.§ THE POISSON SUMMATION FORMULAIn this section we prove the following formulafor the height zeta function for D_0-integralrational points on the open torus of S.Fix any s ∈ (^Σ(1))^Γwith Re(s_e) ≫ 0 for every e ∈Σ_w(1).Then the multivariate Dirichlet series Z(s)= ∑_P ∈ T(ℚ) D_0-integral H(P,s)^-1 is absolutely convergent and equals (1-3^-z) ζ(z) ∏_q≡231+1/q^z^-1∏_p≡131+3/p^z1-1/p^z^-1where z = 1/2(s_0+s_1+s_2).This multivariate Dirichlet series admitsa meromorphic continuation to{s ∈ (^Σ(1))^Γ :Re(s_0+s_1+s_2) > 1}.For the proof,we recall some well-known facts from harmonic analysis.For any finite place v of ℚlet d^× x_v be the Haar measure on T(ℚ_v)for which the maximal compact subgroup has measure one,and at the infinite place choose the Haar measured^× x_∞ on T(ℝ)for which v_0 ⊂ N_is a unimodular latticewith respect to the pushforward to N_ under n_wof d^× x_∞.For any finite set S of places of containing v=∞ let Å_Sdenote the subring of adeles which are integralat places not in S. There is a unique Haar measure on T(𝔸), denoted d^× x,whose restriction to T(Å_S) = ∏_v ∈ S T(_v) ×∏_v ∉S K_vis the product measure ∏_v ∉S d^× x_vfor all S.The Fourier transform of any factorizable integrable functionf = ⊗_v f_v ∈ L^1(T(𝔸)) is defined by f(χ) =∫_T(Å) f(x)χ(x)^-1 d^× x = ∏_v ∫_T(_v) f_v(x)χ_v(x)^-1 d^× x_v. The subgroup E^× =T(ℚ) is discrete inÅ_E^× = T(Å).We equip T(ℚ) with its counting measureand the quotient group T(ℚ)\ T(𝔸)with the quotient measure (also denoted d^× x)of d^× x by the counting measure.The dual measure dχ of this quotient measureis by definition the unique Haar measure on(T()\ T(Å))^∨with the property thatfor all F ∈ L^1(T()\ T(Å)) satisfyingF∈ L^1((T()\ T(Å))^∨),the Fourier inversion formula holds: F(x) =∫_(T()\ T(Å))^∨F(χ)χ(x) dχ. Let T(ℚ)^⊥ denote thethe subgroup of characters on T(Å) that are trivial onT(ℚ);this subgroup is canonically isomorphic to(T()\ T(Å))^∨.Let f ∈ L^1(T(Å)).The general Poisson summation formula —following from the classical proof for ⊂ —says that iff |_T(ℚ)^⊥∈ L^1(T(ℚ)^⊥)then ∫_T() f(xy)dx =∫_T()^⊥f(χ) χ(y)dχ for a.e. y ∈ T(ℚ) andsuitably normalized Haar measure dχ on T()^⊥<cit.>. To apply the Poisson summation formulawe will compute the Fourier transform of x ↦ H(x,-s,D_0) = H(x,-s) 1_D_0(x) (x ∈ T(𝔸)) where 1_D_0 T(Å) →{0,1}is the characteristic function on D_0-integral points. The function H(x,-s,D_0) is factorizableso its Fourier transform is equal tothe product of the transforms of its local factors: H(χ,-s,D_0) = ∏_v∈ M_H_v(χ_v,-s,D_0). As usual, we say thata character χ on T(_v) is ramifiedif its restriction to the maximal compact subgroup is nontrivial,and otherwise it is unramified. Let s ∈ (^Σ(1))^Γ andassume Re(s_e)>0 for each e ∈{0,1,2}.Let w be the infinite place of E. 
Let χ∈ T()^∨ be a unitary character.If χ is ramified thenH_∞(χ,-s) is identically zero.If χ is unramified, thenχ(x)= e(⟨ n_w(x) ,m⟩) for all x ∈ T()for a unique m ∈ M_,and H_∞(m,-s)= (-1/2π i) s_0+s_1+s_2/2π i1/(m(v_0)+s_0/2π i)(m(v_0)-s_1+s_2/2π i).Next let v be a finite place of .For any unitary character χ∈ T(_v)^∨,the integral defining H_v(χ,-s,D_0)converges absolutely to a holomorphic function of sin the region {s ∈ (^Σ(1))^Γ :Re(s_1),Re(s_2)>0 }.Assume v ≠ 3.Let w be any place of E lying over v.The local characteristic function 1_D_0,v is K_v-invariant.If χ is ramified, then H_v(χ,-s,D_0)is identically zero.If χ is unramifiedthen we may regard χ as a character onX_∗(T_E)^Γ(w/v)(Proposition <ref>)and H_v(χ,-s,D_0) = ∑_n ∈ X_∗(T_E)^Γ(w/v) n ∈ℝ_≥ 0v_1+ℝ_≥ 0v_2χ(n)^-1 q_v^φ(n) . If v = 3 thenthe support of x ↦ H_3(x,-s,D_0) isthe unique subgroup K_3,2 of K_3 of index six.Under the isomorphism T(_3) → E_3^× the supportcorresponds to the subgroup 1+3O_E,w of O_E,w^×where w is the unique place of E lying over 3. The local Fourier transforms —and therefore the entire Poisson summation argument —must be computed before restricting tothe line in Pic^T(S) ⊗spanned by the T_E-line bundle 𝒪(D_0) of interestsince x ↦ H_v(x,-s,𝒪(D_0)) will notbe integrable for any place v ≠ 3,∞once either of s_1 or s_2 vanishes,no matter how large and positive Re(s_0) is. Note that 1_D_0,∞ is identically onesince integrality conditions are only imposed at finite places,and also observe thatthe integrand is K_∞-invariant.If χ is ramified then H_∞(χ,-s,D_0)vanishes by Schur's lemma,so suppose χ is unramified.Then H_∞(χ,-s,D_0) = ∫_T() H_∞(x,-s)1_D_0,∞(x)χ(x)^-1d^× x_∞ = ∫_N_w, H_∞(y,-s) e(-⟨ y,m⟩)dμ(y)= ∫_N_w, e^φ(y) e(-⟨ y,m⟩)dμ(y). Next we compute that ∫_N_w, e^φ(y) e(-⟨ y,m⟩)dμ(y)= ∫__≥ 0 e^φ(yv_0) e(-⟨ yv_0,m⟩)dμ(y) + ∫__≥ 0 e^φ(-yv_0) e(⟨ yv_0,m⟩)dμ(y)= ∫__≥ 0 e^-y(s_0+2π im(v_0)) dμ(y) + ∫__≥ 0 e^-y(s_1+s_2-2π im(v_0)) dμ(y)= (s_0+2π im(v_0))^-1 + (s_1+s_2-2π im(v_0))^-1= (-1/2π i) ( (-m(v_0)-s_0/2π i)^-1 + (m(v_0)-s_1+s_2/2π i)^-1)= (-1/2π i) -(s_0+s_1+s_2)/2π i1/(-m(v_0)-s_0/2π i)(m(v_0)-s_1+s_2/2π i) which proves the claimed formula. Next let v be a finite place ofandlet w be any place of E lying over v.Let N_w = X_∗(T_E)^Γ(w/v).The weight vectors in H^0(S_E,𝒪(3D_0)) correspond to the characters0,v_1^∨,v_2^∨,3v_1^∨,3v_2^∨ in M_E,so from (<ref>)we see that the local height H_v(x,D_0) is ≤ 1if and only ifn_w(x) ∈ℝ_≥ 0v_1+ℝ_≥ 0v_2. Now consider the sub-O_E-module O_E⟨ w_1,w_2,w_3,w_4⟩⊂O_E[X,Y,Z]^C_3_3 =O_E⟨ e_1^3,e_1e_2,e_3,δ⟩ where δ = X^2Y + Y^2Z+Z^2X (cf. (<ref>)).From the formulas for the weight vectors,one computes that the homomorphism takingthe basis vectors e_1^3,e_1e_2,e_3,δ tothe weight vectors w_1,w_2,w_3,w_4,respectively,has the matrix [ 1 1 1 1; 0-3 -3(2+ζ) -3(2+ζ^2); 0 09(2+ζ)9(2+ζ^2); 0 0 3(1+2ζ) 3(1+2ζ^2); ] which has determinant 243√(-3). 
Assume v ≠ 3.The cokernel of (<ref>) is a 3-group,so this inclusion becomesan isomorphism after tensoring with _v.Thus x ∈ T(_v) is D_0-integrale_2(x),e_3(x),δ(x) ∈_v w_1(x),…,w_4(x) ∈ O_E ⊗_vH_v(x,D_0)≤ 1n_w(x) ∈ℝ_≥ 0v_1+ℝ_≥ 0v_2.This also shows that 1_D_0,v is K_v-invariantsince the w-adic size of each weight vector isunchanged under the action of K_v.If χ is ramifiedthen the Fourier transform ofH_v(x,s)^-11_D_0,v(x)vanishes by Schur's lemma,so suppose χ is unramified at v.The integrand is K_v-invariantand d^× x_v(K_v)=1 so ∫_T(_v) H_v(x,s)^-11_D_0,v(x)χ(x)^-1 d^× x_v=∑_n ∈ N_w q_v^1/e_vφ(n) 1_D_0,v(n)χ(n)^-1 = ∑_n ∈ X_∗(T_E)^Γ(w/v) n ∈ℝ_≥ 0v_1+ℝ_≥ 0v_2χ(n)^-1 q_v^φ(n) .For v = 3 we use the integrality conditions(<ref>)rephrased in terms of cyclotomic numbersfrom Proposition <ref>,which in this local context take the form e_2(x),e_3(x),δ(x) ∈_3 u^2+v^2-uv ∈ 1+3 _3and (u^2+v^2-uv)(3-2u+v) ∈ 1+27 _3 where x ↔ u+vζ∈(ζ) ⊗_3.These conditions imply that u+vζ isa 3-adic unit, so the support of x ↦ H_3(x,-s,D_0)is contained in K_3.Suppose that z=u+vζ∈ K_3 is in the support.Let N=u^2+v^2-uv and T=2u - v.Define n,τ∈_3 by N=(1+3n)^-1, N(3-T)=1+27τ.One easily sees from these equations that T-2=3n+O(3^2) and 1-T+N=3^2n^2+O(3^3) and therefore fromthe Newton polygon of the characteristic polynomial of z, t^2-Tt+N = (t-1)^2-(T-2)(t-1)+1-T+N, one concludes that z ∈ 1 + 3O_w.Conversely if z = 1 + 3x with x ∈ O_w thenclearly N(z) ∈ 1 + 3 _3 whileN(3-T) = 1+9(N-T^2)-27NT = 1 + 9(-3+9n)+O(3^3) ∈ 1 + 27 _3. To compute the quantities arising in the Poisson summation formula,we need to parameterize the continuouspart of the automorphic spectrumof the torus T.For any x=(x_w)_w∈ T(Å_E)let L(x)=12∑_w ∈ Pl_E n_w(x_w)log q_w ∈ N_E,.We can give a simpler expression for Lusing the isomorphismT ≅ R^E__m.It is easy to check that L(x)(N) = log |N(x)|_Å where NÅ_E^×→Å^×is the norm character.The norm character generatesthe rational character lattice M_E^Γso N_E^Γ is generatedby the unique Γ-invariant cocharacter in N_Ewhich takes the norm character to 1.Thus for any x ∈Å_E^× = T(),L(x) = 12 log |N(x)|_Å(v_1+v_2)∈ N_.There is an exact sequence 1 ⟶ K/μ⟶ T(ℚ)\ T(Å) L N_ℝ⟶ 0 where K is the maximal compact subgroup of T(Å)and μ = T() ∩ K. From (<ref>) we seethe kernel of L is the norm-one subgroup ofthe idèle class groupT(ℚ)\ T(Å) of E.The rank of the group of units is zero andthe class group is trivialso the norm-one subgroup of the idèle class group isgenerated by K/μ.Finally L is surjective since n_w is already surjectivefor the complex place w of E(Proposition <ref>).Let K' ⊂ Kdenote the subgroup which fixesthe characteristic function 1_D_0 = ⊗_v 1_D_0,vfor D_0-integral points in T(Å).ThenK' = K_3,2×∏_v ≠ 3 K_v where K_3,2⊂ K_3 is the unique subgroup with index 6.There is an exact sequence 1 ⟶ K/(K' ·μ) ⟶ T(ℚ)\ T(Å)/K' L N_ℝ⟶ 0. 
Restriction to the connected component ofthe identity in T(ℚ)\ T(Å)/K'gives a canonical splittingsN_→ T(ℚ)\ T(Å)/K'of L, inducing the isomorphisms T(ℚ)\ T(Å)/K' ∼ T()\ T() K/K' × N_∼ K/(K' ·μ)× N_ T() x K'↦ (T()xs(L(x))^-1K',L(x)) where the second map is defined usingthe natural isomorphismT()\ T() K/K' ≅ K/(K' ·μ).The equality K' = K_3,2×∏_v ≠ 3 K_vfollows from K_v-invariance ofthe local characteristic functions1_D_0,v when v ≠ 3and the computation of the support when v = 3from Proposition <ref>.The short exact sequence is obtained bytaking the quotient by K' of the first two groups inthe short exact sequence of Proposition <ref>.The group K/(K' ·μ) is finite sothe natural quotient map T(ℚ)\ T(Å)/K' →T(ℚ)\ T(Å)/Kidentifies the connected component of the identityof T(ℚ)\ T(Å)/K'with T(ℚ)\ T(Å)/K.Thus the restriction of L tothe connected component of the identityof T(ℚ)\ T(Å)/K'is an isomorphism onto N_,so its inverse givesthe canonical splitting map s. Now we may prove Theorem <ref>.Let 1_D_0 T(Å) →{0,1}be the characteristic function on D_0-integral points.By the definition of D_0-integrality,1_D_0 = ⊗_v1_D_0,v is a factorizable function. Take f = H(·,-s)1_D_0.To apply the Poisson formulawe verify that f is in L^1(T(Å))and the restriction of fis in L^1(T()^⊥). From (<ref>) we have H(x,-s)1_D_0 =∏_v ∈ M_1_D_0,v(x_v) q_v^1/e_vφ(n_w(x_v)), x=(x_v)_v ∈ T(Å) ⊂ T(Å_E). For any finite set S of places of containing v=∞ let Å_Sdenote the subring of adeles which are integralat places not in S.The chain of inequalities∫_T(Å) f(x)d^× x = lim_Ccompact∫_C f(x)d^× x≤lim_Sfinite∫_T(Å_S) f(x)d^× x≤∫_T(Å) f(x)d^× x in the limits of larger C and Sshows that ∫_T(Å) f(x)d^× x = lim_S∫_T(Å_S) f(x)d^× x≤lim_S∏_v ∈ S v ≠ 3∫_T(_v) 1_D_0,v(x_w)q_v^φ(n_w(x_w)) d^× x_v (recall that H_3(x,-s,D_0) is supported in K_3by Proposition <ref>).Let |·| be any norm on N_.There is a constant ρ>0 such thatfor any finite place v≠ 3, any place w of E lying over v,and n ∈ N_E^Γ(w/v), |1_D_0,v(n)q_v^φ(n)|≤ 0 if n is not in ℝ_≥ 0v_1+ℝ_≥ 0v_2,q_v^-ρ |n| min{Re(s_1),Re(s_2)} otherwise. Set t = min{Re(s_1),Re(s_2)}. Then for v ≠ 3 we have | ∫_T(_v) 1_D_0,v(x_w) q_v^φ(n_w(x_w)) d^× x_v| ≤∑_n ∈ N_E^Γ(w/v)∩ ( ℝ_≥ 0v_1+ℝ_≥ 0v_2) q_v^-ρ t|n|≪(1-q_v^-ρ t)^-rk N_E^Γ(w/v) where the implied constant is independent of v.For v = ∞ we have already seen thatx↦ H_∞(x,-s,D) is integrableonce Re(s_e) >0 for all e ∈Σ_Γ(1)(Proposition <ref>).Thus for any finite set of places S, | ∫_T(Å_S) f(x)d^× x |≪∏_v ∈ S v ≠∞(1-q_v^-ρ t)^-1≤ζ(ρ t) which is finite for t > 1/ρ.Taking the limit over S shows f is integrable. 
Next we prove that the restriction of fto T()^⊥≅ (T(Å)/T())^∨is integrable by evaluating the integral.By Schur's lemma, this function is supported on(T()\ T(Å)/K')^∨where K' ⊂ K is the subgroup which fixesthe characteristic function 1_D_0.We will use the isomorphism in Lemma <ref> to perform the integral over the automorphic spectrum of T.Let C denote the finite group K/(K' ·μ).For any χ∈ (T()\ T(Å)/K)^∨there is a unique m ∈ M_ such thatχ(x) = e(⟨ m,L(x) ⟩)for all x ∈ T(Å).Set t = m(v_0) for m ∈ M_and let χ_t be the corresponding character.Any K'-unramified automorphic character of Tis of the form ψχ_tfor a unique ψ∈ C^∨ and t ∈.The Haar measure on T() was chosen so thatv_0 ⊂ N_ was unimodularfor the pushforward measure to N_, and so ∫_(T(Å)/T())^∨f(χ)dχ=κ∑_ψ∈ C^∨∫_f(ψχ_t)dt where κ is a positive constantyet to be determined.The local v-adic component ψ_v ∈ T(_v)^∨of ψ isT(_v) → T(ℚ)\ T(Å)/K'↠ C ^×.Because the group N_ = T()/K_∞has no nontrivial finite quotients,the local component ψ_∞ is trivial,and the infinite factor of fis (<ref>).With the help of (<ref>)we find the product over the finite factorsbesides v=3 is ∏_v ≠ 3,∞f_v(ψ_vχ_t,v) = ∏_v ≠ 3,∞∑_m ∈ X_∗(T_E)^Γ(w/v) m ∈ℝ_≥ 0v_1+ℝ_≥ 0v_2ξ_v(m)^-1q_v^φ(m) = ∑_ηξ(η)^-1η^-s where ξ = ψχ_t,η = (m_w)_v∈∏_v≠ 3,∞ X_∗(T_E)^Γ(w/v)satisfies certain conditions,ξ(η) ∏_v ≠ 3,∞ξ_v(m_w),andη^-s∏_v ≠ 3,∞q_v^φ(m_w). This is a multivariate Dirichlet series in s_1 and s_2with summands indexed by ηwhich is absolutely convergent when s_1 and s_2 havesufficiently large and positive real parts,so the integral in (<ref>)may be distributed into the sum over η.With the help of(<ref>),we see that (<ref>) equals κ(-1/2π i) s_0+s_1+s_2/2π i∑_ηη^-s∫_1/(t+s_0/2π i)(t-s_1+s_2/2π i)∑_ψ∈ C^∨f_3(ξ_3)ξ(η)^-1 dt. Let K_3,2 denote the support ofthe local characteristic function 1_D_0,3(cf. Proposition <ref>).Since χ_t,3 is trivial on K_3,we have f_3(ξ_3) = f_3(ψ_3).Recall that n_wT(_v) → X_∗(T_E)^Γ(w/v)is surjective for any finite w (Proposition <ref>).Since T() ⊂ T(_v) is dense for any v(E^× is obviously dense inE^×_v = (E ⊗_v)^×),there is a y_v ∈ T() ⊂ T(_v)which is a n_w-preimage of m_wwhere η = (m_w)_v.Then ∑_ψ∈ C^∨f_3(ξ_3)ξ(η)^-1 = ∑_ψ∈ C^∨∫_K_3,2ψ_3(x_3)^-1 d^× x_3 ξ(η)^-1= χ_t(η)^-1∑_ψ∈ C^∨∫_K_3,2ψ_3(x_3)^-1∏_v ≠ 3,∞ψ_v(y_v)^-1 d^× x_3. Since K_v = K_v' for all v ≠ 3,∞(Lemma <ref>),the 3-adic projection map pr_3induces an isomorphismCK_3/(K_3,2·pr_3(μ)).Let k_η∈ K_3/(K_3,2·pr_3(μ)) be the image of∏_v ≠ 3,∞ y_v underT()\ T(Å)/K' → C → K_3/(K_3,2·pr_3(μ))so that ∏_v ≠ 3,∞ψ_v(y_v) = ψ_3(k_η).Explicitly, k_η is ∏_v ≠ 3,∞ k_vwhere k_v = (k_v,v')_v'∈ K is the idèle with componentsk_v,v' =1 if v' = v,y_v^-1 |y_v|_v^-1/2 if v' = ∞,y_v^-1 otherwise. In particular, ψ_v(y_v) = ψ_3(pr_3(y_v)^-1). We claim that pr_3(y_v) ∈ K_3,2for all v ≠ 3,∞(a priori it is only in K_3).This amounts to the assertion thatevery prime ideal in O_E not dividing 3admits a generator that is congruent to 1 3O_E.In other words, we claim thatthe ray class group C_𝔪of O_E with modulus 𝔪 = 3 O_E is trivial.This follows from the short exact sequence<cit.>(with notation defined there) 0 ⟶ O_E^×/O_E,1^×⟶ E_𝔪^×/E_𝔪,1^×⟶ C_𝔪⟶C ⟶ 0 which implies that h_𝔪 = h ·#(O_E^×/O_E,1^×)^-1·2^r_0· N(𝔪_0) ·∏_𝔭 | 𝔪_0(1-N(𝔭)^-1) = 1 · 1^-1· 2^0 · 3^2· (1-3^-1) = 1. 
Thus the integral in (<ref>)simplifies down to ∑_ψ∈ C^∨∫_K_3,2ψ_3(x_3)^-1∏_v ≠ 3,∞ψ_v(y_v)^-1 d^× x_3.= ∑_ψ∈ C^∨∫_K_3,2ψ_3(x_3k_η)^-1 d^× x_3= ∑_ψ∈ C^∨∫_K_3,2ψ_3(x_3)^-1 d^× x_3 by absorbing k_η into the Haar measure.Since K_3,2⊂ψ_3for any ψ∈ C^∨,and recalling that d^× x_3(K_3) = 1,this is equal to ∑_ψ∈ C^∨∫_K_3,2ψ_3(x_3)^-1 d^× x_3 =|C|· d^× x_3(K_3,2) =|C| · [K_3:K_3,2]^-1 = [K_3,2pr_3(μ):K_3,2] = 6.Returning to (<ref>),we see that Z(s) is equal to ∫_(T(Å)/T())^∨f(χ)dχ = 6κ(-1/2π i) s_0+s_1+s_2/2π i∑_ηη^-s∫_χ_t(η)^-1 dt/(t+s_0/2π i)(t-s_1+s_2/2π i). This can be evaluated using Cauchy's residue formula.The numerator of the integrand in (<ref>)is bounded in the upper half-planeand the denominator is ≪ t^-2so we may deform the path of integration alongtothe upper half-plane and obtain (-1/2π i) (s_0+s_1+s_2)∑_ηη^-sRes[ χ_t(η)^-1/(t-s_1+s_2/2π i) ;t = -s_0/2π i]=∑_ηη^-sχ_s_0/2π i(η). We now describe the conditions determining η = (m_w)_v.A tuple (m_w)_v ∈∏_v N_w corresponds to a summandof (<ref>) if and only ifm_w ∈imn_w∩(ℝ_≥ 0v_1+ℝ_≥ 0v_2) for all w.By Proposition <ref>, imn_w =⟨ v_1,ω⟩ if w split,N_E^Γ=⟨ v_0 ⟩ otherwise. Any element of N_E may be expressed asa v_1 + b ω= av_1 + b( 13(2v_1+v_2)) = (a+23 b)v_1 + 13 bv_2for integers a,b.Thenn_1 = ∏_q≡23q^c_q∏_p≡13p^a_p+2/3 b_p and n_2 = ∏_q≡23q^c_q∏_p≡13p^1/3 b_p for integer exponents a_p,b_p,c_q almost all zero and satisfying a_p+23b_pand 13 b_p≥0 if p≡13,c_q ≥ 0if q≡23. For a given η = (m_w)_v,let n_1,n_2 ∈_≥ 1 be determined by the equality v_1 log n_1 + v_2 log n_2= ∑_v≠ 3,∞m_wlog q_v. The v-adic component of χ_t ∈ M_ (t ∈)is given by χ_t,v(m_w)= χ_t,v(m_w,1v_1+m_w,2v_2)= q_v^-π i(m_w,1+m_w,2)tand so χ_t(η)=∏_v ≠ 3,∞χ_t,v(m_w) =∏_v ≠ 3,∞ q_v^-π i (m_w,1+m_w,2)t =(n_1n_2)^-π i t. We have thatη^-s = ∏_v ≠ 3,∞ q_v^φ(m_w) =(n_1 n_2)^-s_1 and finally η^-sχ_s_0/2π i(η) =(n_1n_2)^-(s_0/2+s_1). Set z = s_0/2+s_1(the unique M_-invariant linear form on(^Σ(1))^Γ up to scaling).Then ∫_(T(Å)/T())^∨f(χ)dχ= 6 κ ∏_q≡23∑_c_qq^-2c_qz∏_p≡13∑_a_p,b_pp^-(a_p+b_p)z .Fix b_p≥ 0 and sum over all compatible a_pin the right-most sum: ∑_a_p≥ -2/3b_p p^-(a_p+b_p)z=p^-b_pz∑_a_p≥ -2/3b_p p^-a_pz =p^-(b_p-⌊2/3b_p⌋)z1-1/p^z^-1.Let b = 3k+j forj ∈{0,1,2} and k ∈_≥ 0.Observe that 23b=2kif b=3k or 3k+1,2k+1if b=3k+2.Now summing (<ref>) over b_p≥ 0 obtains ∑_b_p ≥ 0 a_p≥ -2/3b_pp^-(a_p+b_p)z =1-1/p^z^-1∑_b=3kp^-kz +∑_b=3k+1p^-(k+1)z +∑_b=3k+2p^-(k+1)z=1-1/p^z^-1( (1-p^-z)^-1 +2p^-z(1-p^-z)^-1)= 1-1/p^z^-11+3/p^z1-1/p^z^-1.Finally we return to finish computing the zeta function.Combining(<ref>) and (<ref>)obtains Z(s)= 6κ(1-3^-z) ζ(z) ∏_q≡231+1/q^z^-1∏_p≡131+3/p^z1-1/p^z^-1.This shows thatthe restriction of fto T()^⊥≅ (T(Å)/T())^∨ is integrableand given by this multivariate Dirichlet seriesfor Re(z) = 1/2Re(s_0+s_1+s_2) ≫ 0.The precise region of convergence claimed in the theorem statementwill be computed in the lemma below. To compute the constant κ,note there is only one monic trace-one cubic polynomialof toric height equal to 1which either has Galois group C_3or splits into linear factors over ,with at most two being the same,and it is f = t^3 - t^2.This polynomial corresponds toa unique rational point of Tsince it has repeated factors(Proposition <ref>).This means the coefficient of 1in this Dirichlet series is 1 andκ = 1/6.In the next lemma we reexpress Z(s)in a form better suited for determining the poles and leading constants. 
The height zeta function is also given by Z(s)=(1-1/3^z)^2 ζ_(√(-3))(z)^2 ∏_q≡23(1-1/q^2z) ∏_p≡13(1-3/p^2z+2/p^3z)where ζ_(√(-3)) is the Dedekind zeta functionof the cyclotomic field (√(-3)).The height zeta function has meromorphic continuation tothe region {s ∈ (^Σ(1))^Γ :Re(s_0+s_1+s_2) > 1}. Since (1+3x(1-x)^-1)(1-x)^3 = 1-3x^2+2x^3 we have ∏_p≡131+3/p^z1-1/p^z^-1∏_p≡13(1-1/p^z)^3 = ∏_p≡13(1-3/p^2z+2/p^3z).Let χ = (-3/·)= (·/3)be the nontrivial quadratic character of modulus 3.Multiplying both sides of (<ref>)by L(z,χ) obtains ∏_q≡231+1/q^z^-1∏_p≡131+3/p^z1-1/p^z^-1∏_p≡13(1-1/p^z)^2= L(z,χ) ∏_p≡13(1-3/p^2z+2/p^3z).Now∏_p≡13(1-1/p^z)^2= ∏_p≡13(1-1/p^z) ∏_q≡23(1-1/q^z) ∏_p≡13(1-1/p^z) ∏_q≡23(1+1/q^z)/∏_q≡23(1-1/q^2z)= ((1-1/3^z)ζ(z)L(z,χ))^-1∏_q≡23(1-1/q^2z)^-1.Putting this into the previous equation obtains L(z,χ) ∏_p≡13(1-3/p^2z+2/p^3z) =∏_q≡231+1/q^z^-1∏_p≡131+3/p^z1-1/p^z^-1×((1-1/3^z)ζ(z)L(z,χ))^-1∏_q≡23(1-1/q^2z)^-1 which shows that Z(s) is equal to (1-3^-z)ζ(z)∏_q≡231+1/q^z^-1∏_p≡131+3/p^z1-1/p^z^-1=((1-1/3^z)ζ(z)L(z,χ) )^2 ∏_q≡23(1-1/q^2z) ∏_p≡13(1-3/p^2z+2/p^3z)=((1-1/3^z)ζ_(√(-3))(z) )^2 ∏_q≡23(1-1/q^2z) ∏_p≡13(1-3/p^2z+2/p^3z).The Dedekind zeta function has meromorphic continuationto the entire complex plane,so the meromorphic continuation of the height zeta functionis determined by the remaining Euler product: ∏_q≡23(1-1/q^2z) ∏_p≡13(1-3/p^2z+2/p^3z)We have 1-x^2=(1+x^2)^-1(1-x^4) and1-3x^2+2x^3=(1-x^2)^3(1+2x^3-3x^4+O(x^5)). This shows that the Euler product in question is L(2z,χ) ∏_q≡23(1-1/q^4z) ∏_p≡13(1-1/p^2z)^4 (1+2/p^3z-3/p^4z+⋯). The Dirichlet L-function is entire. The Euler product over q≡23is absolutely convergentin the region Re(z)>1/4.The Euler product∏_p≡13(1-1/p^2z)^-4has meromorphic continuation tothe region Re(z)≥ 1/2with a pole of order 2 when z = 1/2and is nonvanishing on the line Re(z) = 1/2,so ∏_p≡13(1-1/p^2z)^4is holomorphic in the region Re(z)> 1/2.The remaining Euler product∏_p≡13(1+2/p^3z-3/p^4z+⋯)is absolutely convergentin the region Re(z)> 1/3.We specialize to the line spanned by D_0in the vector space of toric divisors,and writeZ_0(s) = Z(sD_0)where s now denotesa single complex variable. The height zeta function Z_0(s) = Z(sD_0)can be meromorphically continued to the half-plane Re(s)>1and its only pole in this region is at s = 2 with order 2.LetE(s) =(1-1/3^z)^2 ∏_q≡23(1-1/q^2z) ∏_p≡13(1-3/p^2z+2/p^3z). Then the Laurent expansion of Z_0(s) at s = 2 has the form c_2(s-2)^-2 + c_1(s-2)^-1 + ⋯= 4L(1,χ)^2E(2)(s-2)^-2 +(4L(1,χ)(γ L(1,χ) + L'(1,χ))E(2) +4 L(1,χ)^2 E'(2))(s-2)^-1 + ⋯. Explicitly, c_2 =16π^2/243∏_q≡23(1-1/q^2) ∏_p≡13(1-3/p^2+2/p^3) and c_1/c_2= 2 γ + log(2π)- 3 log(Γ(1/3)/Γ(2/3)) + 9/8log 3 + 9/4∑_q≡23log q/q^2-1 + 27/4∑_p≡13(p+1)log p/p^3-3p+2 .The infinite product for E(s)converges to an analytic function onthe half-plane Re(s) ≥ 2so E(2) and E'(2) are well-defined.The class number formula gives lim_s → 2 (s-2)ζ_(√(-3))(s/2) = 2·2^r_1·(2π)^r_2· R · h/w ·√(|D|) = 2·2^0·(2π)^1· 1 · 1/6 ·√(3) =2π/3√(3). 
Thus the coefficient of the leading term is c_2= (1-1/3)^2 (2π/3√(3))^2 ∏_q≡23(1-1/q^2) ∏_p≡13(1-3/p^2+2/p^3) .The coefficient c_1 can be computed usingthe factorizationζ_(√(-3))(z) = ζ(z) L(z,χ)and <cit.> -L'(1,χ)= ∑_n = 2^∞χ(n) log n/n = π/√(3)( log(Γ(1/3)/Γ(2/3)) - 1/3 (γ + log(2π))).The expression for Z_0(s) followsfrom combiningLemma <ref> andProposition <ref>.Fix the isomorphism Pic(S) ⊗→taking the ample generator to 3.It remains to be seen thatthe image of the line spanned by D_0 inthe vector space of toric divisorsis identified with Pic(S) ⊗such that D_0 corresponds to s = 1.The canonical divisor K is D_0+D_1+D_2.The surface S has Picard rank one<cit.>and the unique ample generatoris equivalent up to torsion inthe divisor class group to -Kby <cit.>and <cit.>.One computes that 3D_0 is linearly equivalent to Kso 𝒪(D_0)=1/3𝒪(K)↔ s =1. § PROOFS OF THEOREM <REF> AND THEOREM <REF> Let x ∈ℂ be a rootof an irreducible polynomialwith rational coefficients with Galois group C_3and t^2-coefficient -1.Then x is a normal element inthe Galois extension ℚ(x)/ℚ.Let σ be a generator for C_3 andset y = σ x, z = σ^2 x.Suppose for the sake of contradiction thatthe points x,y and z lie on a plane P in (x) ⊗ containing 0.Then x,y and z lie on a line L,namely the intersection of P withthe trace-one affine hyperplane{tr^(x)_ = 1}.This implies that z-y=σ(y-x)is proportional to y-x,and thus y-x is an eigenvector of σ.The only real eigenvalue of σ is one,so y-x=z-y=x-z,all equal to some nonzero element λ of .Adding these up shows thaty+z+x-x-y-z = 0 = 3 λ, a contradiction.Let (α,β,γ) ∈^3satisfy α+β+γ=1.Then (α,β,γ) is a normal element inthe split ℚ-algebra ^3.If (α,β,γ) is not normal, then[ α β γ; β γ α; γ α β ]=3αβγ-α^3-β^3-γ^3=0.Set a=αβ+βγ+γα. First observe that1=(α+β+γ)^2=α^2+β^2+γ^2+2aand so α^2+β^2+γ^2=1-2a.Next, a=(αβ+βγ+γα)(α+β+γ) =3αβγ+α^2(β+γ)+β^2(α+γ)+γ^2(α+β)=3αβγ+α^2(1-α)+β^2(1-β)+γ^2(1-γ)=3αβγ+α^2+β^2+γ^2-(α^3+β^3+γ^3).Putting these together with (<ref>) shows that a=3αβγ-α^3-β^3-γ^3+(1-2a) = 1-2a which is impossible since a∈.A polynomial f = t^3 -t^2 + at + b = (t-α)(t-β)(t-γ)∈[t]which splits into three linear factors over will be called normal ifx = (α,β,γ) is a normal element of the split C_3-algebraK_spl = ^3.Since x is normal if and only if it has at most two identical coordinates,the split polynomial f is normal if and only ifit has at most two identical roots.From the above lemmas we see that the polynomialsunder consideration are all normal,which means they are all realized by rational points of T.Let F denote the set of polynomialsof the form t^3 -t^2 + at + b ∈[t]which either have Galois group C_3 orsplit into three linear factors over .Then any f ∈ F is normal. We have #{f ∈ F : reducible, disc(f) ≠ 0, H(f) ≤ H} = π/9√(3)H^2-16 H+O(H^t) for some 1/2<t<1and #{f ∈ F : reducible, disc(f) = 0, H(f) ≤ H} = 13H + O(1). In the error termone may take t = 131/208 <cit.>.Consider the ellipse in ^2 defined by E_H :-a=x^2+y^2+xy-x-y = 13(H^2-1). The permutation action of the symmetric group S_3stabilizes the affine hyperplane x+y+z = 1.If we identify ^2 with this affine hyperplanevia (x,y) ↦ (x,y,1-x-y) thenthe induced action of S_3 on ^2stabilizes the level sets of x^2+y^2+xy-x-y, and therefore acts on the interior of E_H.Let E_H^∘ = E_H ∪int(E_H).Then we have a canonical bijection (E_H^∘∩^2)/S_3{f ∈ F : reducible,H(f) ≤ H}. 
A lattice point (α,β) has a nontrivial stabilizer in S_3if and only if eitherα = β or1 - α - β∈{α,β},so the number of lattice points in E_H^∘with a nontrivial stabilizer is H + O(1).The area of E_H^∘ isA_H=2π/3√(3)(H^2-1),so the number of lattice points in E_H^∘is A_H+O(H^t) for some t < 1(conjecturally t = 12+ε).Thus #{f ∈ F : reducible, disc(f) ≠ 0, H(f) ≤ H} = 16 (A_H-H+O(H^t)) and #{f ∈ F : reducible, disc(f) = 0, H(f) ≤ H} = 13H + O(1).Theorem <ref> is now easily provenby subtracting off the count for reducible polynomials in Lemma <ref>from the Dirichlet coefficients of Z(s)as expressed in Theorem <ref>.We may now prove Theorem <ref>. Let d_n denote the nth Dirichlet coefficient of Z_0(2s).By the modular interpretation for 𝒢/C_3(Theorem <ref>),d_n is equal to the number of equivalence classes (K,x)of Galois C_3-algebras K/ℚequipped with a trace one normal element x ∈ Kand toric height √(n).Let K' denote the twist of the C_3-algebra Kby the outer automorphism of C_3.Each rational point (K,x) falls into one of the following cases(cf. examples from <ref>): * K is an abelian cubic field, * K is the split C_3-algebra K_spl=^3and x has exactly two identical coordinates, or * K is the split C_3-algebra K_spl=^3and x has distinct coordinates.(It cannot happen that K = K_spland x has three identical coordinatessince x would not be normal.)In these cases, respectively, we have * K ≇K' and (K,x) ≠ (K',x), * K ≅ K' and (K,x) = (K',x), or * K ≅ K' and (K,x) ≠ (K',x).The characteristic polynomial f of xnearly determines the rational point (K/,x) —in these cases, respectively, f arises asthe characteristic polynomial for * precisely the two rational points (K,x) and (K',x), * only the rational point (K_spl,x), or * precisely the two rational points(K_spl,x) and(K_spl',x).[Let σ be a transposition in S_3.Then (K_spl,σ x) has the same characteristic polynomialas (K_spl,x) but it does not give us another rational pointsince(K_spl',x)=(K_spl,σ x).Thus these two rational pointsaccount for (K_spl,σ x) for any σ∈ S_3.] Let F denote the set of polynomialst^3 -t^2 + at + b ∈[t]which either have Galois group C_3 orsplit into three linear factors over .Then any f ∈ F is automatically normal(Corollary <ref>) so arises as the characteristic polynomialfor some rational point in T.The preceding analysis shows thatthe number w_f of rational points of Twith characteristic polynomial equal toa given f ∈ Fis given by (<ref>).Thus among f ∈ F with H(f) = √(n)we have that 2#{irreducible}= d_n- #{reducible, disc(f) = 0} -2#{reducible, disc(f) ≠ 0}. Now we sum over f with H(f) ≤ H.Then 2∑_F_irr, H(f) ≤ H 1 =∑_n ≤ H^2d_n -∑_F_red, H(f) ≤ H disc(f) = 0 1 -2∑_F_red, H(f) ≤ H disc(f) ≠ 0 1. By Lemma <ref> this is ∑_n ≤ H^2d_n -13H-2(π/9√(3)H^2-16 H +O(H^t)) = ∑_n ≤ H^2d_n -2π/9√(3)H^2 +O(H^t). Applying standard Tauberian theorems to Z_0(s)and using the information about the polesand meromorphic continuationin Lemma <ref> and Proposition <ref>shows that ∑_n ≤ H^2d_n =12c_2H^2 log H + 12 c_1 H^2 + O_ε(H^1+ε) for any ε > 0.Putting this all together shows that ∑_F_irr, H(f) ≤ H 1 = 14c_2H^2 log H + 14c_1 H^2-π/9√(3)H^2 + O_ε(H^1+ε). 
This is the asymptotic count for polynomials of bounded toric height.By the comparison between toric height and root height(Remark <ref>),the asymptotic count for polynomials of bounded root heightis obtained by replacing H with √(3)H.By the Riemann hypothesisone expects∏_p≡13(1-1/p^2z)^4to have analytic continuationto the region Re(z) > 1/4,and also for ζ_(√(-3))(z)to be nonvanishing at z = 1/3,in which case the O_ε(H^1+ε)in (<ref>)should in fact beaH^2/3log H + b H^2/3+O(H^t)for some computable nonzero constants a,bwhere t = 131/208 <cit.>is the best known exponent for the error term inthe Gauss circle problem.abbrv | http://arxiv.org/abs/2310.17831v1 | {
"authors": [
"Shubhrajit Bhattacharya",
"Andrew O'Desky"
],
"categories": [
"math.NT",
"11C08, 11G50, 14M25"
],
"primary_category": "math.NT",
"published": "20231027005845",
"title": "On monic abelian trace-one cubic polynomials"
} |
Electronic mail: [email protected] Focke Meler Gluing Solutions, S.A. (Arazuri, Spain) The analysis of cylindrical resonators is part of standard physics curricula but, unlike for their rectangular counterpart, their mode structure is hardly ever visualized. The aim of this work is to show a way of doing it, providing a set of interactive web applications and citing potential use cases in the form of both academic courses and published research. These cover several branches of physics and engineering, showing that these materials can be useful for a broad audience.
Visualization of cylindrical resonances
This is the version of the article before peer review or editing, as submitted by the author to the European Journal of Physics. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at <https://doi.org/10.1088/1361-6404/acf5b6>.
§.§ Supplementary material
This article makes reference to a set of online supplements. A copy is hosted by IOP Publishing at the URL cited above, but the supplements are not part of the Version of Record and another copy can be found at <https://cr.rf.gd/EJP_44_6_065802>.
§.§ Copyright
The European Physical Society (https://www.eps.org) holds copyright to the Version of Record. More information can be found on the CrossMark entry of the article (https://crossmark.crossref.org/dialog/?doi=10.1088/1361-6404/acf5b6&domain=pdf&date_stamp=2023-10-09). The present version of the manuscript is shared by the author under a CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). The accepted version is currently under a 12-month embargo and will be shared under the same license after completion of that period.
§.§ IOP Publishing policies
More information about the Preprint policy (https://publishingsupport.iopscience.iop.org/preprint-pre-publication-policy), Copyright and Permissions (https://publishingsupport.iopscience.iop.org/publishing-support/authors/copyright-and-permissions) and sharing of Supplementary materials (https://publishingsupport.iopscience.iop.org/questions/supplementary-material-and-data-in-journal-articles) is published on the website of IOP Publishing.
§.§ Broken-link policy
This document will not be updated, even if any of the links above stop working. In that case, the reader will have to find that information on their own. However, the author of this article is happily available to be contacted for that or any other matter.
Visualization of cylindrical resonances Brais Vila0000-0001-8124-6616 May 15, 2023 =======================================§ INTRODUCTIONCavity resonators are studied in both undergraduate<cit.> and graduate<cit.> courses because they are ubiquitous: from their unexpected effects in the manufacturing of industrial goods<cit.> to their complex engineering for quantum computing<cit.> and nuclear fusion<cit.>, the likelihood that physics and engineering students will find them in their future careers is very high.Ease of manufacturing favors rectangular and cylindrical cavities, but the latter offer the higher quality factor<cit.> and are therefore the single most relevant type in practice, even more so when rectangular shapes are impractical due to size<cit.> or curvature-dependent dynamics<cit.>.Laboratory assignments involving these devices are common in undergraduate courses because they offer an excellent opportunity to work on relatively advanced topics without surpassing the skills of the students or the budget of a small department<cit.>. The concepts explored there usually go beyond what time allows for in theory classes, leaving a gap that can be closed using the web applications distributed with this article<cit.>. The main idea is that those concepts can be rapidly understood with adequate visualizations of the most important formulas in the theory, a method commonly used to study wave phenomena.<cit.>. Therefore, this article contains almost no mathematical formulas, favoring screenshots of the applications instead. However, all the underlying mathematical details are explained in two separate files named documentation and mathematical model<cit.>, giving the reader every opportunity to learn the complete theory.The starting point for the applications is a very simple idea, presented in section <ref>, about how to visualize the information contained in the formula for the resonant frequencies of a cylindrical cavity. Section <ref> explains why it is important to represent the mode structure along with the field distribution associated with the resonant modes: it helps to understand which mode should be used for a specific application. Some general aspects about how that can be taken to practice are discussed in section <ref>. Since these are all aspects concerning the design process, its importance is examined in section <ref>. Section <ref> explores to which extent those analyses can be applied to other types of problems and the conclusions are summarized in section <ref>.§ VISUALIZING THE MODE DISTRIBUTION The key concepts a student must understand to undertake some laboratory assignments<cit.> are related to the existence and frequency distribution of the resonant modes, rather than their individual characteristics. Supplement A, shown in Fig. <ref>, can be used to show the student what resonant modes are and why their frequencies are different: since the particles always move at the speed of sound, the period of oscillation is directly proportional to the length they must travel.The argument can be extended to more complex cases<cit.> but the more general approach is to solve the wave equation with adequate boundary conditions<cit.>, leading to Eq. (<ref>) for a closed cylindrical resonator of length l and radius a:f_m,n,n_z=c/2√((n_z/l)^2+(α_mn/a)^2)[ m=0,1,2...; n_z=0,1,2... ] Here, α_mn=x'_mn/π, with x'_mn being the n^th zero of J'_m(x), the first derivative of the Bessel function of the first kind and order m. 
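The resonance formula above is straightforward to evaluate once the zeros x'_mn are available, and these are tabulated in standard numerical libraries. The following minimal Python sketch is an illustration added here, not part of the article or its supplements; it assumes NumPy/SciPy, air at c = 343 m/s, and a cavity with a = 100 mm and l = 80 mm, and it evaluates the formula for the mode with m = 1, the first nontrivial zero of J'_1, and no axial variation:

```python
# Evaluate f_{m,n,nz} = (c/2) sqrt((nz/l)^2 + (alpha_mn/a)^2) for one acoustic mode
# of a closed air-filled cylinder.  Assumed values: c = 343 m/s, a = 100 mm, l = 80 mm.
import numpy as np
from scipy.special import jnp_zeros

c, a, l = 343.0, 0.100, 0.080          # speed of sound (m/s), radius and length (m)
m, nz = 1, 0                           # azimuthal and axial indices
alpha = jnp_zeros(m, 1)[0] / np.pi     # alpha_mn = x'_mn / pi, with x'_11 ~ 1.8412
f = 0.5 * c * np.hypot(nz / l, alpha / a)
print(f"f = {f:.0f} Hz")               # ~ 1.0 kHz
```

The result, roughly 1 kHz, is consistent with the fundamental frequency quoted later in the text for a cavity of these dimensions.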
For acoustic resonances, it is customary<cit.> to index the zeros n=0, 1, 2... whereas for electromagnetic resonances n=1, 2, 3... is used<cit.>. Although that is ultimately a matter of notation, there is a clear reason supporting that convention, which is related to whether the trivial solution should be included as a root of J'_m(x)=0 and is explained in the documentation of the supplements<cit.>. For a similar reason, n_z=0 must be excluded for TE modes. For transverse magnetic TM modes, the zeros of J_m(x) appear instead of those of J'_m(x).The fact that the modes are evanescent below their resonant frequency<cit.> is represented by the inequality f≥ f_m,n,n_z. Along with Eq. (<ref>), it represents the region under an ellipse with semi-axes 2fa/c and 2fl/c on the α_mn,n_z-plane. This suggests drawing a plot like Fig. <ref>, where each mode is represented by a point and those that can actually resonate are contained in the green-shaded area. From them, those near the border (the solid green curve) are more likely to be excited, but ultimately that depends on the feed. A general description of how that happens is provided in section <ref>. Supplement B generates interactive plots like Fig. <ref>: the user can vary the parameters using the sliders on top and see how the ellipse changes. In Fig. <ref>, seven modes can potentially be excited but only AC_103 resonates at the selected frequency. The number of modes below the curve can be counted using supplement C. Supplements D and E do the same for cylindrical electromagnetic resonators.It is surprising that visualizing the mode structure, as shown in Fig. <ref>, is hardly ever attempted for cylindrical cavities. Remarkable exceptions come as three-dimensional plots with a complex mode distribution<cit.> or even more intricate two-dimensional charts<cit.>. The plots presented here are simpler and easier to understand. In contrast, this kind of visualization is commonplace for rectangular resonances. An example of this kind of mode counting from undergraduate solid-state physics courses is the density of states of phonons in a crystal<cit.>. The exact same approach is used when counting modes in a rectangular microwave cavity<cit.> or a rectangular acoustic chamber<cit.>.Not visualizing the mode structure leads to missing an important piece of information: the very reason why the plot in Fig. <ref> works is that the zeros of Bessel functions are ordered<cit.>. The relevant interlacing properties are summarized at the end of the documentation of the supplements<cit.>. Students are rarely given that information, without which the frequencies in Eq. (<ref>) cannot be ordered.§ VISUALIZING THE FIELD DISTRIBUTION Some laboratory assignments require that the students have a good understanding of the characteristics of specific modes<cit.>. That involves using plots like Fig. <ref>, which are usually obtained by running a simulation and included in publications treating resonators<cit.>. The web applications accompanying this article offer a more portable and lightweight solution: in supplements B and D, the user can navigate through the modes and see their shape using the applet on the right. The coordinates of these plots can be chosen dragging the red dots. The mode in Fig. <ref> can be used to design an ECR ion source or an X-ray source, but the presence of two (white) nodal lines at z=l/3 and z=2l/3 requires adjusting the magnetostatic field accordingly, which can only be avoided in the TE_111 mode<cit.>. 
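Returning for a moment to the mode counting performed by supplement C, a rudimentary counter can be written in a few lines. The sketch below is again an added illustration, not the supplement's code; it assumes NumPy/SciPy, treats only the acoustic case, and does not include the twofold azimuthal degeneracy of the m ≥ 1 modes. It counts the triples (m, n, n_z) whose resonant frequency does not exceed a chosen excitation frequency, i.e. the points lying inside the shaded elliptical region:

```python
# Count acoustic modes of a closed cylinder with f_{m,n,nz} <= f, i.e. the modes
# inside the ellipse with semi-axes 2fa/c and 2fl/c (illustrative sketch).
import numpy as np
from scipy.special import jnp_zeros

def count_acoustic_modes(f, a, l, c=343.0):
    x_max = 2.0 * np.pi * f * a / c          # largest admissible zero x'_mn
    nz_max = int(2.0 * f * l / c)            # largest admissible axial index
    count, m = 0, 0
    while True:
        roots = [x for x in jnp_zeros(m, 200) if x <= x_max]  # 200 zeros is ample here
        if m == 0:
            roots = [0.0] + roots            # trivial root kept for m = 0 (n = 0 convention)
        if not roots:                        # the first zero of J'_m grows with m
            break
        for x in roots:
            for nz in range(nz_max + 1):
                if m == 0 and x == 0.0 and nz == 0:
                    continue                 # skip the static solution
                if 0.5 * c * np.hypot(nz / l, x / np.pi / a) <= f:
                    count += 1
        m += 1
    return count

print(count_acoustic_modes(f=3000.0, a=0.10, l=0.08))
```

The electromagnetic counts are obtained analogously, using the zeros of J'_m with n_z = 0 excluded for TE modes and the zeros of J_m for TM modes, as described above.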
The fact that we can visualize this information while we choose the height and radius of the cavity and see where the mode is located in relation to the ellipse from Fig. <ref> is extremely practical and, in the author's opinion, should be implemented in professional simulation packages commonly used for designing resonators.Visualizing the modes is important because having a good qualitative grasp of their properties is necessary. For instance, TM_mn0 modes are useful because their fields are longitudinally uniform<cit.>. Modes of higher longitudinal order present a particularity that is normally not mentioned in the literature. The reader is encouraged to open supplement D and plot the TM_111 mode for a cavity with dimensions a≈ 5 mm, h≈ 100 mm. After changing the radius to a≈ 350 mm, a warning appears that the plots are not updated. Refreshing the plot renders a completely different field distribution. This is a peculiarity of TM modes: no warning is issued when doing the same for TE modes because their field distribution never changes (and neither does that of the acoustic modes in supplement B and the quantum-mechanical modes in supplement F). The explanation for this nuance is given in the documentation of the supplements along with other mathematical details<cit.>.§ UNDERSTANDING COUPLING Practical resonators must have some kind of input port and they might have an output port or a load inside. For the analysis presented in sections <ref> to <ref> to be valid, there must exist a coupling mechanism between those other elements and the resonant modes of the cavity. For a simple example, let us revisit Fig. <ref> and think how that mode could be excited using a rectangular waveguide, as shown in Fig. <ref>. Assuming a TE_01-mode electric field in the waveguide, indicated by the red arrows, it is obvious why the orientation shown on the right-hand side is the better option. However, the reader must be aware that it might be possible to excite a TE mode with a TM port, particularly if the field at the junction is zero. Supplement D can be used to visualize why that is more likely to happen with modes TE_112 or TE_114 than in Fig. <ref>. More complex feeds for specific modes are described in the literature<cit.>.An analogous situation takes place when an acoustic source is used<cit.> and coupling occurs in a similar manner<cit.>. The analysis differs a little when the excitation is not of acoustic nature yet has a well-defined frequency<cit.>, but it can still get trickier: acoustic resonances can also be triggered by fluid flows that are not inherently oscillatory without the presence of the cavity<cit.>. To study how coupling might still occur, let us think of a cylindrical cavity with a radial inlet on the curved wall and an axial outlet on one of the flat walls, as shown in Fig. <ref>. When the inlet is kept at constant pressure, the jet exiting the outlet oscillates at the resonant frequencies given by Eq. (<ref>). These oscillations have been observed experimentally<cit.>, but a theoretical explanation has not been given. To fill that void, a toy model is included in the supplemental material accompanying this article, accessible through the link mathematical model<cit.>. It is based on the assumption that the jet switches between the two paths shown in Fig. <ref>, coupling with the fundamental mode of the cavity. The latter can be visualized using the right-hand side of supplement B to plot the mode AC_100. 
For the left-hand side, parameters a≈ 100 mm, l≈ 80 mm and f≈ 1 kHz should be used. Despite its simplicity, the model gives the reader a sound understanding of what coupling is and reproduces many of the general aspects of cavity resonators. Most notably, it shows how self-sustained oscillations of the jet due to its interaction with the cavity couple with its first resonant mode, resulting in the frequency response shown in Fig. <ref>: for a certain range of inlet pressures, coupling leads to very stable operation. More details can be found in the supplemental material<cit.> This kind of coupling phenomena might occur between modes of the feed and the cavity<cit.>, a cavity and a load<cit.>, an electron beam and a cavity<cit.>, an airflow and a cavity<cit.> and many other pairs. No general method will be valid to analyze every case, but a good understanding of the modes and their distribution is a prerequisite for every approach.§ DESIGNING THE RESONATORIn sections <ref> and <ref>, we have seen two examples of how a resonator might be used in the form of an ECR ion source and a fluidic oscillator. For the latter, a theoretical model of the coupling between its resonant modes and the incoming air jet has given us Fig. <ref>, which shows that the resonator can be used to stabilize the oscillator. The key word here is theoretical: what if the pressure range for which coupling occurs is so low that we cannot get the flow rate we need? What if it is so high that turbulence makes it useless? Similar questions can be asked about the ion source: we need a TE mode to interact with the helical motion of the electron, but what if the ohmic loss on the walls of the cavity is too high for the TE_111 mode? We might then choose a higher TE_11p mode, but that makes monomode excitation harder to achieve and, as mentioned in section <ref>, requires adjusting the magnetostatic field accordingly. Will we actually be able to manage a practical solution?In this sense, we could describe design as the process that takes us from what is theoretically conceivable to what is practically achievable with a resonator. Fortunately, both of the aforementioned examples do actually work and a plethora of simple uses only require straightforward designs, but physicists and engineers are currently tackling extremely difficult technical problems designing resonators in areas like quantum computing<cit.> and nuclear fusion<cit.>.Our first decision when designing a resonator will be opting for either a monomode or a multimode cavity. By now, the reader should be able to visualize in Fig. <ref> that, strictly speaking, only two types of monomode cavities exist (AC_001 and AC_100), but a proper design will allow us to select other modes like AC_103. Supplement D can be used to visualize the electromagnetic case.In general, monomode cavities are used whenever the field distribution needs to be known. This includes low-power applications like sensing and measurement, but the fact that monomode cavities can be designed to concentrate high fields in specific locations also makes them perfect for power applications like sintering, plasma ignition, maser and laser excitation and telecommunications devices such as antennas and filters.Multimode cavities are studied for room acoustics and microwave processing of large loads. The most popular example is the household microwave oven, which can also be used in the undergraduate laboratory<cit.>. 
They are designed to have as many modes as possible and mode-counting plots for rectangular cavities are present in the literature<cit.>. Analogous plots for cylindrical cavities, like Fig. <ref>, are equally important but much harder to find. They can be generated using supplements C and E. The latter can also be used to illustrate how difficult monomode coupling can be in advanced applications like nuclear fusion<cit.>: introducing a=22.6 mm, h=0 and f=(240± 0.5%) GHz yields 1621 and 1586 modes respectively, for a cumbersome total of 35 competing modes within that range.The design process varies greatly from one application to another and between industry and academia. For instance, a chemist might want to gently heat a liquid in a small plastic test tube to enhance a chemical reaction. A microwave designer contracted for the task could open supplement D, introduce the parameters a≈ 100 mm, h≈ 45 mm and visualize the mode TM_210, which can be excited with the TM port shown in Fig. <ref>. The plot shows that the field vanishes at the curved wall, which favors two electric field maxima being aligned with the waveguide longitudinal axis. Like the outlet in Fig. <ref>, a small pipe can be welded on top of each maximum to process four samples, which would only penetrate a few millimeters into the cavity to prevent excessive microwave absorption leading to boiling. Such a small perturbation barely affects the fields, which would be confirmed via simulation. That would be a common industrial procedure that, for a simple case like this, would be completed in a couple of days. In contrast, it took 17 years of innovative scientific work to advance from using a TE_0,2 mode to TE_25,10 in the cavities of gyrotrons for fusion plasma heating<cit.>.§ MODIFYING THE APPLICATIONS The source code of the applications accompanying this article is made available to the public without restrictions of use. The reader can therefore tweak them at will, implementing minor adjustments like changing the limits of the sliders. More-significant changes can be made to visualize other types of problems. For instance, if we were studying the creation of photons in a cylindrical resonator with time-dependent length, we could note that the parametric resonant case, in which the number of photons grows exponentially, takes place when a specific mode is located exactly halfway between the origin and the ellipse<cit.>. Supplement D could be modified to show that explicitly. Another example is the particle-in-a-box model, taught in the vast majority of undergraduate courses in quantum mechanics. The cylindrical case is included in standard textbooks<cit.> and has been used to study electron behavior in nanowires and graphene sheets<cit.>. Its analytical solutions have been implemented in supplement F, where the probability of a particle being at a certain position inside the cylinder, or having its momentum pointing in a specific direction, can be visualized. The left-hand side of the application, analogous to Fig. <ref>, raises two natural questions. First, whether it makes sense to speak of a well-defined energy (represented by the ellipse) for a particle in a superposition of states. Second, whether only the states with lower energy values (those below the ellipse) can actually be detected. The answer to both questions is no<cit.> and the ellipse is drawn here just as a reference for the energy levels of the modes. 
The fact that all these questions come up naturally makes supplement F worth including, especially considering that it has been shown that students tend to have difficulty understanding the meaning of energy eigenvalues <cit.>. Yet another question arises: if acoustic resonances were described in terms of particle movement in section <ref>, with surface currents playing the same role in the electromagnetic case, is it possible to find a similarly intuitive interpretation of this quantum resonance? A very interesting answer is considering the Schrödinger equation a diffusion equation and describing the modes in terms of probability currents, although such an interpretation has only been shown possible in the one-dimensional case and is far from trivial to scale up to higher dimensions<cit.>.§ CONCLUSIONSWe can visualize the mode structure of a cylindrical resonant cavity using very simple two-dimensional graphs that, presented interactively and combined with plots of the relevant fields, serve a double purpose: on the one hand, they can be a great aid for students learning about resonators; on the other, they can be used as a design tool in an industrial environment.That idea has been implemented in a set of JavaScript applications and this paper provides a gradual explanation of how they can potentially be used: sections <ref> and <ref> are adequate for undergraduate students in preparation for certain laboratory assignments, sections <ref> and <ref> offer an accessible introduction to topics that graduate students will encounter working with resonators and, lastly, section <ref> touches upon potential uses of the applications beyond those for which they were originally designed.While resonant cavities are part of standard physics curricula, the time allocated in theory courses is usually insufficient to cover all aspects relevant for laboratory sessions. The main contribution of this article is providing a visual manner of teaching the theory, under the premise that it will speed up concept acquisition and allow for more-complete preparation of laboratory work. However, starting with an intuitive approach to the topic does not mean that the learning process must end there: all the mathematical details about how the applications work are explained in the accompanying documentation. The author wishes to thank the Government of Navarre for partially funding this work under the programme Industrial doctorates 2018-2020.§ AUTHOR DECLARATIONSThe author has no conflicts to disclose.44supplements See supplemental material at <<https://doi.org/10.1088/1361-6404/XXXXXX>>.labSantaB Web Site of the Physics Department, UC Santa Barbara, <<https://web.physics.ucsb.edu/ phys128/experiments/oldexperiments/acoustic/acoustic.pdf>>.uspas Web Site of the U.S. Particle Accelerator School, <<https://uspas.fnal.gov/materials/18ODU/ODU-Microwave-Measurements.shtml>>. See lecture on “Cylindrical Resonator, Coupled Multi-Cell Resonators and Relevant Parameters” by Frank Marhauser.witting Harald L. Witting, “Acoustic resonances in cylindrical high-pressure arc discharges,” J. Appl. Phys. 49 (5), 2680–2683 (1978).choiT Hyeongrak Choi, Strong Light-Matter Interaction with Cavities for Quantum Information Processing, PhD thesis, Department of Electrical Engineering and Computer Science, MIT, 2022 (Massachusetts Institute of Technology Doctoral Theses), p. 45–62.thumm Manfred K. A. Thumm et al., “High-power gyrotrons for electron cyclotron heating and current drive,” Nucl. 
Fusion 59 (7), 1–37 [073001] (2019).jacksonJohn D. Jackson, Classical Electrodynamics, 3rd edition (John Wiley & Sons, New York, 1999), p. 371–374.hattori Koichiro Hattori, Keiji Sakai and Kenshiro Takagi, “Observation of Thermal Phonon Resonance in Cylindrical Microcavities,” Jpn. J. Appl. Phys. 38, 4932–4935 (1999).franke Milton E. Franke, Grinnell Jones III and William A. Olsen, “Jet-driven Cylindrical Cavity Oscillators,” J. Dyn. Sys., Meas., Control. 95 (2), 125–131 (1973).ffield Amy Ffield and Richard Wolfson, “Microwave measurements of a fluorescent lamp plasma,” Am. J. Phys. 55 (7), 637–641 (1987).oliveira João Oliveira et al., “An accessible microwave cavity experiment for plasma density determination,” Eur. J. Phys. 42 (3), 035203 (2021).elias Florence Elias, Stefan Hutzler and Mauro S. Ferreira, “Visualization of sound waves using regularly spaced soap films,” Eur. J. Phys. 28 (4), 755–765 (2007).barreiro Nadia L. Barreiro et al., “Demonstration of acoustic resonances in a cylindrical cavity applying the photoacoustic technique,” Eur. J. Phys. 38 (5), 055805 (2017).moloney Michael J. Moloney, “Plastic CD containers as cylindrical acoustical resonators,” Am. J. Phys. 77 (10), 882–885 (2009).jaafar Rosly Jaafar, Anis N. M. Daud and Mohd R. M. Yusof, “Visualizing the superposition principle of sound waves in both-closed-end resonance tube,” Phys. Educ. 54 (2), 025004 (2019).labPurdue Hui Yu, Giuseppe D. A. Giuliani and Gábor A. Csáthy, “An acoustic resonator with a closed geometry,” Am. J. Phys. 84 (1), 71–75 (2016).varberg Thomas D. Varberg et al., “Determining the speed of sound and heat capacity ratios of gases by acoustic interferometry,” J. Chem. Educ. 94 (12), 1995–1998 (2017).dori J. Dori and J. Belcher, “Learning electromagnetism with visualizations and active learning,” in Visualization in Science Education, edited by J. K. Gilbert (Springer, Dordrecht, 2005), p. 187murello Anna Murello and Edoardo Milotti, “Using a free software tool for the visualization of complicated electromagnetic fields,” Eur. J. Phys. 35 (1), 015014 (2014).girwidz Raimund V. Girwidz, “Visualizing dipole radiation,” Eur. J. Phys. 37 (6), 065206 (2016).franklin Joel Franklin and Andrew Ryder, “Electromagnetic field visualization in virtual reality,” Am. J. Phys. 87 (2), 153–157 (2019).ghali Hani A. Ghali, Nancy Y. Ammar and Ihab Adly, “An Interactive Mobile Hub for Teaching Electromagnetics Courses [Education Corner],” IEEE Antennas Propag. Mag. 62 (4), 117–127 (2020).ahmed Shaeema Z. Ahmed et al., “Student use of a quantum simulation and visualization tool,” Eur. J. Phys. 43 (6), 065703 (2022).chhabra Mahima Chhabra and Ritwick Das, “Undergraduate students' visualization of quantum mechanical eigenstates and the role of boundary conditions,” Eur. J. Phys. 44 (2), 025702 (2023).redwood Martin Redwood, Mechanical waveguides: The propagation of acoustic and ultrasonic waves in fluids and solids with boundaries (Pergamon Press, New York, 1960), p. 57–70.morsePhilip M. Morse, Vibration and sound, 2nd edition (McGraw-Hill Book Company, New York, 1948), p. 389–401.balanisConstantine A. Balanis, Advanced Engineering Electromagnetics, 2nd edition (John Wiley & Sons, New York, 2012), p. 483–505.nyforsEbbe G. Nyfors, Cylindrical microwave resonator sensors for measuring materials under flow, PhD thesis, Helsinki University of Technology, 2000 (Helsinki University of Technology Radio Laboratory publications, report S243), p. 
39, 85–167.mehdizadehMehrdad Mehdizadeh, Microwave/RF Applicators and Probes for Material Heating, Sensing, and Plasma Generation, 2nd edition (William Andrew Publishing, Oxford, 2015), p. 112–183, 365.kittelCharles Kittel, Introduction to Solid State Physics, 7th edition (John Wiley & Sons, New York, 1996), p. 120–122.meredithRoger Meredith, Engineers' Handbook of Industrial Microwave Heating, 7th edition (IET, London, 1998), p. 153–156, 221–227.palmai Tamás Pálmai and Barnabás Apagyi, “Interlacing of positive real zeros of Bessel functions,” J. Math. Anal. Appl. 375 (1), 320–322 (2011).orozco Eduardo A. Orozco et al., “Simulation of bunched electron-beam acceleration by the cylindrical TE 113 microwave field,” Int. J. Mod. Phys. A 34 (36), 1–12 [1942030] (2019).gerigk Frank Gerigk, “RF cavities, overview, part II”, CERN Accelerator School course on RF for Accelerators (Berlin, June 21, 2023)maksimov Dmitrii N. Maksimov et al., “Coupled mode theory for acoustic resonators,” Wave Motion 56, 52–66 (2015).elnaggar Sameh Y. Elnaggar, Richard Tervo and Saba M. Mattar, “Coupled modes, frequencies and fields of a dielectric resonator and a cavity using coupled mode theory,” J. Magn. Reson. 238, 1–7 (2014).kumar Nitin Kumar et al., “RF Behavior of Cylindrical Cavity Based 240 GHz, 1 MW Gyrotron for Future Tokamak System,” J. Infrared Millim. Terahertz Waves 38 (11), 1342–1356 (2017).barnes Benjamin K. Barnes et al., “Plasma generation by household microwave oven for surface modification and other emerging applications,” Am. J. Phys. 89 (4), 372–382 (2021).crocce Martín Crocce et al., “Hertz potential approach to the dynamical Casimir effect in cylindrical cavities of arbitrary section,” J. Opt. B: Quantum Semiclass. Opt. 7 (3), S32-S39 (2005).liboffRichard L. Liboff, Introductory Quantum Mechanics (Addison-Wesley, Reading, Massachusetts, 1980), p. 86-90, 393.baltenkov Arkadiy S. Baltenkov and Alfred Z. Msezane, “Electronic quantum confinement in cylindrical potential well,” Eur. Phys. J. D 70 (4), 81 (2016).rohrlich Yakir Aharonov, Sandu Popescu and Daniel Rohrlich, “On conservation laws in quantum mechanics,” Proc. Natl. Acad. Sci. U.S.A. 118 (1), e1921529118 (2020).mita Katsunori Mita, “Schrödinger's equation as a diffusion equation,” Am. J. Phys. 89 (5), 500–510 (2021). | http://arxiv.org/abs/2310.18514v1 | {
"authors": [
"Brais Vila"
],
"categories": [
"physics.class-ph"
],
"primary_category": "physics.class-ph",
"published": "20231027221235",
"title": "Visualization of cylindrical resonances"
} |
=1 highlight math style=enhanced, colframe=red,colback=white,arc=0pt,boxrule=1pt r̊ wv̌ x y a EH̋ B F D p m P Q Kk̨ŭpmatrix A T A C i s p𝐅 G g𝔾 Iξ̂α̃αϵ↔ S2212- [email protected] de Física de Materiales, Paseo Manuel de Lardizabal 5, 20018 Donostia-San Sebastian, Spain. The multipolar expansion of the electromagnetic field plays a key role in the study of light-matter interactions.All the information about the radiation and coupling between the incident wavefield and the object is embodied in the electric and magnetic scattering coefficients {a_ℓ m, b_ℓ m} of this expansion. However, the experimental determination of {a_ℓ m, b_ℓ m} requires measuring the components of the scattered electromagnetic field in all directions, something that is enormously challenging. In this Letter, we demonstrate that a single measurement of the Stokes vector at an angle of choice unlocks fundamental Nanophotonics magnitudes that are concealed in the scattered field. The unveiled quantities are: [|a_ℓ m|^2, |b_ℓ m|^2, { a_ℓ m b^*_ℓ m}, { a_ℓ m b^*_ℓ m}].Strikingly, our Stokes polarimetry approach allows for distinguishing between the magnetic and electric nature of the radiated electromagnetic field.Thereby, our findings, supported by exact analytical theory, can find applications across all branches of Nanophotonics and Optics, and greatly facilitate routine light-scattering measurements.The Stokes Vector Measurement: A Paradigm Shift in Electric-Magnetic Light Distinction Jorge Olmos-Trigo January 14, 2024 ======================================================================================Introduction.— The multipolar expansion of the electromagnetic field is a key tool in the study of light-matter interactions <cit.> and has historically played a pivotal role in several branches of Nanophotonics. These include optical forces <cit.>, optical torques <cit.>, and chiral light-matter interactions <cit.>,among many others <cit.>.The multipolar expansion of the electromagnetic field is typically written as an infinite sum of electric and magnetic vector spherical harmonics that are, in turn, weighted by its corresponding electric and magnetic scattering coefficients, respectively <cit.>. Researchers typically have access to the multipolar expansion of the incident wavefield since its coefficients are known. However, the situation changes when the incident wave interacts with an object. In the latter case, the electric and magnetic scattering coefficients {a_ℓ m, b_ℓ m} are unknown quantities, and their determination is crucial to solving the scattering problem under investigation. In this setting,ℓ and m denote the multipolar order and total angular momentum, respectively <cit.>. From the theoretical perspective, the following multi-step procedure is used to retrieve {a_ℓ m, b_ℓ m}: First, numerical methods are employed to obtain the components of the scattered electromagnetic field in all directions [When the object has spherical symmetry, Mie theory can be directly employed.]. Some examples are the T-matrix approach <cit.>, the Discrete Dipole Approximation (DDA) <cit.>, along with all kinds of Maxwell solvers. Subsequently, by projecting the scattered field onto the corresponding electric (or magnetic) vector spherical harmonic,the desired electric (or magnetic) scattering coefficient can be determined <cit.>. 
However, a fundamental problem arises in the previous approach to determine {a_ℓ m, b_ℓ m}: it lacks experimental equivalence, primarily due to the formidable task of measuring the components of the scattered field in all directions.In this Letter, we present a Stokes polarimetry approach that solves this experimental challenge. Specifically, we demonstrate that a measurement of the Stokes vector at an angle of choice grants access to fundamental Nanophotonics magnitudes that are typically hidden in the scattered field. We anticipate that the unveiled Nanophotonicsquantities are [|a_ℓ m|^2, |b_ℓ m|^2, { a_ℓ m b^*_ℓ m}, { a_ℓ m b^*_ℓ m}]. Strikingly, through our Stokes polarimetry technique, it is possible to distinguish and quantify the electric and magnetic contributions to the total scattered field.Consequently, our findings, supported by exact analytical theory, are promising for all branches of Nanophotonics andOptics, and strongly encourage experimental verifications.The Stokes vector and the multipolar expansion of the field.— The four components of the Stokes vector 𝐒 = [s_0, s_1, s_2,s_3]unambiguously describe the polarization state and energy flux of a given electromagnetic field in the far-field limit <cit.>. Now, following Bohren's notation <cit.>, the components of the Stokes vector read ass_0= |E_θ|^2 + |E_φ|^2,s_1= |E_θ|^2 - |E_φ|^2,s_2=-2{E_θ E^*_φ},s_3= 2{E_θ E^*_φ}.Hereanddenote the real and imaginary parts, respectively. By inspecting Eqs. (<ref>)-(<ref>), one can notice that s_0 is the total scattered intensity, s_1 is the degree of linear polarization, s_2 is the degree of linear polarization at 45 degrees, and s_3 denotes the degree of circular polarization (see Bohren's book <cit.> for an exquisite explanation).In order to determine each of the Stokes components (also referred to as Stokes parameters <cit.>), we need to obtain the complex amplitudes of the transversal components of the multipolar expansion of the scattered field E_θ and E_φ in the far-field limit. This multipolar expansion can be found in many books <cit.>. Hereafter, we follow Jackson's notation in its third edition to describe the multipolar expansion of the scattered field <cit.>. After some algebra (see Appendix <ref> for the detailed calculation), it can be shown that the scatteredfield (k )̊ can be conveniently written in the far-field(when kr →∞) aslim_kr →∞ (k )̊ = e^ikrkr[ E_θ𝐞̂_θ + E_φ𝐞̂_φ],where E_θ = E_0C̅_ℓm (φ) [a_ℓ mτ_ℓ m (θ) - im b_ℓ mπ_ℓ m (θ) ],E_φ = E_0C̅_ℓm (φ) [ ima_ℓ mπ_ℓ m (θ) + b_ℓ mτ_ℓ m (θ) ].Note that we have already assumed that ℓ and m are fixed values, namely, the optical response of the object can be fully described by Eq. (<ref>), where E_0 is the amplitude of the incident field,k is the radiation wavevector,r = ||̊ denotes the observation distance to the center of the object, andθ and φ denote the scattering and azimuthal angles, respectively. Moreover, we have defined [Interestingly, π_lm(θ) and τ_l m (θ)are real-valued functions that Craig Bohren defined to tackle the absorption and scattering by a sphere for m = 1(see Eq. 4.46 of Ref. <cit.>).]π_ℓ m(θ) =P^m_ℓ(cosθ)/sinθ, τ_ℓ m(θ)= d P^m_ℓ(cosθ)/dθ,where P^m_ℓ(cosθ) are the Associated Legendre Polynomials <cit.> and C̅_ℓm (φ)= (-i)^ℓ + 2/√(ℓ ( ℓ + 1) )√(2 ℓ +1/4π(ℓ - m)!/(ℓ+m)!)e^im φ. The electric and magnetic scattering coefficients from the Stokes vector.—At this point, we have all the ingredients to calculate the Stokes vector. To that end, let us insert Eqs. (<ref>)-(<ref>) into Eqs. (<ref>)-(<ref>). 
After some algebra, it can be shown thats̃_0= [ (|a_ℓ m|^2 + |b_ℓ m|^2)γ_ℓ m(θ)- 4 {a_ℓ m b^*_ℓ m}η_ℓ m(θ) ],s̃_1= (|a_ℓ m|^2 - |b_ℓ m|^2)ν_ℓ m(θ),s̃_2= -2 {a_ℓ m b^*_ℓ m}ν_ℓ m(θ),s̃_3= 2 [{a_ℓ m b^*_ℓ m}γ_ℓ m(θ) -(|a_ℓ m|^2 + |b_ℓ m|^2)η_ℓ m(θ) ].Here, we have defined𝐒 = |E_0|^2|C̅_ℓ m( φ)|^2 𝐒̃ along withγ_ℓ m(θ)= [ τ_ℓ m ^2(θ) + m^2 π_ℓ m ^2(θ)],η_ℓ m(θ) = m τ_ℓ m (θ)π_ℓ m (θ),ν_ℓ m(θ)= [ τ_ℓ m ^2(θ) - m^2 π_ℓ m ^2(θ)].Let us briefly discuss the underlying physics behind Eqs. (<ref>)-(<ref>). These relations give the dressed Stokes vector 𝐒̃ as a function of the electric and magnetic scattering coefficients of the multipolar expansion {a_ℓ m, b_ℓ m}. Note that γ_ℓ m(θ), η_ℓ m(θ), andν_ℓ m(θ) do not depend on the optical response of the object and can be easily determined from Eq. (<ref>). Therefore, if a_ℓ m and b_ℓ m are known, then the Stokes parameters can be analytically computed via Eqs. (<ref>)-(<ref>). However, in an experiment, one does not have access to a_ℓ m and b_ℓ m as these cannot be directly measured. Therefore, we must rearrange Eqs. (<ref>)-(<ref>) to express a_ℓ m and b_ℓ m in terms of the Stokes vector 𝐒, as the latter can be measured using photo-diodes and conventional wave-plates placed in the far-field.In this regard, it is of utmost importance to note that the Stokes parameters mix electric and magnetic radiation. That is, by simply measuring them, one cannot distinguish between the magnetic and electric nature of the radiated field. A key step remains to be made in order to achieve this distinction that lies in the fundamental principles of electromagnetism.Taking all the previous information into account, we can rewriteEqs. (<ref>)-(<ref>) inthe followingmatrix representation, 𝐃_ℓ m = U_ℓ m (θ) 𝐒,andU_ℓ m (θ)= 1/2 ν^2_ℓ m(θ)[γ_ℓ m(θ)ν_ℓ m(θ) 0 2η_ℓ m(θ);γ_ℓ m(θ) -ν_ℓ m(θ) 0 2η_ℓ m(θ); 0 0 -ν_ℓ m(θ) 0; 2η_ℓ m(θ) 0 0γ_ℓ m(θ) ],with 𝐃_ℓ m = [ |a_ℓ m|^2; |b_ℓ m|^2; {a_ℓ m b^*_ℓ m}; {a_ℓ m b^*_ℓ m} ], 𝐒 = [ s_0; s_1; s_2; s_3 ].Equations (<ref>)-(<ref>) are the main results of this Letter. The electric and magnetic scattering coefficients of the multipolar expansion, which dictate the radiation and coupling between the incident wavefield and the object, can be calculated by measuring the Stokes vector at an angle of choice. As a matter of fact, we have not made any assumption on the nature of the incident wavefield. Thereby, Equations (<ref>)-(<ref>) can be applied under general illumination conditions: a typical plane wave but also twisted (structured) light such as Gaussian and Laguerre-Gaussian beams with well-defined angular momentum of light <cit.>. Moreover, Eqs (<ref>)-(<ref>) introduce an unprecedented and striking advantage <cit.>: the capacity to distinguish and quantify the electric and magnetic contributions within the total electromagnetic radiation. To the best of our knowledge, this is the first time that such a distinction has been made in electromagnetism theory by means of the Stokes parameters. The physical properties of D_ℓ m. To get a deeper insight into the relevance of our findings, we now discuss the features of each of the components that conform D_ℓ m. |a_ℓ m|^2 (|b_ℓ m|^2): This scalar term gives full access to the electric (magnetic) contribution to the scattering cross-section σ_sca <cit.>. To understand this fact, let us derive the scattering cross-section using the standard expression <cit.> k^2 σ_sca = ∫_Ω s_0 d Ω = ∫_0^2 π∫_0^π s_0 sinθ d θ d φ = |a_ℓ m|^2 + |b_ℓ m|^2.From Eq. 
(<ref>),we can note that s_0 needs to be measured in all directions (Ω = 4 π), something that is experimentally demanding. Even if we can experimentally measure it <cit.>, distinguishing between the electric and magnetic contributions within σ_sca is impossible—both are combined.Our analytical findings, summarized in Eqs (<ref>)-(<ref>), provide a solution to this fundamental experimental limitation. We can now determine |a_ℓ m|^2 and |b_ℓ m|^2 separately from a measurement of the Stokes vector. This advancement is a game-changer, as it allows us to differentiate between electric and magnetic resonances in objects. Note that the objects can be dielectric <cit.>, plasmonic <cit.>, or hybrid (metallo-dielectric) <cit.>. The only restriction is that the optical response of the object needs to be described by fixed values of ℓ and m. Importantly, this electric-magnetic distinction can be achieved under general illumination conditions since we have not imposed any requirement on the incident wavefield.The scattering cross-section has traditionally held a pivotal role in various branches of Nanophotonics. To provide more specific insight, let us briefly explore a few Nanophotonics branches where our findings can make substantial contributions. In this regard, let us first highlight the ones in which the scattering cross-section is the signature and pivotal quantity.Super-scattering. The scattering cross-section is used to unveil super scattering regimes <cit.>, namely, scattering beyond the single-channel limit, σ_sca > 2 π / k^2. Now, super-scattering is typically measured via the scattering cross-section and thus, using Eq. (<ref>).Notice that our results, summarized in Eqs. (<ref>)-(<ref>), can help to identify super scattering regimes without the need to measure (and integrate) the transversal components of the scattered field in all directions.Optical anapoles: These are non-radiating sources whose signature is a dip in the scattering cross-section <cit.>. In essence, when |a_ℓ m| = 0 (|b_ℓ m| = 0), an electric (magnetic) optical anapole emerges.When both |a_ℓ m| = |b_ℓ m| =0, an hybrid anapole arises <cit.>. In this regard, Eqs (<ref>)-(<ref>) clearly indicate that one can unravel optical anapoles by a single measurement of the Stokes vector. Remarkably, we can also differentiate the nature of the optical anapole (electric, magnetic, or hybrid) upon this polarimetry measurement. Extinction and Optical Theorem: The optical theorem establishes that the extinction cross-section σ_ext can be determined by measuring the amplitude of the scattered electromagnetic field in the forward direction <cit.>. The optical theorem is extremely useful when the incident wavefield is a plane wave; however, it fails when impinging with non-planar waves, such as Gaussian beams <cit.> and twisted light <cit.>.Interestingly, the experimental determination of |a_ℓ m|^2 and |b_ℓ m|^2 using our Stokes polarimetry approach allows capturing the electric and magnetic contribution to the extinction cross-section for lossless objects since, in such cases,σ_sca =σ_ext. In other words, we can determine σ_extfor the cases in which the optical theorem is no longer valid. Now, let us turn our attention to the interference terms of D_ℓ m, namely, {a_ℓ m b^*_ℓ m}, {a_ℓ m b^*_ℓ m}. First, we note that to unravel {a_ℓ m b^*_ℓ m}, one only needs to measure s_2. That is, {a_ℓ m b^*_ℓ m} is decoupled from the rest of Stokes parameters as it can be inferred from Eq. (<ref>). 
In contrast, this is not the case for {a_ℓ m b^*_ℓ m}: one must measure s_0, s_1 and s_3.These interference terms: {a_ℓ m b^*_ℓ m}, {a_ℓ m b^*_ℓ m} have not been as well-studied as the scattering cross-section in scattering theory.Fortunately, recent developments have shed light on these interference terms <cit.> within the framework of the Generalized Lorentz Mie theory (GLMT) <cit.>. The GLMT gives the exact solution of a homogeneous sphere under general illumination conditions <cit.>. To make these interference terms more accessible to a broad audience, we consider a specific scenario where the object under consideration is a homogeneous sphere. In this setting,these interference terms can be written as {a_ℓ m b^*_ℓ m} = {g^e_ℓ mg^m_ℓ m^* }{a_ℓ b^*_ℓ} - {g^e_ℓ mg^m_ℓ m^* }{a_ℓ b^*_ℓ},{a_ℓ m b^*_ℓ m} = {g^e_ℓ mg^m_ℓ m^* }{a_ℓ b^*_ℓ} + {g^e_ℓ mg^m_ℓ m^* }{a_ℓ b^*_ℓ}.Here, we have made use of a_ℓ m = - a_ℓ g^e_ℓ m and b_ℓ m = - a^m_ℓ g_ℓ m, where {a_ℓ, b_ℓ} are the electric and magneticMie coefficients, respectively and {g^e_ℓ m, g^m_ℓ m} the electric and magnetic coefficients characterizing the incident wavefield, respectively <cit.>. We now anticipate some notable results: Eqs. (<ref>)-(<ref>) show that one can either retrieve {a_ℓ b^*_ℓ} or/and {a_ℓ b^*_ℓ}, upon a Stokes vector measurement, and by manipulating the helicity of the incident wavefield. To show this fact, we first calculate how Eqs. (<ref>)-(<ref>) transform when the incident wavefield carries well-defined helicity σ= ± 1 <cit.>. In this scenario, g^e_ℓ m = i σ g^m_ℓ m <cit.>, and thus, we are left with {a_ℓ m b^*_ℓ m}/|g^e_ℓ m|^2 = σ{a_ℓ b^*_ℓ} = σ|a_ℓ| |b_ℓ| sin(ξ_e- ξ_m),{a_ℓ m b^*_ℓ m}/|g^e_ℓ m|^2 = - σ{a_ℓ b^*_ℓ} = - σ|a_ℓ| |b_ℓ| cos(ξ_e- ξ_m),where we have used a_ℓ = |a_ℓ| e^i ξ_e and b_ℓ = |b_ℓ| e^i ξ_m. Equations (<ref>)-(<ref>) are important results of this Letter. The inference terms between the electric and magnetic Mie coefficients, namely, {a_ℓ b^*_ℓ} and {a_ℓ b^*_ℓ},can be separately determined from a single measurement of the Stokes vector at an angle of choice. These interference terms have recently emerged as key magnitudes in various branches of Nanophotonics. For instance, {a_ℓ b^*_ℓ} has shown to be of utmost significance in the preservation of helicity <cit.>, Kerker conditions <cit.>, surface-enhanced circular dichroism enhancements <cit.>, light transport phenomena <cit.>, and optical forces <cit.>. In stark contrast, {a_ℓ b^*_ℓ} has remained relatively unexplored until recently, primarily appearing in the context of spinless optical mirages <cit.> and optical forces <cit.>. More specifically, {a_ℓ b^*_ℓ} plays a pivotal role in the recoiling force. In particular, the product between {a_ℓ b^*_ℓ} and the imaginary Poynting vector gives rise to an intriguing optical force that has not been discussed until very recently <cit.>.Hitherto, we have shown that through Eqs. (<ref>)-(<ref>), both {a_ℓ b^*_ℓ} and {a_ℓ b^*_ℓ} can be determined.As a matter of fact, once these are obtained, one can change the incident wavefield (for instance from a circularly polarized plane wave to a linearly polarized Gaussian beam) to study in detail how the scattering properties of the object are modified. This approach allows us to delve, for instance, into the intricate dynamics of the object, elucidating the roles of the real and imaginary components of the Poynting vector <cit.>. 
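Before moving on, it is worth noting that the matrix relation D_ℓm = U_ℓm(θ) S is easy to verify numerically. The sketch below is an illustration added here, not the authors' code: it assumes Python with NumPy/SciPy, sets the common prefactor E_0 C̄_ℓm(φ) to unity so that the dressed and bare Stokes vectors coincide, chooses arbitrary values for ℓ, m, θ and for the complex coefficients a_ℓm and b_ℓm, obtains τ_ℓm by a finite difference, and uses SciPy's associated Legendre routine (whose Condon-Shortley phase only affects overall signs, not the identity being checked). The ordering of the two recovered interference terms depends on the sign convention adopted in the definitions of s_2 and s_3.

```python
# Numerical check: build the far field E_theta, E_phi from the multipolar expressions,
# form the Stokes vector, and recover |a|^2, |b|^2 and the interference terms with
# the matrix U_{lm}(theta).  Illustrative sketch with the prefactor E_0*C_bar set to 1.
import numpy as np
from scipy.special import lpmv

ell, m, theta = 2, 1, 0.7                      # assumed multipole indices and angle
a, b = 0.3 + 0.4j, -0.2 + 0.1j                 # arbitrary complex scattering coefficients

P = lambda t: lpmv(m, ell, np.cos(t))          # associated Legendre P_l^m(cos theta)
pi_lm = P(theta) / np.sin(theta)               # pi_{lm}(theta)
tau_lm = (P(theta + 1e-6) - P(theta - 1e-6)) / 2e-6   # tau_{lm}(theta), finite difference

E_t = a * tau_lm - 1j * m * b * pi_lm          # E_theta (up to the common prefactor)
E_p = 1j * m * a * pi_lm + b * tau_lm          # E_phi
S = np.array([abs(E_t)**2 + abs(E_p)**2,       # Stokes vector, with s2 = -2 Re, s3 = 2 Im
              abs(E_t)**2 - abs(E_p)**2,
              -2 * np.real(E_t * np.conj(E_p)),
               2 * np.imag(E_t * np.conj(E_p))])

g = tau_lm**2 + (m * pi_lm)**2                 # gamma_{lm}
e = m * tau_lm * pi_lm                         # eta_{lm}
v = tau_lm**2 - (m * pi_lm)**2                 # nu_{lm}
U = np.array([[g,   v, 0, 2*e],
              [g,  -v, 0, 2*e],
              [0,   0, -v,  0],
              [2*e, 0,  0,  g]]) / (2 * v**2)  # the matrix U_{lm}(theta)

print(np.round(U @ S, 6))                      # the four entries of D_{lm}
print(abs(a)**2, abs(b)**2,
      np.real(a * np.conj(b)), np.imag(a * np.conj(b)))
```

Any observation angle with ν_ℓm(θ) ≠ 0 works equally well; at the zeros of ν_ℓm the matrix U_ℓm(θ) is singular and a different angle must be chosen.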
Moreover, one can notice another remarkable feature that can be unveiled by capturing {a_ℓ b^*_ℓ} and {a_ℓ b^*_ℓ} from a measurement of the Stokes vector: the relative phase between the electric and magnetic Mie coefficients, given by ξ_e - ξ_m. Note that this relative phase can be effectively determined as the modulus of |a_ℓ| and |b_ℓ| can be separately measured, as we demonstrated earlier.At this point, let us highlight that all our results discussed so far can be applied to chiral spherical objects. One just needs to make the following substitution in the Mie coefficients <cit.>: a_ℓ→ a_ℓ - i σ c_ℓ and b_ℓ→ b_ℓ - i σ c_ℓ, where a_ℓ, b_ℓ and c_ℓ are the chiral Mie coefficients <cit.>. Capturing electric and magnetic phases from their amplitudes.— For lossless spherical objects, it is essential to grasp an important simplification—a relationship between amplitude and phase that was established by Hulst in 1957 <cit.>. In such scenarios, the electric and magnetic Mie coefficients can be elegantly expressed in terms of real-valued phase angles <cit.>:a_ℓ = i sinα_ℓ e^-iα_ℓ,b_ℓ = i sinβ_ℓ e^-i β_ℓ.In the absence of losses, it becomes clear that { a_ℓ} = |a_ℓ|^2 = sin^2 α_ℓ and{ b_ℓ} = |b_ℓ|^2 = sin^2 β_ℓ. These relationships unveil a groundbreaking revelation: through measurements of the amplitudes of the electric and magnetic Mie coefficients, we can accurately determine their respective phases. This striking approach aligns perfectly with our work, summarized in Eqs. (<ref>)-(<ref>), as we have shown that we can measure amplitudes of the electric and magnetic scattering coefficients separately. Let us now be concise: when dealing with a spherical lossless object, we can obtain an exact solution to Maxwell's equations from a measurement of the Stokes vector at an angle of choice as both amplitudes and phases are determined.Conclusions.— We have demonstrated that a measurement of the Stokes vector reveals key magnitudes at the core of Nanophotonics. In particular, our exact analytical findings underscore that it becomes feasible to distinguish between the electric and magnetic nature of the radiated electromagnetic field emitted by objects with fixed values of square and total angular momentum. This intriguing electric-magnetic light distinction marks a significant departure from conventional light-scattering measurements.Additionally, we have unraveled the interference terms between the electric and magnetic Mie scattering coefficients within the same Stokes vector measurement. In this vein, we have shown that a (generic) incident wavefield with well-defined helicity can uncouple these intertwined interference terms, thus, allowing us to extract its physical meaning individually. This information can be found in Eqs. (<ref>)-(<ref>). As a noteworthy observation, we have established that, for spherical objects without optical losses, it is not only the electric (or magnetic) amplitude but also the electric (or magnetic) phase that can be extracted from a measurement of the Stokes vector. In such a scenario, our analytical findings suggest the feasibility of achieving an experimental solution to Maxwell's equations from a measurement of the Stokes vector.In a nutshell, the results presented in this work greatly facilitate the experimental calculation of key quantities in Nanophotonics and Optics. Therefore, its implications are likely to have a profound impact on the Nanophotonics and Optics community. § ACKNOWLEDGEMENTSJ.O-T. acknowledges Adrian Juan-Delgado and Dr. 
Cristina Sánz-Fernández for useful comments. J.O-Tacknowledges support from the Juan de la Cierva fellowship No. FJC2021-047090-I ofMCIN/AEI/10.13039/501100011033 and NextGenerationEU/PRTR and acknowledges financial support from the Spanish Ministry of Science and Innovation (MCIN), AEI and FEDER (UE) through project PID2022-137569NB-C43.§ DISCLOSURESThe authors declare no conflict of interest.§ REFERENCES§ THE SCATTERED ELECTROMAGNETIC FIELDD IN THE FAR-FIELD In this Appendix,we determine the complex amplitudes of the transversal components E_θ and E_φ presented in the main text (see Eqs. (6)-(7)).Let us start by writing the scattered electromagnetic field (k )̊ interms of electric and magnetic multipoles <cit.>,(k )̊/E_0 = a_ℓ mN_ℓ m(k )̊ +b_ℓ mM_ℓ m(k )̊.HereM_ℓ m(k )̊ =h^(1)_ℓ(kr)X_ℓ m()̊ and k N_ℓ m(k )̊ = i ∇×M_ℓ m( k )̊ are Hansel multipoles <cit.>, X_ℓ m()= 𝐋Y_ℓ m(θ, φ)/√(ℓ(ℓ +1 )) are vector spherical harmonics,h^(1)_ℓ (kr) are the spherical Hankel function of the first kind, k is the radiation wavelength,r = ||̊ denotes the observation point,θ and φ are the scattering and azimuthal angle, respectively. In this framework,𝐋= -i ×̊∇ is the total angular momentum operator and Y_ℓ m(θ, φ) are spherical harmonicsdefined as in Ref. <cit.> Y_ℓ m (θ, φ) = √(2 ℓ +1/4 π(ℓ - m)!/(ℓ+m)!)e^im φ P^m_ℓ(cosθ),where P^m_ℓ(cosθ) are the associated Legendre Polynomials <cit.>.Moreover,a_lm and b_lm stand for the (dimensionless)electric and magnetic scattering coefficients, respectively, ℓ and m being the multipolar order and total angular momentum of the scattered electromagnetic field introduced in Eq. (<ref>), respectively.Now, let us calculate Eq. (<ref>) in the far-field limit, namely, when kr →∞. After algebra, we arrive from Eq. (<ref>) to(k )̊/E_0 =i e^ikr/kr[(-i)^ℓ[ a_ℓ m(×X_ℓ m()) - b_ℓ mX_ℓ m()] ],where we have made use of the following relations <cit.>lim_kr →∞N_ℓ m(k )̊ =i e^ikr/kr (-i)^ℓ(×X_ℓ m()),lim_kr →∞M_ℓ m(k )̊ = e^ikr/kr (-i)^ℓ+1X_ℓ m(). At this point, let us express the vector spherical harmonics X_ℓ m() in spherical coordinates <cit.>.Now, thetotal angular momentum operator 𝐋 reads in spherical coordinates as𝐋 = -i [- 𝐞̂_θ1/sinθ∂/∂_φ+ 𝐞̂_φ∂/∂_θ].Therefore, we can write X_ℓ m() in spherical coordinates as X_ℓ m() = -i/√(ℓ ( ℓ + 1) )[- 𝐞̂_θ1/sinθ∂/∂_φ+ 𝐞̂_φ∂/∂_θ] Y_ℓ m (θ, φ).Now, by taking into account Eq. (<ref>), we can write X_ℓ m()=C_ℓm (φ)(- i m π_ℓ m (θ ) 𝐞̂_θ + τ_ℓ m(θ) 𝐞̂_φ),×X_ℓ m()=-C_ℓm (φ) (τ_ℓ m(θ) 𝐞̂_θ +i m π_ℓ m(θ )𝐞̂_φ),where we have defined π_ℓ m(θ) =P^m_ℓ(cosθ)/sinθ, τ_ℓ m(θ)= d P^m_ℓ(cosθ)/dθ,andC_ℓm (φ)= -i/√(ℓ ( ℓ + 1) )√(2 ℓ +1/4π(ℓ - m)!/(ℓ+m)!)e^im φ. At this point, let us insert Eqs. (<ref>)-(<ref>) into Eq (<ref>). After some algebraic manipulation, it can be shown that lim_kr →∞ (k )̊= e^ikrkr[ E_θ𝐞̂_θ + E_φ𝐞̂_φ],whereE_θ = E_0C̅_ℓm (φ) [a_ℓ mτ_ℓ m (θ) - im b_ℓ mπ_ℓ m (θ) ],E_φ = E_0C̅_ℓm (φ) [ ima_ℓ mπ_ℓ m (θ) + b_ℓ mτ_ℓ m (θ) ],where C̅_ℓ m(φ) = (-i)^ℓ + 1 C_ℓ m(φ). | http://arxiv.org/abs/2310.17946v1 | {
"authors": [
"Jorge Olmos-Trigo"
],
"categories": [
"physics.optics"
],
"primary_category": "physics.optics",
"published": "20231027074107",
"title": "The Stokes Vector Measurement: A Paradigm Shift in Electric-Magnetic Light Distinction"
} |
Universal Virasoro Constraints for Quivers with Relations Arkadij Bojko January 14, 2024 =========================================================Author contribution A.S. and G.M.F. conceived the idea and design, developed the mathematical framework, performed all simulations, analyze and interpret the results and wrote the manuscript.D.G and S.A.L. provided critical discussions, guidance, and supervision throughout the research process.In the age of information abundance, attention is a coveted resource. Social media platforms vigorously compete for users' engagement, influencing the evolution of their opinions on a variety of topics. With recommendation algorithms often accused of creating “filter bubbles”, where like-minded individuals interact predominantly with one another, it's crucial to understand the consequences of this unregulated attention market. To address this, we present a model of opinion dynamics on a multiplex network. Each layer of the network represents a distinct social media platform, each with its unique characteristics. Users, as nodes in this network, share their opinions across platforms and decide how much time to allocate in each platform depending on its perceived quality. Our model reveals two key findings. i) When examining two platforms — one with a neutral recommendation algorithm and another with a homophily-based algorithm — we uncover that even if users spend the majority of their time on the neutral platform, opinion polarization can persist. ii) By allowing users to dynamically allocate their social energy across platforms in accordance to their homophilic preferences, a further segregation of individuals emerges. While network fragmentation is usually associated with “echo chambers”, the emergent multi-platform segregation leads to an increase in users' satisfaction without the undesired increase in polarization. These results underscore the significance of acknowledging how individuals gather information from a multitude of sources. Furthermore, they emphasize that policy interventions on a single social media platform may yield limited impact. § SIGNIFICANCE STATEMENTUnderstanding how people consume information on social media is key to unveil its influence on political opinions and shaping effective platform regulation policies. A Facebook study suggests that their recommendation algorithm, suggesting like-minded content, has a limited impact on political attitudes, as such beliefs remain unchanged even when individuals are exposed to more diverse content. Our opinion dynamics model challenges this study, demonstrating that the interplay between diverse and homophilic content recommendations across various platforms can sustain opinion polarization, highlighting the importance of multi-media information gathering. Our model also reproduces the observed relationship between opinions' stance and social media preferences. These insights underscore the intricate relationship between recommendation algorithms, news consumption, and opinion dynamics in the digital age, with the potential to inform and guide policy decision-making within the ever-changing landscape of information dissemination. § INTRODUCTION In our contemporary landscape, online social networks have evolved into pivotal platforms for the acquisition of political news <cit.>. 
These modern communication platforms have several advantages for democracy: they simplify access to information, boost citizen participation, enable individuals to express their views, counteract misinformation, and enhance transparency and responsibility in political actions. Ideally, individuals can tap into social media to encounter a range of ideological perspectives and consequently make more informed choices <cit.>. An extensive literature, empirical <cit.> and theoretical <cit.>, pertains to how social media influence opinion dynamics, fostering (or not) the appearance of “echo-chambers” where individuals are mainly connected with like-minded peers. Echo chambers are often believed to stem from the recommendation systems utilized by social media platforms, which tend to link individuals with similar views. This phenomenon contributes to the rise of opinion polarization <cit.>. Furthermore, these automated algorithms interact with the cognitive limitations of individuals, who tend to gravitate towards information that aligns with their existing beliefs and actively avoid contradictory information <cit.>. It is thus crucial to distinguish between the influence of algorithms and inherent human tendencies when examining the genesis of echo chambers. A recent empirical study <cit.> has shown how increasing the amount of cross-cutting content on Facebook does not significantly alter political opinions, thereby suggesting that social media recommendation algorithms may not contribute to opinion polarization. They note however that political information on Facebook is mainly incidental (6.7% of the total news consumption), and thus people might get political information from other sources. More in general, the above cited studies focus on the effect of only one social media platform, while contemporary news consumption is characterized by its reliance on a multitude of sources <cit.>. People exhibit different news repertoires depending on a variety of needs <cit.>. Several scholars <cit.> stress that an adequate study of political opinions must consider the interplay between news repertoires and political communication processes. Here, we answer to the following question: what are the implications of social media platforms competing for users' attention on opinion dynamics? To address this, we develop a model of opinion dynamics which integrates three main ingredients. i) People can connect on different platforms, each platform represented by a layer in a multiplex network. These networks differ in their recommendation algorithms (a single parameter representing its homophily) and in their political focus. ii) Users' opinions evolve according to a well-known non-linear model <cit.>, on the basis of interactions taking place on the multiplex. Opinions depend on the social interaction strength, issue controversy and the heterogeneous activity profile on social media. iii) Users allocate their time among the different social media platforms, depending on their personal preferences and limited information processing.Our model provides two pivotal insights. Firstly, when considering two platforms — one governed by a neutral recommendation algorithm and the other by a homophily-centric algorithm — we find that even with a user majority on the neutral platform, opinion polarization can endure. Secondly, as users dynamically allocate their social engagement across platforms based on their (strong or weak) homophilic inclinations, agents manifest a pronounced separation across platforms. 
While most associate network fragmentation with “echo chambers”,this emergent multi-platform segregation boosts user satisfaction without increasing polarization. These findings highlight the importance of recognizing the multifaceted avenues through which individuals assimilate information.§ RESULTSAs detailed in Sec. <ref>, we consider a system composed by Γ social platforms populated by N agents (see Fig. <ref>) having continuous opinions x_i(t)∈ (-∞,+∞), i={1,...N}. The opinions evolve according toẋ_̇i̇=-x_i+∑_γ=1^Γ(K^(γ)∑_j=1^N A_ij^(γ)(t)tanh(c x_j)), a generalization of a well-known model <cit.>. In absence of social interactions, i.e. the second addend on the r.h.s. equals zero, all opinions relax to x^*=0. This assumption allows us to study the network influence on opinions, isolating it from the other possible causes of polarization such as identity politics <cit.>, cognitive biases <cit.> and economic inequality <cit.>. The tanh(cx) term reflects the fact that the opinion change for each interaction is limited. The parameter c represents how controversial a topic is. For c high enough, all opinions equally contribute to the dynamics since |tanh(cx)| saturates to 1, meaning that people are maximally susceptible to be socially influenced. On the other hand, if c is very small, only users with extreme opinions are effectively able to influence others. The platform-dependent parameter K^(γ) represents the social interaction strength. The larger K^(γ), the larger opinions change as a consequence of given interactions. Its platform-dependence captures the idea that, for a given topic, users may consider a platform more “appropriate” than another. For instance, since the exposure to political content on Facebook is often incidental <cit.>, while users tend to consume more political news on Twitter <cit.>, it is reasonable to assume that, in the realm of politics, Facebook social interaction strength K^(FB) is lower than Twitter K^(TW). This results in users giving less credit to the political news they are exposed on Facebook. The opinion dynamics model evolves on a multiplex directed network, where each layer represents a single social platform, as pictured in Fig <ref>. At every time step (see Sec. <ref> and SI for further details), user i can be active (news producer), passive (news consumer) or both with probability a_i, p_i and a_ip_i respectively. Conditional on being active (resp. passive), user i chooses a platform γ with probability ρ_i^(γ). Note that he might be active on platform γ and passive on platform γ', each with probability ρ_i^(γ) and ρ_i^(γ'), respectively. Active users on a platform/layer contact passive users on the same platform/layer. The probability that a γ-active user i contacts a γ-passive user j on platform γ at time t is q_ij^(γ)(t) ∝ |x_i(t) - x_j(t)|^-β^(γ), leading to A_ji^(γ)(t)=1. The exponent β^(γ) represents the degree of homophily of the recommendation engine of platform γ.The model with a single platform (Γ=1) has been studied in <cit.>. Figure <ref> shows the main qualitatively different dynamics that the model exhibits, as a function of the different parameters, obtained initializing the opinions uniformly in [-1,+1]. Specifically, when K^(1)=K is small, social interaction is negligible and opinions relax to 0 (Fig. <ref>). If instead social coupling is relevant (K=3), but no homophilic recommendation engine is present (β=0), a one-side radicalization appears where all the opinions have the same sign (Fig. <ref>). 
When both K and β are big enough, opinions split into two opposite sides. The intuition is that with β≠ 0 agents tend to connect only with like-minded peers, an interaction which further polarizes users' stance. This effect makes it even more likely to connect with like-minded individuals; this vicious cycle fosters polarization (a more exhaustive phase diagram for the single-platform model is reported in <cit.>). Note that this phase manifests only if opinions are initialized with different signs. Otherwise, there is no range of parameters which leads to polarization.As anticipated in Sec. <ref>, we ask whether polarization can persist when users spend a tiny fraction of their time on politically-oriented social media with an homophilic recommendation engine, while engaging the rest of their time on a politically neutral platform. To explore this scenario, we consider Γ=2 platforms and stationary and homogeneous allocation probabilities, i.e. ρ_i^(γ)(t)=ρ^(γ) for all i and for γ={1,2}.Clearly, ρ^(1)+ρ^(2)=1. Both the assumptions of stationarity and homogeneity will be later relaxed. Platform 1 has a set of parameters such that, if users were only there, opinions would converge to neutral consensus (β^(1)=0 and K^(1) small). On the other hand, platform 2 is assumed to adopt an homophilic recommendation engine, which translates in β^(2)≠ 0. The social interaction strength K^(2) is left as a varying parameter, meaning that platform 2 could have exhibited both neutral consensus or polarization if it were the only platform, depending on its value. We are then interested in exploring the opinion dynamics for different values of ρ^(1) and K^(2). The former represents how long users engage on platform 1 (the “politically neutral” platform); the latter captures how “polarizing” platform 2 is.We define the rescaled vector of opinions at equilibrium as 𝐱={x_1^(eq)/K^(1)ρ^(1)+K^(2)ρ^(2),…,x_N^(eq)/K^(1)ρ^(1)+K^(2)ρ^(2)} in order to compute a set of three metrics which allow us to distinguish different opinion phases. In particular, such metrics are the standard deviation of the opinions σ(𝐱), the absolute value of the average opinion |μ(𝐱)| and the absolute value of the average opinions' sign |⟨sign(𝐱) ⟩|. In Figure <ref> we show the results of our analysis for β^(1)=K^(1)=0 and β^(2)=3. We can observe three main phases. i) Neutral Consensus (σ(𝐱) ≈ 0, μ(𝐱) ≈ 0), observable when platform 2 is not polarizing enough (i.e. K^(2) not high enough) w.r.t. the time spent on the neutral platform 1[The neutral consensus phase can have both |⟨sign(𝐱) ⟩| ≈ 0 and |⟨sign(𝐱) ⟩| ≈ 1, as the opinions never really reach exactly 0.]. ii) Radicalization (|μ(𝐱)|>>0, ⟨sign(𝐱) ⟩=1) is evident by the fact that all opinions share the same sign. Such phase is driven by an initial relaxation towards x_i(t)≈ 0 ∀ i due to the neutral platform. Then, when close to 0, opinions start to share the same sign and are progressively amplified by the polarizing (now, rather, radicalizing) platform. iii) Polarization (σ(𝐱)>>0, ⟨sign(𝐱) ⟩<1) emerges if platform 2 can sustain diverging opinions, i.e. if K^(2) is big enough to off-set the time users spend on the neutral platform ρ^(1). The take home message is that polarization can persist even when users spend most of their time on a politically neutral platform (ρ^(1) >0.5), thus suggesting the importance of considering that users gather information from different sources. In Fig. <ref> of the SI, we show a similar phase diagram for a different value β^(2). 
The qualitative picture remains the same.The above results are obtained by assuming that users' taste for social media platforms may depend on factors which are not captured in the model (e.g. better user interface), therefore we had an homogeneous and stationary allocation probability ρ={ρ^(1), ρ^(2)}. Hereafter, we suppose that users dynamically choose their Social Media Repertoire (SMR), depending on the perceived political quality of platforms. Based on the psychological theory of optimal distinctiveness <cit.>, we imagine that each user looks for a trade-off between assimilation (homophily) and differentiation (debate). As detailed in Sec. <ref>, we capture such desired balance into the (user-dependent) parameter ϕ_i, which represents the desired fraction of “far” opinions (i.e. contributing to differentiation) user i wants to be exposed to. In particular, inspired by bounded confidence theory <cit.>, user i considers “far” opinions those x such that |x_i-x|>r. The others are considered “close” opinions, contributing to assimilation. Thus, while in bounded confidence theory users do not engage with peers having far opinions, we relax this hypothesis by introducing ϕ_i, which can be seen as user i's desired probability to contact a distant peer. The idea is to capture the observed desire of debate. Indeed, as reported in <cit.>, despite the general tendency for social media networks to form homogeneous communities, networks formed through reply-to messages reveal a users' stance heterophily, with individuals using replies more often to express divergent opinions. We define the utility (i.e. satisfaction) of each user as U_i(t)=-(f_i(t)-ϕ_i)^2, where f_i(t) is the fraction of distinctiveness experienced by user i, which is compared to the desired one. Clearly, f_i(t) depends on the SMR of user i (i.e. ρ_i(t)={ρ^(γ)_i(t)}_γ=1^Γ), since user's exposure to distinctiveness and assimilation depends on the connections formed on each platform, which in turn depend on his allocation probability. We assume therefore that user i dynamically updates ρ_i(t) in order to maximize his U_i(t) (see Sec. <ref> and SI for additional details).In the context of platforms' battle for users' attention, social media are interested in maximizing users' satisfaction (which translates in higher activity, thus revenue).We will now show how a market populated by two platforms achieves higher average satisfaction without the undesired drawback of increasing polarization. This is surprising, as one would expect that the additional degree of freedom increases social fragmentation, thus increasing polarization.Suppose that each platform can vary its characteristics by tuning its β^(γ)[We assume that K^(γ)=K for all γ, implying that they all have the same political focus.]. First, we consider a single platform (i.e. Γ=1) and consider K=c=3, r=2 and ϕ_i=0.2 ∀ i. In this setting, the highest single-platform average utility ⟨ U ⟩_1 is reached in β^*=3, i.e. ⟨ U ⟩_Γ=1(β^*)=⟨ U ⟩_1. On the other hand, a market populated by Γ=2 platforms can result in an higher satisfaction. Figure <ref> summarizes our results by showing the variation of average utility and standard deviation in passing from Γ=1 to Γ=2 for different values of β^(1) and β^(2). In particular, we define Δ⟨ U ⟩(β^(1),β^(2)) = ⟨ U⟩_Γ=2(β^(1),β^(2))-⟨ U ⟩_1 and Δσ (𝐱; β^(1),β^(2)) = σ_Γ=2(𝐱; β^(1),β^(2))-σ_1(𝐱), where σ_1(𝐱) = σ_Γ=1(𝐱; β^*). 
Even if in the figure we reported, for the sake of completeness, the values corresponding to β^(1)=0 or β^(2)=0, in the following discussion we are going to neglect those points. The reason is that they correspond to a radicalized regime, which is highly undesired in a democracy, which lies on dialogue and debate. Thus, restricting the attention to(β^(1), β^(2)) ∈ [1,5]^2, the average utility increases for many couples of values with respect to the best possible single-platform average utility. The reason is that users can dynamically change their SMR, thus allocating their time among platforms to satisfy their desired differentiation ϕ. Moreover, focusing our attention on the point (β^(1)=2, β^(2)=1) (or, by symmetry, (β^(1)=1, β^(2)=2)), which is arguably very close to the two-platform optimum, we can see that the corresponding variation of polarization is almost zero, i.e. Δσ(𝐱; 2, 1) ≈ 0. As we show in Fig. <ref>, the explanation for higher satisfaction without increasing polarization is the emerging quasi-segregation between moderate and extreme opinions. The moderate individuals, in absence of the extreme ones who populate another platform, get radicalized less, effectively reducing polarization. This is in qualitative agreement with findings on media habits and opinion stance, where individuals at both the left and right ends of the spectrum tend to be clustered around a single media source[Note however that there are some important differences between liberals and conservatives that the model cannot capture] <cit.>. So, by focusing only on users' satisfaction, the competition of platforms brings notable benefits. Finally, the points for which satisfaction decreases with respect to Γ=1 are those where both β^(1) and β^(2) are large enough. In this regime, users connect only to very close peers, and can not find an alternative, cross-cutting platform. Thus, they cannot satisfy their desire for differentiation encapsulated in ϕ=0.2. In SI Fig. <ref> we show an alternative situation, where ϕ=0.1. In that case, users' desire for differentiation is too low and maximizing the utility leads to an increased polarization.§ DISCUSSION Online social networks have become crucial for political news consumption <cit.>. Existing studies <cit.> explore the impact of recommendation algorithms on opinion dynamics, with some findings on Facebook's limited influence <cit.>. However, news consumption spans various platforms <cit.>, and individual preferences <cit.> play a role. It is thus crucial to consider the interplay between news repertoires and political communication processes to understand political opinions fully.Here we show that when examining two platforms — one with a low political focus and a neutral recommendation algorithm, and another more politically oriented with a homophily-based algorithm — even if users spend the majority of their time on the neutral platform, opinion polarization can persist. This result casts some doubts on the generality of conclusions drawn from the recent study on the impact of Facebook recommendation algorithm on political opinions <cit.>, which shows that people's political attitudes did not significantly change when they were exposed to more diverse content. Indeed, the consumption of political news on Facebook is incidental (low political focus), and the polarization might originate from (little) time spent on other news sources. 
When we allow users to dynamically optimize their satisfaction by adjusting their SMR, the further segregation of individuals brought by multiple platforms leads to an increase of global satisfaction without, unexpectedly, an increase of polarization, with respect to the single platform case. In fact, more active (i.e. more extreme) users separate from users with “low” activity (i.e. more moderate), thus the latter are no longer influenced by the former, resulting in a reduced (or, at least, not increased) polarization.While our multiplatform opinion dynamics model offers a comprehensive framework for understanding interactions across diverse platforms, it is imperative to acknowledge the inherent constraints and nuances that underpin its design. The following considerations delve into these aspects, shedding light on both the model's robustness and areas that warrant further refinement. In the first place, while our model describes social media platforms, there exist traditional media not captured by it, which influence users' opinions. Broadcasting platforms such as TVs and radios are an example, contributing to opinion formation in a way which is structurally and temporally different from social media. With reference to our model, they would act as hub nodes in the network of interactions, each diffusing a certain opinion to other nodes. However such nodes, unlike “normal” ones, do not change (or, rather, change very slowly) their opinions over time. Thus, effectively, traditional platforms can be seen as idiosyncratic relaxation terms in Eqn. <ref>, as opposed to having all opinions relax to zero x_i(∞)=x^*=0 when K^(γ)=0 for all γ. The interplay between these two sources of information is left for future studies. Regarding how users are affected by peers, while we assume that they converge (i.e. their opinions approach each other when interacting), other models <cit.> consider also the presence of “polarizing” nodes, whose opinions move away if exposed to those of opposite sign. Since this characterization does not qualitatively change our results, we neglect it. In fact, as we have shown, both polarization and consensus emerge without the introduction of this additional feature. Our last assumption is the conservation of activity over time, i.e. a_i(t)=a_i ∀ i,t. In modeling users' satisfaction across platforms, it would be reasonable to require that users' activity (which corresponds to users' engagement on social platforms) could decrease if they are not able to reach a certain degree of satisfaction (i.e. a certain value of their utility). However, modeling such behavior would require a clear definition of what a decreasing activity really mean — do users move their attention to traditional media? Do they allocate their SMR on a not-considered social platform? Or do they literally stop consuming news? — We believe that it is reasonable to assume the considered system (which, we note, can be made arbitrary large by increasing N and/or Γ) as an isolated social system, in which social energy (i.e. users' activity) is conserved[Although note that daily time spent on social media by internet users worldwide has been steadily increasing from 2012 to 2023<cit.>.]. In fact, it is possible to think that our model is valid in a span of time T in which activities remain constant, meaning that they change on a time scale much greater than T.Finally, we want to stress how β^(γ) encapsulates the role of the recommendation engine of platform γ. 
Typically, recommendation systems aim to infer users' needs, tastes and preferences, on the basis of their behavior on the platform, in order to suggest them the “best” product <cit.>. In terms of social media platforms, recommendation systems try to connect people who are “similar” according to some metric. Obviously, different platforms collect different kind of users' data, deploy different recommendation algorithms and have different purposes. Such dissimilarities translate into a non-homogeneous degree of recommendations' homophily. We imagine that the distances between users, evaluated by the recommendation algorithms by considering the huge amount of data social media collect, can be projected into 1-dimensional distances between political opinions, further distorted by the degree of homophily β^(γ). Such assumption is based on the well-known phenomenon of issue alignment <cit.>, i.e. individuals are much more likely to have a certain combination of opinions than others. On this note, an interesting extension of the model would be to consider a multi-topic opinion dynamics on a multiplex network; the idea would be that on a given layer/platform it might be more likely to talk about a topic than another.§ MODEL We consider a system of N agents, where each agent i has a one-dimensional continuous opinion variable x_i(t) ∈ (-∞, +∞). The sign of x_i describes the agent’s stance (e.g. being pro or against abortion). The absolute value of x_i quantifies the strength of this opinion: the larger |x_i|, the more extreme the stance of agent i. Moreover, we also consider that agents can interact on Γ different platforms. Each agent allocates his “time” in these Γ platforms. In the following sections we detail how opinions and interaction networks evolve and how each agent divides his attention among the platforms (see SI for a detailed outline of the model simulation).§.§ Opinions updateThe opinion dynamics is driven by the interactions among agents, captured by a system of N coupled ordinary differential equations,ẋ_̇i̇=-x_i+∑_γ=1^Γ(K^(γ)∑_j=1^N A_ij^(γ)(t)tanh(c x_j)).i ∈{1,..N}K^(γ) > 0 represents the social interaction strength among agents on platform γ. The tanh(c x), with c > 0, embodies the fact that an agent i influences others in the direction of his own opinion, but such influence is “bounded”. The term A_ij^(γ)(t) is the entry of the N× N temporal Adjacency matrix A^(γ)(t) corresponding to platform γ. §.§ Network updateAt every time step, user i can actively engage with the social media on platform γ with probability a_i ρ_i^(γ), and/or passively engage on platform γ' with probability p_i ρ_i^(γ') [Note that we can have γ=γ'.]. Active users contact users that are passive on the same platform, meaning that the opinions of the former affect the opinions of the latter. For example, if j is active on γ and contacts i, who is passive on γ, then A_ij^(γ)=1. The term a_i (resp. p_i) is called activity (resp. passivity). Moreover, ρ_i^(γ) represents the probability that user i, conditional on being active/passive, chooses platform γ. Of course, ∑_γ=1^Γρ_i^(γ)=1. We assume that a_i ∈ [ϵ, 1] for all i, and that the activities are distributed according to a power law F(a) ∼ a^-η. Moreover, we assume for simplicity that p_i=1 ∀ i.The temporal adjacency matrices A_ij^(γ)(t) are assumed to evolve according to an activity-driven (AD) temporal network <cit.>. At each time step, each γ-active user contacts m γ-passive users. 
It is further assumed that these links are reciprocated with probability r. The probability q_ij^(γ) that agent i contacts agent j on platform γ is given by the following expression:q_ij^(γ)=|x_i-x_j|^-β^(γ)/∑_k∈𝒫^(γ)|x_i-x_k|^-β^(γ)where 𝒫^(γ) is the set of γ-passive users and β^(γ)≥ 0 captures the degree of homophily of γ's recommendation algorithm. §.§ SMR update While for a part of our results we considered ρ_i^(γ)=ρ^(γ) homogeneous and constant in time, we also developed a model for which it changes on the basis of the observations of each user. In particular, we suppose that users allocate their “social energy” among platforms depending on their perceived quality. Grounded on the well-known psychological theory of optimal distinctiveness <cit.>, individuals desire a balance of between assimilation (homophily) and differentiation (debate)(see <cit.> for an example of discrete opinion dynamics with optimal distinctiveness preferences). Borrowing from bounded confidence theory <cit.>, an agent with opinion x considers those with opinion in [x-r, x+r] contributing to assimilation, while the others to differentiation. Formally, if α_i^(γ)(t)(resp. δ_i^(γ)(t)) is the number of in-degree connections contributing to assimilation (resp. differentiation) on γ at time t for user i[α_i^γ(t)+δ_i^γ(t) is always equal to the total in-degree of node i. ], we define his utility as: U_i(t_n) =-(f_i(t_n)-ϕ_i)^2 =-(∑_γ∑_m=n-L+1^nδ_i^(γ)(t_m)/∑_γ∑_m=n-L+1^nδ_i^(γ)(t_m)+∑_γ∑_m=n-L+1^nα_i^(γ)(t_m)-ϕ_i)^2.Here f_i(t) is the experienced distinctiveness, while ϕ_i the desired one. The parameter L encapsulates the time window over which users perceive the distinctiveness (which, in time units, is τ=Ldt), thus representing their “memory”. For example, consider Γ=2, L=1 and ϕ=0.5. If user i is connected to user j on platform 1 with |x_j-x_i|<r, and to user k on platform 2 with |x_k-x_i|>r at time t_n, then it follows that δ_i^(1)(t_n)=α_i^(2)(t_n)=0 and δ_i^(2)(t_n)=α_i^(1)(t_n)=1. Thus, f_i(t_n)=ϕ_i=0.5 and U_i(t_n)=0 is maximized. It remains to specify how users maximize their utility. They can only control the fraction of time spent on each platform γ, proportional to ρ_i^(γ)(t_n). We assume that users update their platform allocation every L steps, i.e. their preferences stay constant during the time interval over which assimilation and differentiation are experienced[In other words, users gather experience before changing their mind about a given social media.]. For this reason, each user can estimate the quality of platforms given her current preferences. Such estimates are in turn used to update the platform allocation, on the basis of an anticipated utility. It is reasonable to assume that each user acts as if the assimilation and differentiation experienced on each platform are proportional to the time spent on it. For this reason, it is possible to write:∑_m=n-L+1^n δ^(γ)_i(t_m) =L ω_δ_i^(γ)(t_n)ρ_i^(γ)(t_n),∑_m=n-L+1^n α^(γ)_i(t_m) = Lω_α_i^(γ)(t_n)ρ_i^(γ)(t_n).Here, ω_δ_i^(γ)(t_n) andω_α_i^(γ)(t_n) are the differentiation and the assimilation slopes estimated by the user i. Formally, defining ρ=(ρ^(1),…,ρ^(Γ)), each user aims to maximize:Û_i(ρ, t_n)=-(ω_δ_i(t_n)·ρ/ω_δ_i(t_n)·ρ+ω_α_i(t_n)·ρ-ϕ_i)^2,where ω_δ_i(t_n) = (ω^(1)_δ_i(t_n),…,ω^(Γ)_δ_i(t_n))is the vector of differentiation slopes estimated using user's previous interactions (an analogous definition holds for the assimilation slopes ω_α_i(t_n)). Û_i(ρ, t_n) is the utility estimated by user i at time t_n. 
He assumes that it represents his future satisfaction (i.e. for t>t_n) depending on how he reallocates his ρ_i(t_n+1). This assumption rests on the hypothesis that the slopes ω_α_i and ω_δ_i are roughly constant in time (in reality, they can vary due to the reallocation of all the other agents). User i then updates according to: ρ_i(t_n+1) = ρ_i(t_n) if n/L ∉ℕ, and ρ_i(t_n+1) = argmax_ρ Û_i(ρ, t_n) otherwise. Clearly, the utility of user i depends not only on his ρ_i, but also on how the other users have allocated their time on social media. Let us stress that in our model, users can only decide whether to be on a particular platform, but the connections are decided entirely by the recommendation algorithm. A justification for this comes from the work of <cit.>, which shows that on Facebook the number of new links per day increased abruptly after the introduction of a "who to follow" recommendation algorithm. In other words, the individual agency in choosing connections is negligible with respect to the volume of content suggested by the platform itself.
§.§ Combined dynamics We focus on a regime in which the three processes described above have different time scales. As already mentioned, we consider the network dynamics to be much faster than the opinion dynamics. This is especially true in the online social media context. In particular, for each network update we integrate Eq. (<ref>) for dt=0.001. Moreover, the SMR dynamics lies between the two, representing the fact that the choice of the allocation among platforms is faster than the opinion dynamics, but requires a significant number of observations and interactions. Specifically, each user updates his preferences ρ_i(t_n) every L=100 time steps. In short, every L=100 network updates each user modifies his allocation preferences, and every 1000 network updates the opinions evolve by one unit of time.
§ CONCLUSIONS Our study examines the impact of social media competition for users' engagement on opinion dynamics. First, we show that opinion polarization can persist as long as users spend a fraction of their time on a homophilic platform, highlighting the importance of multi-source news diets. Second, we show that individual users' preferences interact in a non-trivial way with the recommendation algorithms in the presence of multiple platforms. The model indeed predicts the observed relationship between news outlet preference and political ideology. Interestingly, a multi-platform setup may be used to curb polarization while keeping user engagement intact. To this end, it is paramount to experimentally investigate users' preferences for diversity (estimates for ϕ), either via surveys or controlled experiments. This avenue may help to shed light on healthy synergies between different social media platforms. From the revenue point of view, synergies are already well-known in this environment (think about users spending time on Whatsapp, Instagram and Facebook without ever leaving the Meta universe). Future research and efforts should thus gather cross-platform data, via surveys <cit.> or experiments, in order to fully comprehend the subtle mechanisms of opinion formation in online environments.
§ ACKNOWLEDGMENTS The work of D.G. has been supported by the European Union – Horizon 2020 Program under the scheme "INFRAIA-01-2018-2019 – Integrating Activities for Advanced Communities", Grant Agreement n.
871042, "SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics" <http://www.sobigdata.eu>, by the NextGenerationEU – National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) – Project: "SoBigData.it – Strengthening the Italian RI for Social Mining and Big Data Analytics" – Prot. IR0000013 – Avviso n. 3264 del 28/12/2021. A.S. and D.G. acknowledge support from the Dutch Econophysics Foundation (Stichting Econophysics, Leiden, the Netherlands). G.M.F. acknowledges support from the Swiss National Science Foundation (grant number P500PS-211064). We thank Fernando Santos, Yphtach Lelkes and Keena Lipsitz for enriching comments.
§ PSEUDO CODE In each time step of the numerical algorithm the opinions, the temporal matrix and the SMR are updated according to the following steps: * Each user i is only active with probability a_i(1-p_i), only passive with probability (1-a_i)p_i, active and passive with probability a_ip_i and inert with probability (1-a_i)(1-p_i). * If active (resp. passive), user i chooses the platform γ (resp. γ') on which he actively (resp. passively) engages with probability ρ^(γ)_i(t) (resp. ρ^(γ')_i(t)). Note that it could be γ=γ'. * If active on platform γ, agent i influences m distinct agents j ∈𝒫^(γ) — where 𝒫^(γ) is the set of users passively engaged on platform γ — chosen according to Eqn. (<ref>). This influence is expressed by updating the temporal adjacency matrix A_ji^(γ)(t_n) = 1. * With probability r the directed link is reciprocated, so that agent i receives influence from j, i.e. A^(γ)_ij(t_n) = 1. * Opinions x_i are updated by numerically integrating Eq. (<ref>) using the total adjacency matrix elements A_ij(t_n)=∑_γ=1^Γ A^(γ)_ij(t_n). * Each user i collects the experienced assimilation and differentiation on each platform at the time step t_n as: α^(γ)_i(t_n)=∑_j s.t. |x_i-x_j|<r A_ij^(γ)(t_n), δ^(γ)_i(t_n)=∑_j s.t. |x_i-x_j|>r A_ij^(γ)(t_n). * If mod(t_n, L) ≠ 0, then the SMR remains constant for all users, ρ_i(t_n+1)=ρ_i(t_n) ∀ i. If mod(t_n, L) = 0, each user updates his SMR according to the following steps: * estimate the differentiation and assimilation slopes ω_δ_i(t_n) and ω_α_i(t_n) according to Eqn. (<ref>); * update the SMR according to ρ_i(t_n+1) = argmax_ρ Û_i(ρ,t_n), where Û_i(ρ,t_n) is defined in Eq. (<ref>). To perform the utility maximization, we use a gradient descent algorithm on the Γ-dimensional simplex, consisting of n_GD iterations of learning rate Δ_GD. In particular, the following equation is iterated n_GD times: ρ_i(k+1)=𝐏_Γ(ρ_i(k)-Δ_GD∇Û_i(ρ_i(k), t_n)), where 𝐏_Γ is the projection on the Γ-dimensional simplex, k runs from k=0 to k=n_GD-1, ρ_i(0)=ρ_i(t_n) and ρ_i(n_GD)=ρ_i(t_n+1). * After each time step the temporal networks A_ij^(γ)(t_n) are deleted. Of course, when we considered a homogeneous and stationary (HS) SMR, steps 6. and 7. are ignored, i.e. ρ_i(t_n)=ρ_i(t_0) ∀ i,t. As done in <cit.>, we integrate Eq. (<ref>) using an explicit fourth-order Runge-Kutta method with a time step of dt = 0.01 in the case of HS SMR, and dt=0.001 in the case of evolving SMR. In the latter, we also consider L=100. This leads to the timescale separation between the network dynamics, the SMR update and the opinion evolution mentioned in Sec. <ref>. We independently sampled activities {a_i}_i=1^N from the power law F(a)=(1-η)a^-η/(1-ϵ^1-η), with parameters η=2.1 and ϵ=0.01 <cit.>. We also set p_i=1 ∀ i.
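A minimal sketch of how these activities can be drawn, by inverting the cumulative distribution of F(a) on [ϵ, 1], is given below. It is an illustration rather than the code used for the paper; the sample size and the random seed are arbitrary.

import numpy as np

def sample_activities(n, eta=2.1, eps=0.01, rng=None):
    # Inverse-CDF sampling of the truncated power law F(a) ∝ a^(-eta) on [eps, 1].
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    A = eps ** (1.0 - eta)
    # Solve (a^(1-eta) - eps^(1-eta)) / (1 - eps^(1-eta)) = u for a.
    return (u * (1.0 - A) + A) ** (1.0 / (1.0 - eta))

activities = sample_activities(800, rng=np.random.default_rng(0))
assert activities.min() >= 0.01 and activities.max() <= 1.0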
Moreover, we consider the following parameters' values: N=800, r=0.5 and c=3.Throughout all of our simulations, we start with initial opinions {x_i(0)}_i=1^N uniformly distributed in [-1,1]. In the case of dynamical SMR, we let the opinions evolve for n_boot=2000 steps before allowing users to reallocate themselves (i.e. we ignore point 7. for the n_boot steps before t_0). The reason is that we want to provide as input to our SMR update model an opinions' spectrum which is at (metastable, in the case of polarization) equilibrium . We initialize ρ_i(t_n)=1/Γ ∀ n ∈{-n_boot+1,…, 0}.Then, at t_0, we “turn on” the SMR allocation described in point 7. of the pseudo-code.§ ROBUSTNESS OF THE PHASE DIAGRAMHere we show the phase-diagram reported in the main text with β^(2)=2, i.e. assuming the polarizing platform provides more cross-cutting content. By comparing Fig. <ref> and Fig. <ref>, we can see that the green region shrinks as β^(2) decreases, in accordance with the meta-stability analysis of polarization reported in <cit.>.§ MULTI-PLATFORM SEGREGATION To understand the benefits of the multi-platform reported in the main text, we define the following quantities.f_γ^>(t) =∑_i=1^N ρ_i^(γ)(t) ℋ(a_i-θ)/∑_i=1^N ℋ(a_i-θ)f_γ^<(t) =∑_i=1^N ρ_i^(γ)(t) ℋ(θ-a_i)/∑_i=1^N ℋ(θ-a_i),where ℋ is the Heaviside function. In words, f_γ^>(t) (resp. f_γ^<(t)) is the “effective” share of users on platform γ whose activity is above (resp. below) a threshold θ.The idea is to understand whether these two classes of users exhibit qualitatively different behavior, notwithstanding their a-priori equal preferences (i.e. ϕ_i=ϕ and r_i=r for all users). Let us consider the 2 platform case reported in the main text, where β^(1)=2, β^(2)=1. In Fig. <ref>, we plot f_1^>(t) and f_1^<(t) for θ=0.1, which roughly divide the activity profile in 90 % below the threshold and 10% above. Highly active users deterministically prefer the more homophilic platform (f_1^> ≈ 1), while less active users tend to explore both, though they prevalently occupy the platform with more cross-cutting content (f_1^< ≈ 0.3)This is consistent with what we observe in reality, i.e. people with extreme opinions tend to have a more restricted outlet of like-minded sources, which further polarizes their view. On the other hand, more moderate users typically exhibit a more diverse news diet.§ ROBUSTNESS WITH RESPECT TO DESIRED DISTINCTIVENESSHere, we want to show how satisfaction and opinions standard deviation change with respect to the single-platform case for ϕ=0.1. The main physical difference with respect to the case presented in the main text (where ϕ=0.2) is that here users have an “halved” desire for differentiation, meaning that their maximal satisfaction requires a much high number of interactions contributing to assimilation, with respect to those contributing to differentiation. Figure <ref> shows indeed how users' utility is maximized in those regimes for which they are more exposed to like-minded peers, i.e. for high values of β^(1) and β^(2). However, in this case, maximizing users' satisfaction leads to an increase in polarization, meaning that if users desire high confirmation opinions tend to extremize. | http://arxiv.org/abs/2310.18309v1 | {
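To make the simulation procedure described in the PSEUDO CODE section concrete, the following self-contained sketch implements a stripped-down, single-platform version of steps 1-5 together with the opinion metrics used for the phase diagrams (standard deviation, absolute mean, and absolute mean sign). It is an illustrative re-implementation under simplifying assumptions, not the code used for the paper: a single platform, p_i = 1, no link reciprocation, no SMR update, uniformly drawn activities, and arbitrary parameter values.

import numpy as np

rng = np.random.default_rng(1)
N, m, K, c, beta, dt = 200, 10, 3.0, 3.0, 3.0, 0.01
x = rng.uniform(-1.0, 1.0, N)           # initial opinions
activities = rng.uniform(0.01, 1.0, N)  # activity profile (uniform here for brevity)

def build_network(x, activities, m, beta, rng):
    # One activity-driven snapshot: every active user i contacts m users j,
    # chosen with probability proportional to |x_i - x_j|^(-beta);
    # A[j, i] = 1 means that j receives influence from i (all users are
    # passive here because p_i = 1).
    n = len(x)
    A = np.zeros((n, n))
    for i in np.flatnonzero(rng.random(n) < activities):
        w = (np.abs(x[i] - x) + 1e-9) ** (-beta)   # homophily weights
        w[i] = 0.0
        targets = rng.choice(n, size=m, replace=False, p=w / w.sum())
        A[targets, i] = 1.0
    return A

def drift(x, A, K, c):
    # dx_i/dt = -x_i + K * sum_j A_ij tanh(c x_j)   (single-platform Eq. (1))
    return -x + K * (A @ np.tanh(c * x))

def rk4_step(x, A, K, c, dt):
    k1 = drift(x, A, K, c)
    k2 = drift(x + 0.5 * dt * k1, A, K, c)
    k3 = drift(x + 0.5 * dt * k2, A, K, c)
    k4 = drift(x + dt * k3, A, K, c)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

for _ in range(2000):   # the temporal network is redrawn at every step
    A = build_network(x, activities, m, beta, rng)
    x = rk4_step(x, A, K, c, dt)

print("sigma:", x.std(), "|mu|:", abs(x.mean()), "|<sign>|:", abs(np.sign(x).mean()))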
"authors": [
"Andrea Somazzi",
"Giuseppe Maria Ferro",
"Diego Garlaschelli",
"Simon Asher Levin"
],
"categories": [
"physics.soc-ph",
"cs.SI",
"9110",
"J.4; J.2"
],
"primary_category": "physics.soc-ph",
"published": "20231027175544",
"title": "Social media battle for attention: opinion dynamics on competing networks"
} |
[email protected] of Systems and Computer EngineeringCarleton University 1125 Colonel By Dr Ottawa Ontario K1S 5B6 The Internet of Things (IoT) is becoming an integral part of our modern lives as we converge towards a world surrounded by ubiquitous connectivity. The inherent complexity presented by the vast IoT ecosystem ends up in an insufficient understanding of individual system components and their interactions, leading to numerous security challenges. In order to create a secure IoT platform from the ground up, there is a need for a unifying operating system (OS) that can act as a cornerstone regulating the development of stable and secure solutions. In this paper, we present a classification of the security challenges stemming from the manifold aspects of IoT development. We also specify security requirements to direct the secure development of an unifying IoT OS to resolve many of those ensuing challenges. Survey of several modern IoT OSs confirm that while the developers of the OSs have taken many alternative approaches to implement security, we are far from engineering an adequately secure and unified architecture. More broadly, the study presented in this paper can help address the growing need for a secure and unified platform to base IoT development on and assure the safe, secure, and reliable operation of IoT in critical domains.A Survey of the Security Challenges and Requirements for IoT Operating Systems Alvi Jawad================================================================================§ INTRODUCTIONThe Internet of Things (IoT) defines the next generation of automated communication. IoT refers to an expanding interconnection of billions of embedded devices via the internet to enable information exchange and interaction among its components. The development and use of IoT devices are increasing at a staggering rate. According to a recent report from Gartner, the IoT industry is expected to grow to 5.8 billion enterprise and automotive endpoints in 2020[https://www.gartner.com/document/3955769]. However, this sudden proliferation also increases the risk associated with insecure IoT devices. The rapid-growth of exposed end-points and the inherent complexity of the network widens the attack surface, making the task of ensuring safety, security, and reliability for the entire system exceedingly difficult. The Mirai Botnet attack in 2016, affecting millions of poorly configured IoT devices to stage a massive distributed denial-of-service (DDoS) attack, is a prime example of this <cit.>. A broader exploit could potentially lead to serious ramifications in safety-critical IoT systems such as the Internet of Medical Things (IoMT), autonomous vehicular networks, smart grids, and many more. Our inadequacy of system knowledge, coupled with increasingly resourceful adversaries, only intensifies this problem. Most IoT systems are open and scalable, meaning that the system components frequently participate in intensive information exchange within a growing environment. As the functionality of many devices is highly reliant on the data sent over the internet, ensuring the confidentiality, integrity, and availability of both stored and in-transit data becomes a top priority. This concern necessitates strong encryption algorithms and the need for mandatory end-to-end security <cit.>. On the other hand, a significant number of issues can also result from the complex and poorly understood interactions among individual connected devices. 
If a user connects to a malicious or compromised device infected by a lurking malware, it could result in all the user's devices being contaminated as end-devices are rarely equipped with strong security defenses <cit.>. Thus providing in-built security mechanisms for individual devices within the IoT system also becomes a necessity.However, the objects (e.g., sensors, actuators, and other heterogeneous devices) connected to an IoT network are heavily constrained in terms of computing power, available memory, and energy capacities <cit.>. A typical example is wireless sensor networks (WSN), whose tiny sensor nodes require them to be extremely energy-efficient and work with very limited physical and logical resources <cit.>. As a result, conventional means of securing a system, such as robust cryptographic algorithms, cannot be used as they heavily impact the performance, memory, and power consumption of these devices. Alongside these limitations, there are also inherent design and development challenges, and difficulties resulting from the heterogeneity and large scale development of IoT devices. The development of a secure operating system (OS) can resolve many of these challenges and direct the secure development of individual devices and applications. Typical OSs have minimum processing requirements that often exceed the capabilities of resource-starved IoT devices. As such, lightweight OSs specifically designed to work within scarce resource constraints, such as the Tiny operating system (TinyOS) and Contiki, have recently garnered much research attention in WSN <cit.>. However, the studies and development have largely focused on the feasibility of running on resource-starved devices and less on developing secure implementations. This also underscores the importance of proper formulation, implementation, and enforcement of standard security requirements to guide the evolution of IoT OSs <cit.>. The main contributions of this paper are as follows. * Identification of the existing and emerging security challenges in IoT and classifying them into common and IoT-specific unique challenges.* Determination of the role of an OS to resolve many of these challenges* Defining specific security requirements to guide the secure development of an IoT OS * Security-centric assessment of some of the most prominent OSs in the IoT domain by comparing and contrasting their approach to security with the security requirements, and discussing security evaluations done on them. This knowledge will help reinforce our understanding of deficiencies in the currently available IoT OSs and the risks of using them in safety-critical IoT systems. We will also gain insights into requirements to be considered when developing a unified OS for IoT, and how the security development of the current OSs should progress. It is important to note that this research was done as part of a graduate course and is far from exhaustive. The author is no longer working on it and is sharing the findings with others in hopes of benefitting someone out there.The rest of this paper is organized as follows. Section 2 briefly discusses the terminologies and concepts used for the remainder of the paper. Section 3 explores and classifies common and unique security challenges. Section 4 follows from the previous discussion and examines the role of an OS in facilitating secure IoT development. Section 5 defines specific security requirements for architecting an IoT OS. 
Section 6 surveys some of the dominant IoT OSs and relevant security-centric evaluations. Finally, section 7 details related work in the literature, and section 8 concludes and describes our envisioned outline for future work. § BACKGROUNDIn this section, we discuss some terminologies and concepts used throughout this paper. §.§ Classes of Constrained DevicesThe diverse resource capabilities of IoT devices mandate the need to determine an area of focus to develop targeted security solutions. According to the Internet Engineering Task Force (IETF) classification <cit.>, the three subcategories of constrained IoT devices are illustrated in table <ref>. Class 0 devices are the most resource-starved (e.g., customized sensor-like nodes having <<10 KiB of RAM and <<100 KiB of flash memory) among all IoT devices and often require the assistance of larger devices to act as gateways or proxies to participate in internet communications. These extreme resource constraints prohibit the implementation of any rigorous security mechanism, such as strong cryptographic algorithms. These devices are often minimally secured by an initial configuration and rarely reconfigured during their lifetime.Class 1 devices are less constrained and can be implemented as peers into an IP network with these limitations in mind. However, they still cannot make use of a traditional full protocol stack, such as HTTP, transport layer security (TLS), and related security protocols. These devices are capable of using lightweight protocols specifically designed for constrained devices (e.g., constrained application protocol (CoAP)) and does not require the need of a gateway to perform meaningful conversations with other nodes. Class 2 devices and beyond are much less constrained and typically capable of supporting most of the protocol stacks used on notebooks and servers. However, they can still benefit from using lightweight and energy-efficient implementations, reducing bandwidth consumption and development cost while increasing interoperability and the available resources to run applications.Due to the extreme resource constraints, software developed for class 0 devices is often bare-metal and very hardware-specific <cit.>. The development of an OS for these devices also emphasizes task specialization, and security concerns are often ignored to attain maximum device lifetime. Therefore, our analysis, meant to facilitate the development of a secure and unifying OS for all IoT devices, does not include class 0 devices. Further mentions of resource-starved devices in this paper will always refer to class 1 and class 2 devices unless otherwise mentioned. §.§ Security PropertiesIn an IoT ecosystem, the inhabiting systems and components engage in frequent internal and external information exchange with the surrounding environment. The most significant security challenge faced by the IoT is the protection of the enormous amount of data that they store, use, or transmit. In this section, we briefly outline the security properties used throughout this paper, the preservation of which is integral to building an adequately secure and resilient IoT architecture. empty §.§.§ Confidentiality and PrivacyData collected and shared between devices may contain a large amount of private information to provide better services and fulfill personal preferences <cit.>. Many applications, especially those deployed in the healthcare sector, generate traceable signatures of the location and the behavior of users. 
An unauthorized entity gaining access to a user device's stored and transmitted data can analyze the data to jeopardize user privacy. For example, a compromised security camera can give the attacker information about when a house or industrial location is occupied and when it is not and act as an exposed entry point to more vulnerable devices connected to the same network. This can engender serious confidentiality and privacy concerns, and thus personal and confidential information misuse must be prevented. Other examples of attacks that violate these properties include eavesdropping, side-channel attacks, traffic analysis, cryptanalysis attacks, etc. <cit.>.§.§.§ Integrity and AuthenticityIoT represents the harmony of the digital and the physical world, where an attack that manipulates information on the internet can lead to controlling actuation in the physical world <cit.>. Safety-critical devices (e.g., health monitoring devices, vehicular sensors, smart meters) depend heavily on the correctness and accuracy of the received data to make near-real-time decisions. The critical decisions are made based on the assumption that the data collected has not been manipulated in transit, and failure to prevent the unauthorized modification and corruption of the transmitted data can lead to aberrant and unwanted behaviors <cit.>.For proper scaling for IoT development, trust on the devices, and the architecture that it runs on will be a fundamental issue. Without proper authentication mechanisms in place, an adversary can masquerade as a legitimate entity within the network and use spoofed or compromised nodes to affect the operation and performance of the system. Therefore, protecting the integrity of both stored and in-transit data and validating its authenticity becomes a pressing issue as lack of countermeasures can cause severe damage to critical infrastructures such as health sectors and smart grids, and can even lead to loss of human lives. Attacks against the integrity and authenticity properties include physical and remote access tampering, spoofing, message modification and corruption, node subversion, and routing attacks, among others <cit.>.§.§.§ Availability and ResiliencyThe connected nature of the IoT infrastructure leaves it highly exposed to a range of network attacks. Unlike traditional information technology (IT) systems, critical IoT infrastructure (e.g., smart grid, Industrial IoT (IIoT)) cannot go through a complete system shutdown in the event of an incident, making availability a top priority <cit.>. A critical industrial process may be highly reliant on the accurate and timely measurement of temperature, whereas a targeted DDoS attack can make resources unavailable to the endpoint <cit.>. Furthermore, the compromise of a single endpoint can lead to subsequent exploitation of other devices on the network, potentially seizing control of critical operations. Therefore, we need concrete resiliency countermeasures to protect the critical functionalities of the system to allow critical operations to continue even when some parts of the system are compromised. The issues worsen as devices previously without connectivity become increasingly connected to the IoT, inherit the old and new vulnerabilities, and enlarge the attack surface even further. empty§ CHALLENGES FOR DEALING WITH THE SECURITY VULNERABILITIES IN IOTSecurity is an attribute that is quickly gaining research attention as we move towards an era of ubiquitous connectivity. 
As our dependence on software-intensive devices grows, the security issues related to individual devices and connected systems are becoming ever more prominent. While a need to address these issues has existed from long ago, several reasons inhibit us from achieving infallible security. This section explores the common and unique security challenges related to IoT development and presents a classification of the challenges, as illustrated in figure <ref>. §.§ Common Security ChallengesFirst, we take a look at the security challenges that we must face regardless of the type of system to better understand the fundamental obstacles in the path of achieving definitive security.§.§.§ Defining Adequate Security:For one, security is a property that is very hard to define and measure in a system. Security is not compositional, meaning a system built from secure components is not necessarily secure itself. Therefore, even if we could prove that all the devices in an IoT system are individually secure, we cannot guarantee that the system is impervious to attacks. Although there exist standard security properties of a system (i.e., the CIA triad), to account for all the real-world heterogeneous systems, authentication and accountability had to be added later to form the extended CIA model. Even then, impenetrable security remains an intractable goal, and what constitutes adequate security becomes a subjective question defined heavily by the type of system and the system stakeholder's security requirements.§.§.§ Dynamic Nature of Security:Another challenging aspect is the fact that security is dynamic over the lifetime of a product, and security controls must be implemented in every stage of the product lifecycle. Any component or sub-system, previously determined to not need any protective mechanism, can be deemed to be security-critical after the discovery of an exploit. This is especially concerning in emerging technologies dealing with enhanced connectivity like IoT. While the devices hitherto without connectivity features get increasingly connected to the internet, new vulnerabilities ensue. A single insecure device can then open a loophole in the system and leave the entire network exposed to an adversary. In the past few years, DDoS attacks on the IoT, such as the Mirai, Hajime, and BrickerBot attacks have proven just how severe the consequences can turn out to be <cit.>. The integration of novel and upcoming technologies often introduces modifications to the system behavior and necessitates redefining the security requirements and revisiting the security controls already in place.§.§.§ Presence of an Intelligent Adversary:What makes security different from safety issues is the presence of a group of adversaries constantly trying to compromise the system through existing and undiscovered vulnerabilities. This new factor introduces the unpredictability and adaptability of human behavior and precludes the use of formal methods such as probabilistic measures to specify and verify system security. While a security engineer must constantly monitor and provide defensive mechanisms for every single vulnerability in a system, an attacker needs to find only a single vulnerability to disrupt the safe and effective operation of that system. The task gets even more challenging in IoT as the defenders not only need to defend from existing and undiscovered attacks but also attacks that will inevitably emerge with the integration of every novel IoT technology. 
As these adversaries become more proficient and gain access to tools and attacks with increased magnitude and sophistication, the need to secure every embedded device assisting our everyday life becomes a priority. §.§ Complex Trade-offs We then examine the security challenges that are exclusive to IoT systems and focus our discussion on challenges stemming from the heavily resource-constrained nature of devices. §.§.§ Security vs. Resource Consumption: Many IoT devices, especially purpose-specific sensors and actuators, are extremely constrained in terms of processing power, memory, and energy consumption <cit.>. While there are strong and robust cryptographic algorithms available, they also strain the already limited resources on these devices. Public-key operations are typically resource-intensive, and many IoT devices may not have enough memory to store a certificate or perform a cryptographic operation to validate it <cit.>. This can result in degraded performance, and thus higher latency in safety-critical applications such as wireless pacemakers and advanced driver assist systems (ADAS) in autonomous vehicles, leading to bodily harm or even potential fatalities. Some IoT devices are designed to operate for years and may be located deep within the fabric of a system without any physical access to them. The battery capacity is directly linked to the number of computations performed during its lifetime. These devices may be involved in the critical operation of a system and are expected to perform their intended functions for a fixed period of time. Implementing rigorous security measures can shorten the expected battery life of these devices and create unwanted disruptions in the safe and reliable operation of the system. §.§.§ Security vs. Cost: There is a cost associated with every available security mechanism. Many designers tend to focus on functionality, and many companies are only interested in short-term profits <cit.>. Many developers disregard security altogether, and others develop devices with cost-effective security measures that offer little to no effective protection. Some issues are also related to the consumers' priorities and their lack of concern about personal privacy. Even when some security measures are offered as a cost option, the buyers of the product often tend to go for the cheapest option with no built-in security. §.§ Challenges due to the large scale of devices With the possibility of hundreds of billions of devices being connected to the internet arises the issue of managing the scalability of the IoT ecosystem <cit.>. The following are a few security issues that arise from a large number of devices and their increased connectivity. §.§.§ Increased Attack Surface: As IoT architectures are equipped with more and more embedded devices and network connectivity, the attack surface continues to expand. This aggregation of network-connected devices so far has led to the development of a range of hardware and communication technologies, and consequently varying vulnerabilities resulting from both. Every single exposed node in the network can act as an entry point for attackers during communication, posing the risk of data interception and modification during unencrypted transmission. A single compromised device can lead to vulnerability propagation through other linked components of the network, and the entire network may end up being compromised.
As an added problem, many of these devices come from third-party manufacturers, which makes ensuring and maintaining security throughout the entirety of the supply chain an additional concern. §.§.§ Secure Authentication and Authorization: A secure IoT system must validate the identity of the user before allowing them to access the system. In the IoT architecture, especially at the application layer, the numerous entities participating in data exchange make identity authentication and trust management a complicated task <cit.>. Issuing individual certificates for each object in IoT may be infeasible, and the absence of a global root certificate authority (CA) hinders designing an authentication system for IoT <cit.>. Ad-hoc networks such as the mobile ad-hoc networks (MANETs) and vehicular ad-hoc networks (VANETs) present the authentication problem of entities quickly entering and leaving the network. Furthermore, the presence of a wide variety of applications and devices renders the creation of an exhaustive security policy and managing access permissions difficult. For example, due to the high data integrity and authenticity concerns, strong cryptographic mechanisms such as public key cryptosystems are desirable. However, they can lead to significant computational overhead, and mechanisms using cryptographically pre-shared keys are not applicable as the rapidly growing number of devices can overwhelm key management <cit.>. §.§.§ Identity Management: Before ensuring security for individual devices, we need to deal with the issue of uniquely identifying an end-point in a scalable manner. As devices will be involved in information exchange with consumers and controllers concurrently, establishing appropriate identity controls and trust relationships between entities is crucial to maintain data privacy and exclusivity <cit.>. Traditional naming systems for uniquely identifying a host, such as the Domain Name System (DNS), are insecure and remain vulnerable to attacks such as DNS cache poisoning or man-in-the-middle attacks <cit.>. Although DNSSEC, the security extension of DNS, can provide integrity and authentication security, the high computation and communication overhead makes it unsuitable for deployment in IoT. Limitations of the IPv4 internet led to the development of IPv6 as one of the prominent enablers of IoT, which underscores the need for a transition from legacy platforms to an IP-enabled infrastructure <cit.>. With the emergence of IPv6, each device can have its own unique ID and support auto-configuration. This gives IoT devices the ability to address other devices on the network individually and improves performance by a great margin. However, the lack of backward compatibility with IPv4 and the high cost of changing internet service provider (ISP) infrastructure are still challenges that we need to overcome. Additionally, the ISPs gaining greater visibility of network traffic compromises net neutrality and raises privacy concerns. §.§ Challenges due to the heterogeneity of devices The IoT is an evolving ecosystem comprised of a large number of heterogeneous devices. With its rapid growth comes the need to address the interoperability and portability challenges as well as the call for security across this wide array of devices, as discussed below. §.§.§ Collision of Security Paradigms: The design of a security mechanism for any system typically follows one of two well-established security paradigms.
In paradigm A, we secure the product before it enters the market by implementing secure design mechanisms, security testing, certification, and licensing. This approach is suitable for products for which patching is impossible or difficult, such as autonomous vehicles, thermostats, refrigerators, etc. Conversely, the goal of paradigm B is to make security agile. In this case, we get the product to market as fast as possible and apply patches, updates, and mitigations to secure the device when needed. This method works well for devices that will continue to have new vulnerabilities and for which applying patches is easy enough to do. Devices that fall into this category include smartwatches, smartphones, and laptops, among others. When it comes to IoT devices, we are starting to see a collision between the two paradigms. Novel smart devices that fall into paradigm B are being increasingly connected to devices or infrastructures that must be secured under paradigm A. A typical example is the advanced metering infrastructure (AMI), where smart meters (paradigm B) are connected in two-way communication to legacy systems (paradigm A) such as electric grids or windmills to create a smart grid. This connection inherits the vulnerabilities resulting from both types of systems and complicates the system design and how security should be provided for the entire system. How to come up with a hybrid system that manages a compromise between both paradigms is still unclear. §.§.§ Interoperability and Portability: Ensuring cyber and physical security for devices entails dealing with the heterogeneous hardware in IoT. WSNs, for example, can present combinations of heterogeneous sensors and actuators with general-purpose computing elements <cit.>. This creates interoperability issues and the need for easier porting of applications across different hardware. However, many current IoT OSs are designed with specific hardware in mind. A high emphasis on security may hinder the creation of interoperable operating systems. For example, while Mbed OS from Arm provides high end-to-end security, it supports only a small number of platforms (5 so far) <cit.>. This necessitates the development of an IoT operating system (OS) that can provide high-level application programming interfaces (APIs) to remove hardware dependency and support interoperable security protocols. §.§.§ Lack of Standards and Guidelines: To develop IoT architectures with an emphasis on security, we need properly formulated, implemented, and enforced security requirements and policies throughout their life-cycle <cit.>. This task of creating a universal standard is challenging because these standards would need to consolidate security policies and requirements for the rapid growth of IoT devices with a high degree of variance. The fact that the standards would need to evolve depending on the current and emerging needs of the stakeholders, such as the government, industry, and users, necessitates frequent revision of the requirements. While various standards for resource-constrained IoT devices on the network level, such as those standardized by the IETF (e.g., Bluetooth Low Energy (BLE), IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN)), are available, there is still a need for standards and guidelines for regulating the development of IoT devices.
Compliance with the IEC 61508 CMV: Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems Standard[https://webstore.iec.ch/publication/22273] provides assurance that the system delivers the risk reduction necessary to achieve adequate security for its safety-related functions. A similar standard addressing the cybersecurity requirements and relevant set of guidelines for developing IoT devices is crucial for the development of a security-centric mindset for system designers, product developers, and consumers alike. §.§.§ Diversified security needs: IoT is a massive collection of devices that vary in size, application, hardware, suppliers, and resources available to those devices. The problem with providing security to all these different types of devices is the fact that security is subject to the type of device or system and varies radically from application to application. One of the prominent challenges would be the establishment of an architecture capable of handling the scalability of billions of IoT devices with varying trust relationships in the fabric <cit.>. As mentioned before, a single security paradigm is not enough to deal with this heterogeneity. The secure development of an IoT architecture thus needs to account for the heterogeneity of all the devices, secure communication between those devices, and the control and management of firmware updates necessary to enable that. §.§ Challenges due to Innate Design The inherent design of IoT devices, as well as the automated environment, leads to both cyber and physical security challenges in IoT development. Some of these challenges are described below. §.§.§ Increased Lifetime of Devices: Cryptographic algorithms have a limited lifetime before they are either broken or need to be reworked <cit.>. Many IoT devices are designed for increased lifetime and can continue to operate for years (e.g., smart meters can continue to function for more than 40 years) on a single battery. These long-lasting devices may outlive the lifetime of cryptographic algorithms, leading to obsolete security measures. Devices may then carry undetected malware that can persist for years before contaminating other vulnerable devices connected to it <cit.>. §.§.§ Lack of Human Administration: Most of the IoT security mechanisms are automated and require little to no human involvement <cit.>. This lack of human administration makes detection of undiscovered vulnerabilities improbable. Unlike IT systems, vulnerabilities in IoT systems can take several months to be discovered and can cause both primary damage (direct damage from the incident occurrence) and secondary damage (damage as it spreads through the system). We also may not have any physical access to some of the embedded IoT devices hidden deep within the fabric of the system. That means we might not have the physical means to stop or perform a factory reset in case of a malfunction or a compromise. Therefore, developers must provide comprehensive built-in security mechanisms and a procedure for sending firmware updates to all the vulnerable models. Devices without an update mechanism must protect the initial configuration from tampering, theft, and other forms of compromise throughout the lifetime of the product <cit.>. §.§.§ Physical Security of Devices: In addition to lacking proper cyber protection, a wide variety of IoT devices, e.g., security cameras and doorbells, are located in places that lack physical protection.
There is a possibility of intentional or inadvertent displacement of fixed IoT devices and the theft of mobile devices <cit.>. These devices are extremely vulnerable to being controlled or replaced with spoofed devices by adversaries, which can then be used to perform reconnaissance and further attacks. To illustrate, the OBD-II port in an autonomous vehicle, typically used to generate diagnostic information, is also accessible and exploitable by an attacker. This situation underscores the fact that we must always maintain complete control over all the embedded devices in an IoT system and the connected infrastructure, which is a challenging task. §.§ Challenges due to the Developers' Stance The following discussion explores some of the security issues from the developers' angle, their priorities, and the limited opportunities presented to them. §.§.§ The Developers' Angle: Some security vulnerabilities also relate to the developers of IoT devices. Device manufacturers are strongly incentivized to produce and get their products to the market as fast and cheaply as possible. Many third-party developers view security as an optional addition, and thus security measures are the first things to get abandoned in times of budget constraints and tight schedules. This results in chips being developed with a focus on functionality and little attention to security mechanisms or application of security patches. Once a product enters the market, the end-users may have little to no means of patching the device. The typical user's lack of security awareness and knowledge, along with the carelessness and error-prone nature of humans, exacerbates this situation and leaves millions of active internet-connected devices vulnerable to attacks. Most of the developers are not security experts, and thus their designs tend to have less focus on the security aspect. Although some developers understand the necessity and actively try to ensure security for their devices, the inherent complexity of the vast IoT network, coupled with inadequate documentation of existing IoT vulnerabilities, thwarts their efforts. The fact that the implemented security mechanisms need to be maintained throughout the lifetime of the product creates reluctance and ultimately dissuades many developers from putting any security measures at all in their devices. §.§.§ Designed-in Security: Many IoT devices are built with only their features and connectivity in mind. To build an adequately secure system, security measures, monitoring, and recovery mechanisms have to be built-in and continued throughout the lifetime of the product. This requirement underscores the fact that not only do we have to develop individually secure IoT devices, but we must also ensure the security of existing networks and the connected infrastructure throughout the development process <cit.>. Designed-in security also has to start from the inception of development. If security controls are not built in parallel from the earliest phase of the development lifecycle, it becomes very complex to implement them mid-way or after development. This is especially true for the ever-changing IoT ecosystem, where the integration of novel technologies will mandate modifications in the system specification.
These requirement specifications can be changed during the development process, but changing an entire system usually involves incurring substantial costs. §.§.§ Malicious and Deliberate Backdoors: IoT devices typically lack security mechanisms such as an intrusion detection system (IDS) or antivirus to detect and remove software vulnerabilities. The reason behind this is that these mechanisms require real-time scanning that can result in unaffordable overhead in resource-constrained IoT devices <cit.>. Attackers can exploit this weakness to plant a backdoor in a vulnerable device to monitor and manipulate its operation. Additionally, many vendors intentionally insert backdoors in their software products for purposes of collecting usage information, management, and testing. However, an intelligent adversary can examine the code and employ reverse engineering technologies to discover the backdoor, leading to the theft and misuse of private data. § ROLE OF AN OPERATING SYSTEM At the heart of resolving many of these security issues lies the secure development of an OS designed specifically with IoT resource constraints in mind. These requirements entail the development of a lightweight OS capable of running with low energy consumption and very small memory footprints while still providing rich abstractions for execution environments <cit.>. The source code of the OS has to be small enough to fit both the memory and RAM usage requirements while still allowing room for applications to run on top of the system. The OS should be able to take full advantage of the power-saving modes available to the device applications and network protocols in order to conserve energy. Although the OS cannot stop the rapid growth of IoT devices, it can streamline and direct an OS-centered development of products by providing a suitable unified platform. To accommodate the manifold devices and their requirements, the OS can provide software primitives to enable simple hardware-independent application development and APIs to allow ease of integration with the wide variety of IoT hardware and use cases <cit.>. The issue of portability to new platforms and the need to account for various peripheral devices mandate the need for a flexible hardware model and potentially a hardware abstraction layer. The OS can tackle communication interoperability issues by supporting multiple network protocols such as Zigbee, IEEE 802.11, BLE, 6LoWPAN, 3G, 4G, etc. Due to resource constraints, some software can be run on the cloud, with the OS coordinating and managing its execution. In IoT, diversified hardware platforms and customized operating systems make the generation of security awareness and development of a unified security solution difficult <cit.>. Once again, an OS can be a secure platform that can facilitate and guide the development of secure products. A unified OS may provide security both on the hardware and software levels by providing features such as secure storage, secure boot, and a secure execution environment (SEE), among others. Secure transmission of data can be ensured by the inclusion of secure communication protocols through an API. Additionally, it can simplify the job of the developers by providing support for standard programming languages and mature software development kits (SDKs) to allow easier application writing, testing, and verification.
The OS can implement firmware updates in the device and manage the signing, validation, and revocation on the cloud side. We need to see IoT OS and device development from a different perspective as IoT development migrates and gets increasingly involved in safety-critical services or infrastructures. As an example, the automotive industry goes through rigorous checking, verification, and validation because of its safety-critical nature. As people get more concerned about the large-scale deployment of IoT, individual hardware and software would need to meet higher engineering requirements before they can be released to the market. This entails the development of IoT-focused standards and guidelines that can be centered around the development of a unified OS and the features that it provides. As security is dynamic over the lifetime of a product, security has to be designed into the OS, and continuous enhancement of the security features through timely updates is desired. However, product designers are presented with various OSs and hardware architectures to choose from, often too many. The limited availability of information can often create confusion and overwhelm the developer, leading to a subpar choice. If there were a properly documented definitive OS, certified by international standards to be safe and secure, developers could reliably choose that OS to leverage the in-built protection mechanisms while implementing their own and applicable third-party security on top of it. This would result in a smaller but more focused development of a unified platform, much like Linux, that can ultimately fulfill the needs of both the IoT industry and individual developers alike. Our ultimate vision is to architect a highly secure platform from the ground up. To this end, we aim to direct the secure development of a unifying OS that can support the needs of the existing and emerging IoT devices while providing built-in measures to tackle the security challenges in the IoT domain. § SECURITY REQUIREMENTS In order to build an IoT OS capable of addressing the security challenges discussed above, we first need to define the security requirements to be considered during the development phase. These requirements can vary heavily depending on the system stakeholders' needs, concerns, objectives, and the system architecture under consideration. In light of the secure design principles, as delineated in the US National Institute of Standards and Technology (NIST) Special Publication 800-160 (<cit.>), and effective approaches adopted by some IoT OSs, we specify the key security requirements to direct the development of a unified IoT OS below. §.§ Open Source This requirement follows directly from the secure design principle of "Open Design". The principle emphasizes the transparency of the security mechanism and states that the security of a system should not depend on the secrecy of its design or implementation. This contradicts the idea of "security through obscurity", where the strength of the security depends on the user's ignorance of how the security mechanism works. IoT, as an emerging technology, will include a wide variety of knowledgeable users and attract attention from resourceful attackers. If these users or attackers can grasp the mechanism's workings, they can crack the security of the system.
This has been apparent in Microsoft's proprietary Windows OSs and their numerous vulnerabilities. This leads us to the resolution that the development of an adequately secure OS should follow an open-source design. The license of the OS source code should be either copyleft (e.g., Linux) or permissive (e.g., under a BSD or MIT license) with a high degree of freedom. This approach can boost the number of contributors and reviewers, resulting in better bug fixing, and thus more secure, higher-quality code <cit.>. Copyleft licenses, in particular, can facilitate the development of an integrative community around a common code base, and ultimately balance the protection of end-users while supporting industry, as exemplified by Linux <cit.>. §.§ Isolation of Resources In a typical flat security model, a single vulnerability can expose the whole system to further exploitation. This model results in a fairly large attack surface, as an attack affecting any part of the OS can further compromise other parts, potentially taking control of the entire device. To reduce this attack surface, we need to completely isolate the OS resources into sections that contain the security-sensitive resources and sections that do not. This approach is in accordance with the design principle of "Isolation" and can play a major role in endorsing the segregated design of an IoT OS. The private or critical resources segment can comprise the resources and functions that are security-critical, such as secure storage (containing cryptographic keys and credentials), critical code sections, firmware updates, etc. The other segment, namely the public or uncritical resources segment, should then contain resources and functions that are not directly related to the security of the OS. Examples of these are network stacks, application protocols, device management, and diagnostics. This makes the attack surface as small as possible and restricts the potential damage done once a vulnerability has been exploited. The two isolated regions should be different in how they access the memory and address program execution. For one, the public side should never be able to write directly to flash memory. The OS should provide APIs that allow operations to the private segment, but the critical resources should never be directly accessible to the public segment. Cryptographic keys, for example, should never be allowed to leak into the public segment. Additionally, the private side should provide a secure runtime environment for secure code execution and leave a small memory footprint. Finally, the private side can verify the integrity of device data and perform a clean reset if any sign of compromise has been found. In this way, even if an attack on the public side is successful, the security-sensitive data remain safe. §.§ Customized Development Approach The isolation of the public and critical resources discussed in the last section allows us to use a customized development approach tailored to the needs of each section. In light of the rapid growth of IoT devices, the public section would need much faster development. With the target of getting devices to the market as early as possible, we must allow developers to adopt an iterative approach and quicker product development and delivery. While this approach can potentially open the systems to vulnerabilities, attacks on the public side cannot affect the private side of the OS, keeping the system secure.
This meets the faster development requirement for IoT without compromising the security of the device. The goal for the private side, however, is the protection of critical resources and the steady, measured, and exhaustive development of security mechanisms. As code with a certain degree of complexity can rarely be expected to be bug-free, the development of this section will need to account for constant testing, reviewing, bug fixing of existing code, and patching vulnerabilities resulting from new features. As security issues become more debated, compliance with updated standards and guidelines will also be an issue. All these requirements mandate that the development of the code on the private section be slow, stable, and rather unchanging, ultimately resulting in a mature and high-quality codebase through constant review and revision. §.§ Hybrid Programming Model The programming model in an OS defines the way an application developer can model the programs for that OS. Typical programming models used for IoT OSs are event-driven systems and multi-threaded systems <cit.>. In an event-driven system, each task detects and reacts to triggers from an external event, such as an interrupt or stimulus. In contrast, a multi-threaded system can allow multiple threads of concurrent execution of a program. Each task in this model can run in its own thread context, and an inter-process communication (IPC) API allows communication between the tasks. Each thread requires its own stack, and the stack typically has to be over-provisioned, consuming memory resources <cit.>. Purely from a performance perspective, event-driven systems can share the limited resources between all processes (by using the same stack) and never run into concurrency problems, as illustrated by their use in multiple WSN OSs, including Contiki and TinyOS. However, more complex and novel IoT applications may require capabilities of a fully-fledged OS to ease application design, i.e., multi-threading <cit.>. From a security viewpoint, however, multi-threading poses some serious issues. First of all, it introduces increased complexity and requires expertise from the application developers, making error identification and testing much more complex. Locking mechanisms are needed to prevent concurrently running threads from modifying shared resources, and mishandled concurrency can lead to unfamiliar or unwanted behavior. On the other hand, event-driven systems present a simple design and easier testing and verification of the model. Contiki is based on an event-driven model but still provides optional preemptive multi-threading support for programs with explicit needs <cit.>. To achieve maximum security while leveraging the benefits of multi-threading, a hybrid model similar to that of Contiki may be adopted. §.§ Simplification of Security Choices From the secure design principle of "Economy of Mechanism" follows the idea of streamlining the development of IoT. Adherence to this principle means architecting the OS in a way that simplifies the job of the developers rather than letting each developer come up with a security mechanism for themselves. For example, we can limit the number of security choices offered by the OS architecture to simplify the design, allowing for simpler and more rigorous testing. The goal is to support the limited security knowledge of the developers by providing them with simplified design choices and minimal, but critical information.
The reduction of options will result in fewer assumptions made by developers, ultimately leading to an effective security architecture with fewer risks. With the vision of building a secure and better platform from the ground up, this meets the need to support the numerous developers involved in key roles in the development of the IoT industry. §.§ Standard Programming Language and Code Maturity In designing an OS for IoT devices, we typically have the choice to select (i) a standard programming language (typically ANSI C or C++), or (ii) an OS-specific language or dialect <cit.>. Selection of a dialect, such as nesC (a dialect of C) used by TinyOS, involves a steeper learning curve but can support system performance and safety through enhancements missing in low-level languages like C. Standard programming languages, on the other hand, can allow for a near-zero learning curve and support the use of well-documented and mature tools. This increases code reviewability and allows simpler debugging, following the two design principles of open design and a simplified mechanism. The maturity of the code - typically defined by the age, documentation per lines of code, and the size of the development community - can also be a good indicator of a high-quality and secure implementation of the OS. §.§ Targeted Mediation The design principle of "Complete Mediation" states that every access to the operations of the device must be checked against the access control mechanism. To fully implement complete mediation, the OS should limit the caching of information and check the access control policy every time an entity requests access to system resources. While this method strongly augments security, it is very resource-intensive. An OS for resource-starved IoT devices should thus take a different approach to provide mediation. Rather than providing full mediation for every single access, the OS should provide mediation for all security-sensitive operations. This will allow the OS to efficiently perform public operations in real time with little to no performance overhead while still providing adequate security for operations on critical system resources. This approach requires redefining the security requirements and revising the access control policies. §.§ Compliance with IoT-centric Standards and Protocols The distinct needs of application development and IoT device constraints lead us to the resolution that a single OS may not be capable of fulfilling all of these requirements. This foregrounds the need for standardized protocols to improve interoperability and portability of applications across varying hardware and platforms. RIOT, for one, uses a POSIX-like API to achieve interoperability for all of its supported hardware (from 16-bit microcontrollers to 32-bit processors) and is aiming towards achieving full POSIX compliance <cit.>. However, compliance with traditional standardized APIs such as full POSIX compliance can only be achieved by a few PC operating systems and may not be suitable for IoT OS development <cit.>. On the other hand, we also need standard interoperable security protocols for the network layer. Datagram Transport Layer Security (DTLS), standardized in 2006 by the IETF to reduce power consumption and delays while maintaining similar security protections as TLS, can be a key enabler of secure IoT connectivity.
While work focused on standardization has indeed progressed over the past few years, many works are either incomplete or are ahead of product implementation, resulting in severe complexity in the integration of security libraries into developers' hardware of choice <cit.>. Therefore, the development of developer-friendly IoT-centric open standards both on the system level and the network level is imperative for stable and secure IoT OS development. §.§ Support for Lightweight Cryptography In light of the constraints of IoT devices, typical cryptographic mechanisms can prove to be too resource-intensive and expensive <cit.>. Therefore, there is a need to implement a new optimized and lightweight cryptographic scheme designed specifically for resource-constrained embedded devices. This cryptographic scheme would need to ensure a high level of security while keeping memory, power, and execution time requirements minimal <cit.>. Elliptic-curve cryptography (ECC) uses much smaller key sizes compared to non-EC cryptography to provide equivalent security, making it a suitable scheme for embedded IoT devices. The cryptographic API of the OS should support security services for signing, validation, and revocation of existing and emerging lightweight cryptographic schemes. §.§ Support for Common Security Mechanisms With strict IoT constraints in mind, the OS should provide as many of the common security mechanisms available for traditional OSs as possible. First of all, the OS should provide secure storage to enable the physical protection of cryptographic keys and other security-sensitive data. The secure storage should also be persistent (e.g., on-chip ROM memory) over the lifetime of the product to prevent loss of data during power cycles <cit.>. Various debugging and diagnostic interfaces, e.g., the JTAG interface used to debug errors during manufacturing and development, are potentially exploitable and must be secured. The OS can also adopt a secure booting mechanism to bring the device to a familiar and trusted state after system startup. This is similar to the verified boot[https://source.android.com/security/verifiedboot] feature in Android, where a full chain of trust is established during device boot up, and the integrity and authenticity of the next stage are verified by each stage before execution is handed over. As discussed before, an isolated SEE can provide additional security by running any operation requiring security-sensitive data in a trusted environment. As long as the computation overhead is small enough, security modules, such as MiniSec for TinyOS, can also be implemented on top of the OS architecture to achieve targeted protection <cit.>. During firmware updates, rollback protection should be enabled to stop devices from downgrading to older versions and preclude the possibility of persisting vulnerabilities. § MODERN IOT OSES In this section, we examine some of the prominent IoT OSs and compare and contrast their approaches to architecting an OS with the security requirements discussed in the previous section. We also briefly explore some of the OS-specific in-built security approaches and security evaluations performed in the literature. §.§ Contiki Contiki was originally developed in 2002 with the goal of supporting WSNs running constrained 8-bit MCUs <cit.>. Contiki is open source and is available under a BSD license on GitHub[https://github.com/contiki-os/contiki] and various other platforms.
Development of Contiki has focused on power efficiency and lightweight memory management, typically requiring only 2 kilobytes of RAM and 60 kilobytes of ROM for configuration <cit.>. Contiki supports the dynamic loading and unloading of individual applications or services at runtime. As individual application binaries are smaller than the entire system binary image, this allows Contiki to use less energy and less time for transmission of the binary image through the network. This also allows multiplexing the hardware of a sensor network across multiple applications or users. Contiki has received frequent research attention from academic users, and over the years has turned into one of the most widely used OSs for WSNs, capable of running on 16-bit and modern ARM 32-bit MCUs. Contiki uses a hybrid programming model. The base system runs on an event-driven kernel, and preemptive multi-threading is implemented as an application library on top of that <cit.>. The choice of using the multi-threaded model is optional, and applications that specifically require such a model of operation can be linked to the library. Contiki is written primarily in C, while still providing runtime environments for implementation in Java and Python. A large variety of independently developed forks exist, including many that are closed source, resulting in both highly mature and experimental codebase sections. Contiki uses the Cooja/MSPsim simulator to support debugging and various testing mechanisms, including unit testing, regression testing, and full-system integration testing <cit.>. It also provides features such as a shell, a file system, a database management system, and cryptographic libraries, among others. Additionally, Contiki supports many lightweight communication standards, including IEEE 802.15.4, 6LoWPAN, CoAP, MQTT, TSCH, and RPL. Its core IPv6 has been certified with a silver certification in the IPv6 Ready Logo Program <cit.>. Finally, the core system provides a basic abstraction, featuring a hardware-independent software infrastructure for the application-driven nature of the heterogeneous sensor devices. McBride et al. in <cit.> perform a security analysis of Contiki using static program analysis tools. In static analysis, the code is examined for errors without execution to identify evasive bugs and unwanted program behavior. The analysis shows that although the number of potential bugs has increased across different Contiki releases, the average bug density (number of bugs per thousand lines of code) has consistently decreased over time. They also identify two major vulnerabilities (a use-after-free vulnerability and a persistent cross-site scripting (XSS) attack) and a few minor issues, leading to their documentation and patching. The authors of <cit.> examine the performance characteristics of different security primitives, such as block ciphers, cipher-based message authentication codes (CMAC), etc., under Contiki. Additionally, they present ContikiSec, a secure network layer for Contiki, designed for secure transmission over wireless sensor networks. ContikiSec supports a configurable design centered around three security modes that focus on preserving confidentiality, integrity, and authentication.
The design aims to provide additional security while balancing low energy consumption and small memory footprints, with the strongest security mode (ContikiSec-AE) consuming approximately 15% more energy compared to Contiki running in default mode. §.§ TinyOS TinyOS, developed since 2000, is one of the first and most widely used OSs in WSNs. The core OS can fit within a memory of only 400 bytes, and many applications require only 16 KB of memory, capable of supporting the efficient, low-power operation of complex, concurrent programs <cit.>. The OS is implemented in a C dialect called nesC, which puts some restrictions on C (e.g., on the use of function pointers) to improve code efficiency and robustness. The source code of TinyOS is available under the BSD license on GitHub[https://github.com/tinyos/tinyos-main], the complex and customized nature of which precludes the formation of a bigger community around TinyOS <cit.>. TinyOS follows a purely event-driven model, supporting modules of components with a high level of sophistication and optimization; optimized code can be faster and smaller than even original hand-written code in C <cit.>. nesC provides automated static race detection, removing the concern about bugs generated due to concurrency during program composition <cit.>. Various simulators in TinyOS, e.g., TOSSIM, Viptos, QualNet, Avrora, EmTOS, etc., can be used to simulate different implementations and applications of TinyOS. TOSSIM, in particular, can be used to analyze TinyOS at a very basic level and find numerous bugs in its source code and its various applications <cit.>. TinyOS also facilitates the security assessment of WSNs by allowing the study of common attacks (e.g., wireless injection attacks, DoS attacks, man-in-the-middle attacks) on TinyOS. The authors in <cit.> present TinySec, a link-layer cryptographic solution for TinyOS, designed to fit the resource constraints and security needs of IoT. TinySec has very low impacts on bandwidth and latency and features minimal energy overhead, as different modes of security require anywhere from 3% (TinySec-Auth, least secure mode) to 10% (TinySec-AE, most secure option) overhead in sensor network applications. The block ciphers used in TinySec are RC5 and Skipjack due to their speed and suitability for software implementation on embedded microcontrollers, and TinySec provides ease of switching between both. TinySec has been widely implemented in WSNs, including implementations in custom hardware, and has paved the way for the development of many other security projects such as TinyPK, TinyCrypt, and SecureSense, among others. MiniSec is an open-source general-purpose security module designed for TinyOS to offer high levels of security within the energy consumption and memory constraints of wireless sensor nodes <cit.>. It was designed to leverage the low energy consumption of TinySec and the high levels of security provided by ZigBee. Two available communication modes, single-source communication and multi-source broadcast communication, allow MiniSec to achieve high levels of data secrecy, authentication, and replay protection. MiniSec-B (used for broadcast communication) always outperforms TinySec and requires as little as 1/3 of the energy consumed by TinySec. §.§ RIOT One of the newer members of the IoT OS family is RIOT, which has received significant academic research attention and grassroots community support since its emergence in 2012.
Part of the reason for this popularity is RIOT's IoT-centric design and a developer-friendly API, allowing for standard programming in C or C++ and the use of well-established development and debugging tools such as GCC, GDB, and Valgrind <cit.>. Applications can also be developed under Linux or Mac OS using the native port, and are highly portable, running seamlessly on a wide range of hardware, including 8-bit, 16-bit, and 32-bit platforms. RIOT is open-source, and the source code is available under the GNU Lesser General Public License (LGPLv2.1) on GitHub[https://github.com/RIOT-OS/RIOT]. Its energy-efficient design requires a minimum of only 1.5 kB of RAM and 5 kB of ROM for execution. RIOT has achieved partial POSIX compliance and is working towards attaining full POSIX capabilities. To the best of our knowledge, RIOT development so far has focused more on its real-time and multi-threading support for IoT as opposed to security. Its modular microkernel structure provides minimal isolation by preventing bugs in a single component (e.g., a device driver or the file system) from harming the whole system <cit.>. RIOT provides full multi-threading support following the classical multi-threading concept, with memory-passing inter-process communication (IPC) between threads and minimal computational and memory overhead (<25 bytes per thread) <cit.>. Applications can create as many threads as needed, limited only by the memory and the stack size available for each thread. §.§ FreeRTOS FreeRTOS development evolved around the goal of creating a real-time operating system (RTOS) to cater to the real-time needs of industrial and commercial contexts <cit.>. Developed since 2002, FreeRTOS is available under the MIT open-source license and enjoys support from many professional and community contributors. Its tiny kernel (memory footprint can be as small as 9 kB) is capable of supporting more than 40 MCU architectures, some of which also provide a tick-less power-saving mode, ensuring high portability and energy efficiency. FreeRTOS supports a multi-threaded programming model. The OS itself is written in C while providing seamless C++ application support and an IDE <cit.>. With no networking capabilities built in, FreeRTOS has to depend on additional tools and libraries available to its ecosystem <cit.>. While these resources are available, this creates a heavy reliance on third-party providers for network stacks as well as for testing and debugging purposes. Detailed IoT-specific pre-configured demos and references allow developers to take advantage of libraries to establish a secure connection to the cloud. A fork of the FreeRTOS code base, SafeRTOS, was developed with significant considerations for safety and supports a wide range of international development standards. §.§ Mbed OS Mbed OS from Arm is designed exclusively with IoT implementation in mind to facilitate the development of connected products based on the Arm Cortex-M microcontroller <cit.>. It is a free, open-source OS under the Apache 2.0 license developed primarily by Arm, alongside its partners, and numerous individual developers around the world. Mbed OS is written using the C and C++ programming languages. It features automated inclusion of modular library structures and an online integrated development environment (IDE) for ease of program development. It has an RTOS core, enabling deterministic and multi-threaded real-time software execution.
Mbed OS has achieved a Thread certification and supports many lightweight communication protocols, including BLE, Thread, 6LoWPAN, Mobile IoT (LPWA), Ethernet, and WiFi. The development of Mbed OS emphasizes high end-to-end security by implementing security mechanisms in device hardware, software, and communication, and maintaining them throughout the device lifecycle. It features hardware-enforced isolated security domains at the lowest level of the OS to restrict access to memory and peripherals. The platform security architecture (PSA) implementation of Mbed OS, illustrated in figure <ref>, shows the isolation between a secure processing environment (SPE) and a non-secure processing environment (NSPE). The SPE contains cryptographic assets, credentials, and critical code sections, and provides an SEE for the execution of security functionalities <cit.>. The NSPE, on the other hand, contains application firmware, the OS kernel, libraries, and other nonsecure hardware resources. The secure partition manager (SPM) is a PSA-compliant software hypervisor that manages the isolation on a hardware level and provides standardized APIs to ensure secure IPC between the SPE and NSPE. Correct use of the SPM provides resiliency against persistent malware and prevents secret data from leaking between different modules in an application. The Mbed OS API allows the simple development of portable applications while leveraging its multi-layer security and communication features. Communication security is reinforced by the simple inclusion of secure sockets layer (SSL) and transport layer security (TLS) protocols using an API. § RELATED WORK Zhang et al. <cit.> define the "things" in IoT as physical or virtual objects with connectivity and discuss 7 major areas of ongoing and prospective research work. The authors primarily address the security challenges stemming from the heterogeneity and large scale of IoT devices and networks. Their research emphasizes the need for research on inherited software vulnerabilities and malware and calls for novel IoT-specific and secure object identification, authentication, and lightweight cryptographic protocols. The authors of <cit.> highlight the need for embedded security and propose an embedded security framework for the development of IoT. Their framework is based on 6 key security requirements, featuring a mix of hardware- and software-level security for a hybrid, cost-effective implementation. The authors, however, acknowledge the framework's dependency on precise definitions of parameters such as resource constraints and network and system specifications. In light of the differences between the IoT architecture and traditional IT systems, the authors of <cit.> assert the importance of immediate detection and providing provisional measures to isolate the anomalous section and prevent the spread of damage. They propose a seven-step systematic approach, named the cyber kill chain model, to identify and prevent misuse of built-in OS commands or software. The authors of <cit.> perform a survey of IoT operating systems in which they focus on identifying the communication protocols and software development kits (SDKs) available to each OS. Their research acknowledges the distinctions between traditional and IoT-specific operating systems and underscores the need for additional security measures to build robust WSNs. Hahm et al. <cit.> perform a detailed analysis of the specific requirements that must be satisfied by an OS to run on low-end IoT devices.
They briefly survey the applicable OSs for class 1 and class 2 devices in the IoT domain, focusing on the need to identify a unifying OS for all IoT devices. Additionally, their research specifies key design choices, focusing on distinct technical and non-technical properties concerning the development of an IoT-specific OS. Finally, they perform a detailed case study on representative OSs based on three different categories (event-based OSs, multi-threading OSs, and pure RTOSs), underscoring the resolution that different OSs fit different criteria, and a unifying architecture and capabilities of an OS are yet to be determined. In contrast, our work aims to facilitate the understanding of fundamental security challenges by exploring the various facets of the IoT ecosystem. We also present the role of an OS in addressing many of these issues and identify some key security requirements to support the steady and secure development of IoT devices. Additionally, we survey the modern literature in the IoT OS domain to determine adherence to our security requirements and discuss some security-centric approaches and evaluations. § CONCLUSION AND FUTURE WORK In this paper, we provide a classification of the common and unique security challenges arising from the different aspects of IoT devices and the architecture. More specifically, we focus our study on the complex trade-offs due to resource constraints, and challenges stemming from the inherent design, large scale, and heterogeneity of devices. We also dive into the developers' perspective to try and understand the issues that inhibit them from designing adequately secure products. To the best of our knowledge, an effort to classify all the security challenges in IoT development did not exist prior to our work. We also discuss the pivotal role of an OS in resolving many of the aforementioned challenges and acting as a concrete platform to direct the changes necessary for a better IoT architecture. In addition, we specify security requirements following secure design principles and highlight the principles of "Open Design," "Isolation," and "Economy of Mechanism." We also suggest the use of a hybrid programming model, standard and mature programming languages and tools, and emphasize the need for IoT-specific open standard development. Finally, we survey some of the dominant IoT OSs and compare and contrast the approaches they have taken with our security requirements. We also examine some of the security-centric evaluations done on specific OSs and discuss their findings to gain greater insight into the current state of development for IoT OSs. Our conclusion is that current IoT OS development has a divergent focus, and with the exception of Mbed OS, very few OSs are emphasizing the designed-in security required to architect a secure design platform. Most OSs surveyed in this paper fail to comply with many of the specified security requirements, leading us to the resolution that the secure development of a unifying OS, around which secure development of IoT devices can be centered, is still far away. Our future work will involve a more rigorous specification of the security requirements in compliance with published security standards and guidelines for IoT OS development. The ultimate goal is the development of a security requirement framework to define and direct IoT OS development towards a secure and resilient IoT architecture.
"authors": [
"Alvi Jawad"
],
"categories": [
"cs.OS",
"cs.CR"
],
"primary_category": "cs.OS",
"published": "20231027191907",
"title": "A Survey of the Security Challenges and Requirements for IoT Operating Systems"
} |
Engineering the Kitaev spin liquid in a quantum dot system Sankar Das Sarma January 14, 2024 ========================================================== Contributions for all the authors can be found in Section <ref>. * equal work ^† contact: {hanhu | pengc}@microsoft.com In this paper, we explore FP8 low-bit data formats for efficient training of large language models (LLMs). Our key insight is that most variables, such as gradients and optimizer states, in LLM training can employ low-precision data formats without compromising model accuracy and requiring no changes to hyper-parameters. Specifically, we propose a new FP8 automatic mixed-precision framework for training LLMs. This framework offers three levels of FP8 utilization to streamline mixed-precision and distributed parallel training for LLMs. It gradually incorporates 8-bit gradients, optimizer states, and distributed learning in an incremental manner. Experiment results show that, during the training of the GPT-175B model on the H100 GPU platform, our FP8 mixed-precision training framework not only achieved a remarkable 39% reduction in real memory usage but also ran 75% faster than the widely adopted BF16 framework (i.e., Megatron-LM), surpassing the speed of Nvidia Transformer Engine by 37%. This largely reduces the training costs for large foundation models. Furthermore, our FP8 mixed-precision training methodology is generic. It can be seamlessly applied to other tasks such as LLM instruction tuning and reinforcement learning with human feedback, offering savings in fine-tuning expenses. Our FP8 low-precision training framework is open-sourced at https://github.com/Azure/MS-AMP (aka.ms/MS.AMP). § INTRODUCTION Large language models (LLMs) <cit.> have demonstrated unprecedented capabilities in language comprehension and generation, leading to breakthroughs in reasoning, math, science, and many other tasks <cit.>. However, training LLMs is extremely costly. For example, PaLM takes 6,144 TPUv4 chips to train a 540B model, while GPT-3 175B consumes several thousand petaflop/s-days of compute for pre-training <cit.>. This motivates the need to reduce the training costs of LLMs, especially for the scaling of next-generation super-intelligent models. Low-precision training is one of the most promising directions to reduce the costs, as it can provide high speed, small memory footprint, and low communication overhead. Most existing training systems, e.g., Megatron-LM <cit.>, MetaSeq <cit.>, and Colossal-AI <cit.>, train LLMs with either FP32 full-precision or FP16/BF16 mixed-precision by default. This is not essential, however, to achieve full accuracy for large models. With the release of the Nvidia H100 GPU, FP8 is becoming the next-generation datatype for low-precision representation <cit.>. Theoretically, FP8 can achieve 2× speed-up, 50%-75% memory cost savings, and 50%-75% communication savings compared with current 16-bit and 32-bit floating point mixed-precision training, which is very promising for scaling up next-generation foundation models. Unfortunately, the current support for FP8 training is rare and limited. The only usable framework is the Nvidia Transformer Engine (TE) <cit.>, but it applies FP8 solely to GEMM computation and still retains master weights and gradients in high precision, e.g., FP16 or FP32. As a result, the end-to-end speed-up, memory and communication cost savings are very limited, which does not fully unveil the power of FP8.
To address this issue, we propose an extremely optimized FP8 mixed-precision framework for LLM training. The core idea is to infiltrate FP8 compute, storage, and communication into the whole progress of large model training, making the forward and backward pass all used the low-precision FP8, thus largely reducing system workloads compared to previous frameworks <cit.>. Specifically, we design three optimization levels that utilize FP8 to streamline mixed-precision and distributed training. The three levels gradually incorporate 8-bit collective communication, optimizer, and distributed parallel training in an incremental manner. The higher optimization level indicates using more FP8 during LLM training. Moreover, for large-scale training, such as GPT-175B trained on thousand of GPUs, our framework provides FP8 low-bit parallelism, including tensor, pipeline, and sequence parallelism, paving the way to next-generation low-precision parallel training.Training LLMs with FP8 is non-trivial. The challenges stem from issues such as data underflow or overflow, coupled with quantization errors arising from the narrower dynamic range and reduced precision inherent in FP8 data formats. These challenges cause numerical instabilities and irreversible divergences throughout the training process.To tackle them, we propose two techniques:precision decoupling and automatic scaling for preventing the loss of critical information. The former one involves decoupling the influence of data precision on parameters such as weights, gradients, optimizer states, and assigning reduced precision to components that are not precision sensitive. The latter one is to preserve gradient values within the representation range of FP8 data formatsthrough the dynamic adjustment of tensor scaling factors, thereby alleviating underflow and overflow occurrences during all-reduce communication. To validate the proposed FP8 low-precision framework, we apply it to GPT-style model training, encompassing both pre-training and supervised fine-tuning (SFT). The experimental results demonstrate the effectiveness of our FP8 methodology, yielding substantial benefits including a 29% to 39% reduction in real memory usage (e.g., 29% reduction for GPT-7B while 39% for GPT-175B ) and a notable63% to 65% decrease in weight-related communication overhead compared to the prevalent BF16 mixed-precision training approach. Without changes to any hyper-parameters, such as learning rate and weight decay, the models trained using FP8 exhibitperformance equivalency to those employing BF16 high precision, both in pre-training and downstream tasks. It is noteworthy that during the training of GPT-175B model, our FP8 mix-precision framework reduces training time by 37% compared to TE <cit.>,while consuming 42% less memory on H100 GPU platform.More importantly,the reduction in costs achieved through the utilization of low-precision FP8 can be further increased, as the scale of models continues to expand, which is presented in Fig. <ref>.For fine-tuning, we employ FP8 mixed-precision for instruction tuning and reinforcement learning with human feedback (RLHF) to better align pre-trained LLMs with end tasks and user preferences.Specifically, we fine-tune pre-trained models on publicly user-shared instruction-following data <cit.>. 
The models tuned with our FP8 mixed-precision demonstrate comparable performance to those utilizing the half-precision BF16 <cit.> on the AlpacaEval <cit.> and MT-Bench <cit.> benchmarks, while achieving 27% improvements in training speed. Moreover, FP8 mixed-precision exhibits considerable potentials in RLHF, a process that necessitates loading multiple models during training.Through the utilization of FP8 in training, the prevalent RLHF framework AlpacaFarm <cit.> canyield a 32% reduction in model weights and a 62% reduction in optimizer states' memory consumption.This further demonstrates the versatility and adaptability of our FP8 low-precision training framework. We are making the following contributions to drive the design of next-generation FP8 low-precision training and inference framework for LLMs. * A new FP8 mixed-precision training framework. It unlocks 8-bit weights, gradients, optimizer, and distributed training gradually in an add-on fashion, which is convenient in use. This 8-bit framework can be used as a simple drop-in replacement for existing 16/32-bit mixed-precision counterparts, without requiring any changes to the hyper-parameters and training receipts.Additionally, we provide a Pytorch implementation that enables 8-bit low-precision training in a few lines of code. * A new family of GPT-style models trained with FP8. We apply the proposed FP8 scheme to GPT pre-training and fine-tuning (i.e., SFT and RLHF), and demonstrate its potentials on a variety of model scales ranging from 7B to 175B parameters. We equip prevalent parallel computation paradigms with FP8 supports, including tensor, pipeline, and sequence parallelisms, enabling the utilization of FP8 to train large foundation models. We open-source the first FP8 GPT training codebase based upon Megatron-LM <cit.> implementation. We expect the release of our FP8 framework will establish a new paradigm for next-generation low-precision training system dedicated to large foundation models.§ FP8 LLMSMixed-precision <cit.> has been widely used in LLM training to improve compute and memory efficiency. The most popular mixed-precision schemes are FP16-FP32 and BF16-FP32. Because of the restricted numerical range of FP16, FP16-FP32 scheme has been known instabilities for training large models<cit.>. Consequently, the community now commonly adopts BF16-FP32 for training LLMs, such as Megatron-Turing NLG-530B <cit.>, Bloom-175B <cit.> and Gopher <cit.>. The underlying reason is that BF16 has a wide dynamic range to maintain numerical stability while matching the performance of the full-precision FP32. Moreover, BF16 employs half the number of bits as compared to FP32, thus reducing considerable memory footprints while improving compute efficiency. FP8 is a natural evolution from 16-bit data formats to further reducing computing costs. However, training LLMs with reduced-precision FP8 poses new challenges. The dynamic range and representation precision of FP8[The details of FP8 data formats are presented in Appendix <ref>.] are much lower than BF16 and FP16, which inevitably induces more training collapses, such as loss spikes or even NaNs. To address the issues, tensor scaling techniques are proposed <cit.>. The core idea is multiplying higher precision values with a scaling factor prior to their casting to FP8 in order to move them into a range that better overlaps with the representable range of a corresponding FP8 format[The details of FP8 tensor scaling are introduced in Appendix <ref>.]<cit.>. 
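To make the tensor-scaling idea concrete, the following is a minimal sketch of per-tensor scaling into the E4M3 range. The function names are ours, the value 448 is the largest finite E4M3 number, and actual FP8 casting would be performed by a hardware or Transformer Engine kernel rather than the float emulation shown here.

```python
import torch

E4M3_MAX = 448.0  # largest finite value in the FP8 E4M3 format

def scale_to_fp8(x: torch.Tensor):
    """Per-tensor scaling: move the tensor's amax onto the FP8 representable maximum."""
    amax = x.abs().max().clamp(min=1e-12)
    scale = E4M3_MAX / amax
    x_scaled = (x * scale).clamp(-E4M3_MAX, E4M3_MAX)  # a real kernel would cast to FP8 here
    return x_scaled, scale

def unscale(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover the original dynamic range after FP8 compute."""
    return x_fp8 / scale
```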
Such a per-tensor scaling technique reduces data quantization errors while improving numerical stability and accuracy, thus enabling the utilization of the lower-precision FP8 for training large models.Unfortunately, the current support for FP8 low-precision training is restricted. Nvidia TE <cit.> only supports FP8 compute for linear layers in Transformer <cit.>, while leaving all other operations, such as weight update and gradient synchronization, still using higher precision. In this work, we present an extremely optimized FP8 mixed-precision strategy for LLM training. The new FP8 optimization includes three key perspectives: FP8 communication, FP8 optimizer, and FP8 distributed training. By integrating these aspects, the training of LLMs such as the 175B GPT-3 model can fully harness the advantages of FP8 low-precision and improve training efficiency. §.§ FP8 Gradient and All-Reduce Communication Existing mixed-precision training methodologies <cit.> typically employ 16-bit or 32-bit datatype for the computation and storage of gradients, resulting in a high bandwidth requirement for collective communication throughout the training process. We found that directly applying FP8 to gradients leads to a decrease in accuracy.The fundamental issue lies in the underflow and overflow problems arising from the low-bit all-reduce operation.Specifically, there are two standard methods aggregating gradients across GPUs during all-reduce: pre-scaling and post-scaling. Pre-scaling divides the gradient g_i calculated on the i-th GPU by the total number of GPUs (i.e., N) before being summed, which is formulated as:g = g_1/N + g_2/N + ⋯ +g_N/N.When N is large, this division can cause data underflow, especially for FP8 low-precision representation of gradients. To mitigate this issue, post-scaling performs the gradient summation first, followed by the division scaling during the gradient collection process: g = (g_1 + g_2 + ⋯ + g_N)/N.This post-scaling approach keeps the gradients close to the maximum value of the FP8 datatype, effectively alleviating the underflow issue. However, this approachencounters overflow issues when aggregating gradients. In contrast, we propose an automatic scaling technique to resolve both the underflow and overflow issues in the pre-scaling and post-scaling approaches. To be specific, we introduce an auto-scaling factor μ, that changes on the fly during the training,to reduce the occurrences of overflow and underflow in gradients:g'_i = μ· g_i.A statistical analysis is conducted on the gradient values of g'_i,with the objective of quantifying the proportion of values that attains the maximum feasible value within the FP8 representation range. If the ratio of the maximum value exceeds a specified threshold, i.e., 0.001%, μ is set to 1/2 in the subsequent training step, thereby mitigating the risk of overflow. Conversely, when the ratio consistently remains the threshold, we opt to exponentially increase μ to 2 overthe span of 1,000 training steps, thereby effectively mitigating the risk of underflow occurrences.Another key obstacle of FP8 collective communication lies in devising an effective strategy to manage the tensor-wise scaling factors that are associated with each gradient tensor. The current NCCL implementation <cit.> lacks the capability of performing all-reduce operation considering the additional tensor-wise scaling factors. 
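Before turning to that scaling-factor management problem, the auto-scaling rule described above can be written out explicitly. The sketch below is our reading of the rule (set μ to 1/2 on overflow, otherwise grow it exponentially toward 2 over roughly 1,000 steps); the 0.001% threshold follows the text, while the growth factor and function name are our assumptions.

```python
import torch

E4M3_MAX = 448.0
SATURATION_THRESHOLD = 1e-5      # 0.001% of values hitting the FP8 maximum
GROWTH = 2.0 ** (1.0 / 1000.0)   # reach mu = 2 over about 1,000 steps

def update_auto_scale(grad: torch.Tensor, mu: float) -> float:
    """One training step of the auto-scaling factor mu applied to gradients (g' = mu * g)."""
    saturated = (grad.abs() * mu >= E4M3_MAX).float().mean().item()
    if saturated > SATURATION_THRESHOLD:
        return 0.5                    # back off to mitigate overflow in the next step
    return min(mu * GROWTH, 2.0)      # otherwise creep upward to fight underflow
```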
Meanwhile, efficient implementation is also very challenging, especially considering that the NCCL gradient summation operates at sub-tensor level. This complexity increases significantly when incorporating updates for tensor-wise scaling factors.To overcome this issue, we propose a new mechanism that scales FP8 gradients across GPUs using a single shared scalar. To be specific, let (g'_i, s'_i) denote a scaling tensor which stores the weight gradient in the i-th GPU, where g'_i is a FP8 tensor and s'_i is the corresponding scaling factor. The actual weight gradient is g'_i / s'_i. Prior to the all-reduce operation over gradient tensors,we first gather the scaling factors s'_i of each gradient tensor on all GPUs and calculate the global minimum scaling factor s'_g as:s'_g = min(s'_1, s'_2, …, s'_N),where the global minimum scaling factor s'_g is shared across GPUs. We use this shared scaling factor s'_g to unify the rescaling of the gradient tensors across GPUs. In this way, all gradient tensors associated with the same weight use the same shared scaling factor to quantize the tensors into FP8 format on all GPUs:g”_i = FP8(s'_g·(g'_i / s'_i)).This approach reduces communication overhead by transmitting only a single scalar s'_g, making the additional synchronization step highly efficient. As the input tensors share the same scaling factor, it eliminates the need of considering all-reduce the scaling factors in parallel and allows for standard NCCL all-reduce operation to be performed. The final collected gradient is obtained as follows: g = g”_1 + g”_2 + ⋯ + g”_N,s = N · s'_g,where g is the final aggregated gradient and s is the corresponding scaling factor. Rescaling the scaling factor for the summed gradient g is equivalent to dividing g by N in theory.By implementing the aforementioned dual strategies of distributed and automated scaling, we can successfully realize FP8 low-bit gradient communication while preserving model accuracy. Furthermore, this approachentails storing gradients in FP8 and conducting communication in FP8 as well, thereby yielding reductions in GPU memory usage and communication bandwidth consumption. §.§ FP8 OptimizerIn the training of LLMs, Adam and its variants <cit.> are the most frequently-used optimization methods, that maintain copies of model weights, gradients, first-order and second-order gradient moments for model updates.Mixed-precision training <cit.> with Adam optimizer typically stores master weights, gradients and gradient moments in 32-bit float format for numerical stability <cit.>. Consequently, the Adam optimizer consumes 16 bytes of memory per parameter during training:4_master weights+ 4_gradients + 4 + 4 _Adam states = 16 bytes.When model size is large, the memory consumption of the variables in Adam will become a bottleneck.Previous work <cit.> has revealed that reducing precision of the variables in optimizer to 16-bit leads to accuracy degradation when training billion-scale models[BF16 lacks the precision needed for accuracy, while FP16 has a restricted dynamic range. Given these challenges, prevalent mixed-precision training methodologies rely on utilizing FP32 full-precision for master weights, gradients, and gradient moments.]. This prompts an evaluation of which variables in the optimizer should be allocated high precision and which can be accommodated with low-precision. 
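Before examining which optimizer variables tolerate reduced precision, it helps to pin down the shared-scalar all-reduce of the previous subsection in code. The sketch below assumes an initialized torch.distributed process group, emulates FP8 storage with ordinary float tensors, and uses our own function name; it follows the min-scaling, re-quantization, and rescaling steps described above.

```python
import torch
import torch.distributed as dist

def fp8_all_reduce(g_local: torch.Tensor, s_local: torch.Tensor, world_size: int):
    """All-reduce a scaled gradient pair (g_local, s_local); the true gradient is g_local / s_local."""
    # 1) Agree on a single shared scaling factor: the global minimum across all GPUs.
    s_global = s_local.clone()
    dist.all_reduce(s_global, op=dist.ReduceOp.MIN)

    # 2) Re-quantize the local gradient with the shared factor (an FP8 cast in a real kernel).
    g_requant = s_global * (g_local / s_local)

    # 3) A standard sum all-reduce is now valid because every rank uses the same scale.
    dist.all_reduce(g_requant, op=dist.ReduceOp.SUM)

    # 4) The aggregated gradient carries scale N * s_global, equivalent to dividing by N.
    return g_requant, world_size * s_global
```

With gradient communication handled this way, the question raised above, namely which optimizer variables actually require high precision, is addressed next.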
To clarify, we decouple the influence of data precision on the variables in the optimizer and investigate which one can be assigned lower precision, i.e., precision decoupling. We find a guidingprinciple: the gradient statistics can use lower precision, while the master weights necessitate high precision.More concretely, the first-order gradient moment can tolerate a high quantization error and can be assigned with low-precision FP8, while the second-order moment requires a higher precision, as analyzed in Sec. <ref>. This stems from the fact that, during model updates in Adam, the direction of the gradient holds greater significance than its magnitude.FP8 with tensor scaling can effectively preserve the distribution of the first-order moment as the high-precision tensor, though it introduces precision degradation to some extend. Calculating the square of gradients for the second-order gradient moment might lead to data underflow due to the typically small gradient values. Therefore, allocating a 16-bit higher precision is necessary to preserve numerical accuracy. On the other hand, we find that it is crucial to keep the master weights using high precision.The underlying reason is that weight updates can sometimes become extremely small or large during training, higher precision for the master weights helps prevent loss of information when updating weights, ensuring more stable and accurate training. In the implementation, the master weights have two viable options: utilizing either FP32 full-precision or FP16 with tensor scaling. FP16 with tensor scaling offers the advantage of conserving memory without compromising accuracy. Consequently, our default choice is to employ FP16 with tensor scaling for storing master weights in the optimizer. Our FP8 mixed-precision optimizer consumes 6 bytes of memory per parameter during training:2_master weights+ 1_gradients + 1 + 2 _Adam states = 6 bytes.This new low-bit optimizer reduces memory footprints by 2.6x in comparison to the previous solution, as exemplified in Eq. (<ref>). Noteworthily, this is the first FP8 optimizer for LLM training. The experiments in Sec. <ref> show that FP8 optimizer can preserve model accuracy at various scales, ranging from 125M to 175B parameters. §.§ FP8 Distributed Parallel TrainingTraining LLMs like GPT-3 requires distributed learning strategies for parallelizing across GPUs. The frequently-used strategies include data parallelism, tensor parallelism, pipeline parallelism, and sequence parallelism. Each parallelism has its own meritsand has been used in a complementary fashion in existing systems <cit.>.For FP8 supports of these strategies, neither data parallelism nor pipeline parallelism necessitates any specific modifications, because they do not involve additional FP8 compute and communication when splitting data batches or model layers into segments across devices. Tensor parallelism partitions individual layers of a model across multiple devices, such that the shards of weight, gradient and activation tensors are placed on separate GPUs, instead of a single one. To equip tensor parallelism with FP8, we convert the sharded weight and activation tensors to FP8 format for linear layer computation, enabling the forward compute and backward gradient collective communication all using FP8.On the other hand, sequence parallelism splits input sequences into multiple chunks and the sub-sequences are fed to different devices to save activation memory. As shown in Fig. 
<ref>, sequence and tensor parallelism are performed in parallel to different parts of a Transformer model to make the best use of the available memory and improve training efficiency. There is a converter g between sequence and tensor parallel regions to all-gather sequence partitions in the forward pass (or reduce-scatter tensor segments in the backward pass). We add an FP8 datatype conversion prior to g, such that the all-gather (or reduce-scatter) operation uses FP8 low-bit activation to save communication cost across GPUs. In addition, Zero Redundancy Optimizer (ZeRO) <cit.> is another frequently-used distributed learning technique in large model training. The core idea of ZeRO is to shade model states over devices, such that each device only hold a fraction of data (e.g., master weights, gradients, and optimizer states) required for a training step. To reduce memory consumption, ZeRO method generally splits a single tensor into multiple partitions and distributes them to different devices. Directly applying FP8 to ZeRO is infeasible, because it is difficult to handle the scaling factors associated with the FP8 partitions. The per-tensor scaling factors should be distributed along with FP8 partitions. To address this issue, we implement a new FP8 distribution scheme that distributes each tensor as a whole across devices, rather than partitioning it into multiple sub-tensors as in ZeRO. The distribution of FP8 tensors is processed in a greedy manner, as outlined in Alg. <ref>. Specifically, our method first sorts the tensors of model states according to their sizes, and then distributes the tensors to different GPUs based upon the remaining memory size of each GPU. The distribution follows the principle that the GPUs with larger remaining memory get a higher priority in receiving new distributed tensors. In this way, the tensor scaling factors can be distributed along with the tensors smoothly, while reducing communication and compute complexity. Figure <ref> presents a visual illustration of the difference in ZeRO tensor partitioning between scenarios with and without scaling factors.§ EXPERIMENT In this section, we assess the effectiveness of the proposed FP8 mixed-precision training approach on GPT-style LLMs, including a wide range of model scales, from 125 million to 175 billion parameters. For performance ablation, we compare GPT models trained with FP8 against those trained with half-precision BF16 and full-precision FP32. For generality evaluation, we conduct experiments encompassing both FP8 low-bit pre-training and fine-tuning, considering instruction tuning and human preference alignment.§.§ Experimental Setup §.§.§ Training Dataset Our pre-training data is constructed using open-sourced language collections from several sources, including CommonCrawl[https://commoncrawl.org], The Pile <cit.>, C4 <cit.>, OpenWebText <cit.>, CC-NEWS <cit.>, CC-Stories <cit.>, Redpajama <cit.>, and Wikipedia[https://wikipedia.org].We apply fuzzy deduplication <cit.> across CommonCrawl snapshots to enhance data quality.Tab. <ref> in Appendix <ref> provides details of our pre-training data, including information such as the number of tokens from each source and associated sampling weights.For a more comprehensive understanding of the data and its cleaning pipeline, readers are encouraged to refer to Appendix <ref>. 
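Returning briefly to the ZeRO distribution scheme of the previous section, the greedy whole-tensor placement of Alg. <ref> can be paraphrased as follows; the function name and the capacity bookkeeping are illustrative assumptions rather than the actual implementation.

```python
def distribute_fp8_tensors(tensor_sizes: dict, num_gpus: int, capacity: float) -> dict:
    """Greedy placement: each FP8 tensor (with its scaling factor) is assigned as a whole to one GPU."""
    remaining = [capacity] * num_gpus
    placement = {gpu: [] for gpu in range(num_gpus)}
    # Largest tensors first; always hand the next tensor to the GPU with the most free memory.
    for name, size in sorted(tensor_sizes.items(), key=lambda kv: kv[1], reverse=True):
        gpu = max(range(num_gpus), key=lambda g: remaining[g])
        placement[gpu].append(name)
        remaining[gpu] -= size
    return placement
```

The remainder of the experimental setup continues below.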
Moreover, for instruction tuning, we follow the same settings as Vicuna-v1.1<cit.>, which uses a publicly user-shared instruction following data <cit.>.For reinforcement learning with human feedback, the training data we used is a combination of the Anthropic's Helpful and Harmless dataset <cit.> and Open-Assistant dataset <cit.>. The training framework and associated configurations align with the publicly available AlpacaFarm <cit.>. §.§.§ Model Configuration The model architecture we used is a decoder-only Transformer <cit.>, which has been widely-used in recent generative LLMs like PaLM <cit.>, OPT <cit.>, and LLaMA <cit.>. In addition to the base architecture, we integrate several modifications proposed recently to improve model efficiency and effectiveness.1) Rotary Positional Embedding: Drawing inspiration from recent successful experiments <cit.>, we incorporate rotary positional embeddings (RoPE) <cit.> into our approach. This addition enables us to capture both absolute and relative positions information, enhancing performance especially when extrapolating to larger context windows. 2) Flash Attention: The standard attention implementation is bottlenecked by memory access <cit.>. Flash Attention <cit.> proposed an IO-aware exact attention algorithm which uses tiling to reduce the amount of HBM accesses, achieving substantial acceleration. We train the models using the proposed FP8 optimizer, which is built upon Adam <cit.> with decoupled weight decay <cit.>, following the common practise with the decay rates β_1 = 0.9, β_2 = 0.95, and weight decay = 0.1. The learning rate schedule is cosine-like, and the final learning rate is 10% of the maximal learning rate. We train the models for 100B tokens in total with a batch size of 4M tokens, and the input sequence length is set to 2048. The model warm-up is conducted for 1,000 iterations. Tab. <ref> presents the details of model configurations and the corresponding training settings. The training is conducted on Azure NDv5 H100 GPU platform <cit.>. §.§ Main Results §.§.§ Model PerformanceWe first compare the performance of models trained using FP8 mixed-precision with those trained using BF16. In Fig. <ref>, the pre-training loss over tokens is displayed for GPT models of 7B, 13B, and 175B parameters. The training configurations and hyper-parameters remain consistent across models trained with FP8 and BF16.The only difference lies in the mixed-precision schemes utilized. As shown in Fig. <ref>, the loss curves almost overlap with each other.The results unequivocally demonstrate that the proposed FP8 mixed-precision scheme can achieve equivalent performance to the prevalent higher-precision BF16 scheme <cit.> across a diverse array of model scales.Also, we evaluate the pre-trained models on a wide range of downstream tasks, including HellaSwag (HS) <cit.>, Lambada <cit.> BoolQ <cit.>, PIQA <cit.>, COPA <cit.>, Winogrande <cit.>, Arc <cit.>, and OpenbookQA (ObQA) <cit.>. As reported in Tab. <ref>, the FP8 pre-trained models exhibitcomparable zero-shot performance in comparison to their BF16 counterparts. This result provides further validation that models pre-trained with FP8 low-precision maintain both accuracy and intrinsic in-context learning capabilities at a level comparable to their high-precision counterparts.Furthermore, we leverage the proposed FP8 mixed-precision approach for fine-tuning LLMs in instruction following. 
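As a side note on the pre-training optimization settings listed above, one common reading of a "cosine-like" schedule with a 1,000-iteration linear warm-up and a final learning rate at 10% of the maximum is sketched below; the exact schedule shape and the function name are our assumptions, since the text does not specify them further.

```python
import math

def lr_at_step(step, max_lr, total_steps, warmup_steps=1000, final_ratio=0.1):
    """Cosine-like decay with linear warm-up; the final LR is 10% of the maximum."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return max_lr * (final_ratio + (1.0 - final_ratio) * cosine)
```

The fine-tuning experiments are described next.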
For a fair comparison, we follow the same instruction tuning settings as Vicuna-v1.1 <cit.>, which adopts the open-sourced LLaMA-7B <cit.> as the base model for fine-tuning. Fig. <ref> presents the fine-tuning loss, where the curves corresponding to BF16 and FP8 display a notable degree of overlap. Meanwhile, the win-rate of our FP8 fine-tuned models against Davinci-003 <cit.> is also comparable to that of Vicuna-v1.1, which is fine-tuned using BF16 half-precision, as reported in Tab. <ref>. This indicates that our FP8 low-bit training scheme is versatile, as it is applicable not only to pre-training phase but also to downstream fine-tuning tasks. In addition, we further apply the proposed FP8 mixed-precision scheme to reinforcement learning from human feedback (RLHF), a more complex process to align LLMs with user preferences.Following the same training setting as AlpacaFarm <cit.>, a recent RL framework for LLM alignment, we optimize policy models with PPO algorithm <cit.>. The solely difference lies in the choice of mixed-precision training schemes, i.e., BF16 v.s. FP8. From the results reported in Fig. <ref> and Tab. <ref>, we observe a notable reduction in memory utilization, for instance, a 32% memory reduction concerning model weights and a 62% reduction concerning optimizer states. Consequently, it can be inferred that FP8 is capable of replicating the BF16 mixed-precision for RLHF training. Thisunderscores the broader applicability and versatility of our FP8 low-bit training solution. §.§.§ System Performance In this section, we evaluate system-level performance of FP8 mixed-precision, considering communication efficiency, memory utilization, and the overall speed, with an emphasis on cost savings. Our method employs 8-bit gradients for all-reduce collective communication among GPUs. Theoretically, this results in a 75% reduction in communication costs when compared to the mainstream 32-bit scheme (Despite BF16 mixed-precision computing gradients using 16-bit precision, it still employs 32-bit precision for all-reduce communication <cit.>).Due to the impact of system transmission loss, the observed practical reduction during GPT model training falls within the range of 63% to 65%, as indicated in Table <ref>. Furthermore, it is worth noting that the recent Nvidia Transformer Engine (TE) <cit.> still relies on full-precision FP32 for collective communication,resulting in the same level of reduction for our FP8 solution. When training GPT models with identical batch sizes, FP8 mixed-precision can lead to a reduction in memory footprint ranging from 28% to 39% when compared to BF16, as reported in Tab. <ref>. These reductions in memory consumption are attributed to the FP8 gradient and FP8 optimizer techniques we have introduced. Moreover, compared with TE <cit.>, our solution is also very competitive, obtaining 36.1%, 36.0%, and 42.1% additional memory reductions for different model sizes, i.e., GPT-7B, 13B, and 175B. Although TE employs FP8 for compute, it still uses high-precision optimizer and gradients, which consumes much more memory than our solution. In addition, the saved memory in our method can be used to train larger batch size or longer sequence. For example, when employing 32 H100 GPUs with a memory capacity of 80GB, our approach enables the training of models with a context of 4,096 tokens, accommodating up to 175 billion parameters. In contrast, TE can only accommodate models with a context of 2,048 tokens. 
This showcases the potential of integrating our FP8 mixed-precision training into existing LLMs, empowering them to train longer sequences with the same GPU resources. Moreover, our FP8 mixed-precision scheme shows a superior training throughput compared to the prevalent BF16 scheme, achieving a notable speed-up of 75% when applied to the GPT-175B model. The model FLOPS utilization (MFU) of FP8 mixed-precision training is 34.2% on H100 GPUs, 37.3% higher than that of TE. These findings provide substantial evidence that our FP8 scheme effectively conserves memory, reduces communication costs during the training of large models, and ultimately enhances system utilization efficiency on the latest H100 GPU platform.

§.§ Ablation Study

We ablate various design choices of the FP8 mixed-precision training strategy for LLMs and report the performance in Tab. <ref> – <ref> and Fig. <ref> – <ref>. The ablation experiments are conducted on GPT models, whose architectures and training settings are elaborated in Tab. <ref>. Importantly, our ablation study yields several guidelines for the effective utilization of the 8-bit datatype in LLM training, which can facilitate future research on low-bit model training.

Communication. We first analyze the limitations of the conventional pre-scaling and post-scaling methods when aggregating low-bit gradients during the all-reduce communication process. As shown in Fig. <ref>, we conduct a statistical analysis on the SNR, underflow rate, and overflow rate of weight gradients across different Transformer blocks. It is observed that the pre-scaling method has a relatively larger underflow rate when quantizing gradients from 32-bit to 8-bit, while the post-scaling method has a higher overflow rate. In contrast, the proposed auto-scaling technique can diminish both the underflow ratio and the overflow ratio, while obtaining a much better SNR, as shown in Fig. <ref> (a). This demonstrates the effectiveness of the auto-scaling method in reducing quantization errors when utilizing the 8-bit datatype for gradient all-reduce.

Table: Activation-related communication volume reduction in sequence and tensor parallelism, including the all-gather operator on activations and the reduce-scatter on activation gradients.

Model    | TP | PP | DP | Micro BS | Mixed Precision | Act-related Comm. Rate (%) | Act-related Comm. Volume (GB)
GPT-13B  | 2  | 1  | 16 | 2        | BF16            | 12.9                       | 4.7
GPT-13B  | 2  | 1  | 16 | 2        | FP8 (Ours)      | 5.3                        | 3.1
GPT-175B | 8  | 4  | 4  | 1        | BF16            | 14.9                       | 5.9
GPT-175B | 8  | 4  | 4  | 1        | FP8 (Ours)      | 5.2                        | 3.9

Table: Comparing ZeRO distribution methods in terms of memory load across GPUs. Here "Min" and "Max" denote the minimum and maximum memory utilization (GB) observed across GPUs. Our FP8 ZeRO method uses less memory while achieving memory-aware load balancing.

Model    | TP | PP | DP | Micro BS | Mixed Precision | Min GPU Memory (GB) | Max GPU Memory (GB)
GPT-7B   | 1  | 1  | 32 | 2        | BF16            | 69.07               | 69.63
GPT-7B   | 1  | 1  | 32 | 2        | FP8 (TE)        | 76.97               | 77.28
GPT-7B   | 1  | 1  | 32 | 2        | FP8 (Ours)      | 49.06               | 49.36
GPT-13B  | 2  | 1  | 16 | 2        | BF16            | 67.98               | 68.18
GPT-13B  | 2  | 1  | 16 | 2        | FP8 (TE)        | 73.68               | 76.36
GPT-13B  | 2  | 1  | 16 | 2        | FP8 (Ours)      | 48.45               | 48.85
GPT-175B | 8  | 4  | 4  | 1        | BF16            | 65.60               | 66.12
GPT-175B | 8  | 4  | 4  | 1        | FP8 (TE)        | 69.04               | 69.57
GPT-175B | 8  | 4  | 4  | 1        | FP8 (Ours)      | 38.64               | 40.28

Optimizer. We further ablate the impact of reduced precision for the variables in the AdamW optimizer. We set the BF16 mixed-precision optimizer as the baseline, since it has been widely used in existing LLM training frameworks <cit.>. Tab. <ref> presents the settings of reduced precision for the variables, while Fig. <ref> plots the corresponding training losses. We observe that: 1) FP8 master weight induces performance degradation (see the #2a vs. #3 lines in Fig.
<ref>), while FP16 can maintain accuracy as FP32 (see #2a vs. #0 and #1) but requiring using tensor scaling. It reveals that the master weight is precision-sensitive. This can be attributed to the master weight's role in updating weights, which tend to exhibit small magnitudes, necessitatinghigh precision to maintain accuracy. 2) The training loss of BF16 master weight is slightly higher than that of FP16 with a scaling factor because BF16 has fewer mantissa bits, resulting in lower precision (see #2a vs. #2b). 3) The second-order gradient moment is more precision-sensitive than the first-order one, because the square calculation is easy to cause underflow and leads to accuracy degradation. Utilizing FP8 for the second-order gradient moment can lead to divergent training loss (see the #4 dot in Fig. <ref>). Parallelism. In our FP8 LLM training framework, we introduce FP8 low-bit convertors into sequence parallelism and tensor parallelism to reduce activation communication costs across GPUs. Here we conduct an analysis experiment to count the activation-related communication volume during GPT model training, and report the numbers in Tab. <ref>. It is observed that our FP8 parallel scheme results in a substantial reduction of 34% in activation-related communication costs compared to the original method utilizing BF16. Furthermore, in ZeRO distributed training, our method distributes each FP8 tensor along with its associated scaling factor as a whole, rather than partitioning the tensor into splits across GPUs. This strategy not only results in more GPU memory savings but also maintains a balanced memory load across GPUs, as demonstrated in Tab. <ref>.§ RELATED WORKMixed-precision Training. Efficient training through reduced mixed-precision has been widely used in modern deep learning to save computing costs. While some works have taken bit-reduction to the extreme, i.e. 1-bit binary networks <cit.>, they have not been successful in maintaining model accuracy <cit.>. The most practical scheme now is the FP16 half-precision method <cit.>, which can maintain accuracy while improving training efficiency.The computations during forward pass and back propagation use FP16 while the master weights use FP32. Since FP16 has a narrower dynamic range, FP16 mixed-precision entails loss scaling <cit.> to prevent loss of accuracy. Fortunately, the need for loss scaling can be avoided by using BF16 datatype, because BF16 maintains the same dynamic range as the full-precision FP32. This results in that large model training now prefers to use BF16 mixed-precision scheme, which is more stable during training <cit.>.FP8 is a natural progression from 16-bit data formats to further reducing computing cost. Early pioneering efforts in FP8 low-bit model training <cit.> have largely remained at the simulation stage.Consequently, there exists a notable gap between the projected capabilities of these approaches and their actual performance on hardware <cit.>.With the advent of Nvidia Hopper GPU architecture <cit.>, FP8 is emerging as a viable and practical data type for the next-generation low-precision training, as discussed in <cit.>.At present, the Nvidia Transformer Engine (TE) <cit.> serves as the primary framework for FP8 mixed-precision training. However, its support for FP8 usage remains somewhat constrained. 
TE's current implementation restricts FP8 usage solely to weight computation, retaining the storage of model weights and gradient calculations with 16-bit data types.Consequently, the end-to-end speed-up, memory and communication cost savings are limited. In contrast, our work infiltrates FP8 gradient, optimizer, and distributed training into the whole progress of model training, fully unveiling the capabilities of FP8.Large Language Models. Recent years have witnessed a substantial evolution in the field of LLMs. Autoregressive language modeling – predicting the future of a text sequence from its past – provides a simple yet powerful objective that admits formulation of numerous tasks. While there exist alternative methodologies, such as masked language modeling <cit.> and permutation language modeling <cit.>, the autoregressive method now is more promising because of its strong performance. Following the scaling laws <cit.> and the refined laws <cit.>, variousLLMs are have been proposed, including dense models: GPT-3 <cit.>, Jurassic-1 <cit.>, Gopher <cit.>, Chinchilla <cit.>,Bloom <cit.>, OPT <cit.> Megatron-Turing NLG <cit.>, PaLM <cit.>, LaMDA <cit.>, LLaMA <cit.>, and sparse models: GLaM <cit.>, and Switch transformers <cit.>. Each of them has demonstrated remarkably competitive few-shot performance across a wide range of tasks at the time of their respective releases. Nonetheless, these models still encounter challenges, such as overwhelming computational requirementsand the need for acquiring more high-quality training data. In this work, we delve intothe utilization of low-precision techniques to mitigate the training costs,which is a crucial step for the continued expansion of language models. Low-precision training has been widely used in LLM training to reduce compute cost. OPT <cit.> and GLM <cit.> utilize FP16 for forwards and backwards and FP32 for optimizer states and master weights, to reduce the GPU memory usage and improve training efficiency. Bloom <cit.> find that FP16 can cause numerical instabilities and irreversible divergences, especially when training models larger than 100B parameters, because FP16's dynamic range is limited. Consequently, Bloom and other LLMs, such as Gopher <cit.> and Chinchilla <cit.>, adopt BF16 mixed-precision, because BF16 has a wide dynamic range that is the same as FP32. LLM training and tuning with 8-bit low-precision were not well-explored in previous works, because the hardware support for FP8 is not available before the release of Nvidia Hopper infrastructure. This work presents the first exploration of FP8 pre-training and fine-tuning for LLMs, while proposing an extremely-optimized FP8 mixed-precision scheme. We hope this work could facilitate future research in FP8 and, potentially, extend to exploring even lower precision training, such as 4-bit and 1-bit. § CONCLUSION In this work, we explore 8-bit training for LLMs. We introduce a new FP8 mixed-precision training framework, which incorporates 8-bit collective communication, optimizer, and distributed parallel training in an incremental manner. To our best knowledge, this is the first work infiltrating FP8 compute, storage and communication into the whole progress of large language model training. 
Extensive experiments demonstrate the proposed method effectively diminishescommunication overhead and curtails memory utilization in the context of GPT model training at various scales.In future work, we plan to scale up the size and training steps of the FP8 GPT models and further train them with our 8-bit mixed-precision scheme. Moreover, we will also use the proposed FP8 scheme to train multi-modal largemodels, and explore low-bit deployment of LLMs on various edge devices, such as smart phones.§ CONTRIBUTION AND ACKNOWLEDGEMENTThis project was initially proposed by Han Hu and Peng Cheng, who are the directional lead.Shuguang Liu served as the product lead throughout the project. The contributions for all the co-authors are detailed as follows:FP8 Framework: Kan Wu, Houwen Peng, Ze Liu, Peng Cheng, Han HuSystem: Yifan Xiong, Ziyue Yang, Yuxiang Yang, Guoshuai Zhao, Peng ChengHardware Infrastructure: Guoshuai Zhao, Yuxiang Yang, Yifan Xiong, Peng Cheng, Shuguang Liu, Joe ChauData: Ruihang Li, Miaosen Zhang, Jia Ning, Chen Li, Ruizhe Wang, Houwen Peng, Han HuPre-training: Yixuan Wei, Kan Wu, Ze Liu, Miaosen Zhang, Zheng Zhang, Houwen Peng, Han Hu Alignment (SFT, RS, and RLHF): Bolin Ni, Jingcheng Hu, Yixuan Wei, Houwen Peng, Han HuEvaluation: Yixuan Wei, Bolin Ni, Jingcheng HuProduct Engineering: Yuxiang Yang, Kan Wu, Yifan Xiong, Ziyue Yang, Guoshuai Zhao, Peng Cheng We thank Eric Chung, Bita Darvish Rouhani, Yu Pei, Hyunseung Harry Yoo, Zhenghong Zhou, Gongrui Zhang, and Zhirong Wu for helpful discussions.We thank Baining Guo and Lidong Zhou for their guidance and support for this project.plainnat § APPENDIX§.§ FP8 Data FormatsIn September 2022, NVIDIA, ARM, and Intel published FP8 specification for standardization as an interchange format for AI <cit.>. The industry has moved from 32-bit precision to 16-bit, and now even 8-bit precision for AI model training. This development reflects a broader industry trend that has transitioned from high-precision to low-precision training. Notably, the proposed FP8 specification introduces two distinct data types, E5M2 and E4M3, which offer a trade-off between a larger range and higher precision of stored values <cit.>. * E4M3 consists of 1 sign bit, 4 exponent bits and 3 bits of mantissa. It can store values up to +/-448 and NaN.* E5M2 consists of 1 sign bit, 5 exponent bits and 2 bits of mantissa. It can store values up to +/-57344, +/- inf and NaN. The FP8 format <cit.> roughly follows the IEEE 754 standard. Compared to higher precision data formats such as FP16 and FP32, FP8 suffers from two kinds of representation degradation:* Lower representation range. The representation range in a data format specifies the range between the maximum and minimum values that the format can accurately represent. There are two modes, a normal mode, which defines a regular range with relatively constant precision, and a subnormal mode, which extends the range to represent smaller values with lower precision. The normal rangeprimarily depends on the number of exponent (E) bits, with more E bits resulting in a larger normal range. On the other hand, the subnormal range is primarily influenced by the number of mantissa (M) bits, where an increase in M bits leads to a larger subnormal range. As illustrated in Tab. <ref>, the representation range of FP8 is notably narrower compared to that of FP16 and FP32, especially in the case of the S1E4M3 sub-format (S denotes the sign bit). 
This discrepancy represents the primary challenge when employing FP8 for training large models. * Lower representation precision. The limited number of mantissa (M bits) leads to quantization representation errors. Due to the considerably fewer M bits in FP8, the representation precision of FP8 is substantially lower than that of FP16, as depicted in Tab. <ref>. This challenge stands as another significant hurdle when considering the use of FP8 for training large models. FP8 consists of two sub-formats: S1E4M3 and S1E5M2. The former offers a narrower representation range but higher precision, while the latter provides a larger range but lower precision. These two sub-formats give users the flexibility to strike a balance between their requirements for range and precision in model training. §.§ FP8 Tensor ScalingWe now discuss the underlying mechanisms for how large model training with FP8 overcomes the challenges associated with representation range and precision degradation. The key technique behind is tensor scaling, which scales the tensor values that originally locate out the representation range of a data format to its comfort zone, as visualized in Fig. <ref>.The pioneer scaling techniques <cit.> apply a global scaling factor to the loss, such that gradients of all layers are scaled by a single adaptive factor. The utilization of the global loss scaling technique, in conjunction with various other training strategies, has facilitated the widespread adoption of FP16 mixed-precision training on V100 and A100 GPUs. Remarkably, this approach has resulted in minimal to no degradation in accuracy, particularly for small to medium-sized models <cit.>. Nonetheless, when dealing with super-large models or complex tasks, such as in the training of models like DALL-E <cit.>, the global loss scaling technique still encounters significant underflow issues. As a consequence, block-wise <cit.> and layer-wise <cit.> gradient scaling are proposed. While the global scaling technique enables almost no accuracy drop for FP16 training (with a range of [5.96E-8, 6.55E+4]), the fine-grained per-tensor scaling will enable stable model training using even shallower range by FP8 (with a range of [1.95E-3, 448] for E4M3 and a range of [1.53E-5, 5.73E+4] for E5M2). Fig. <ref> shows that the representation range of FP8 has been large enough to deal with general model training. In the per-tensor scaling technique, various strategies are available for choosing the suitable scaling factor for a given FP8 tensor. Two common approaches are “just-in-time scaling" and “delayed scaling" <cit.>.* Just-in-time scaling. This strategy involves determining the scaling factor based on the maximum absolute value (amax) of the tensor being generated. However, in practical applications, this approach is often infeasible because it necessitates multiple passes through the data. Specifically, the operator first produces and writes out the output in higher precision, then calculates the maximum absolute value of the output, and finally applies this scaling factor to all values to obtain the final FP8 output. This process introduces a significant amount of overhead, which can substantially reduce the benefits of using FP8. * Delayed scaling. This strategy involves selecting the scaling factor based on the maximum absolute values observed in a certain number of preceding iterations. 
This approach allows for the full performance benefits of FP8 computation but necessitates the storage of a history of maximum values as additional parameters of the FP8 operators.

§.§ Pre-training Data

Tab. <ref> presents an overview of our collected data sources along with the corresponding sampling weights employed in pre-training. The arXiv and StackExchange subsets are collected from Redpajama <cit.>, while the BookCorpus2 <cit.>, Books3 <cit.>, DM-Math <cit.>, Gutenberg <cit.>, HackerNews[https://news.ycombinator.com], NIH ExPorter[https://exporter.nih.gov], OpenSubtitles <cit.>, and USPTO[https://bulkdata.uspto.gov] subsets are extracted from The Pile <cit.>. The Wikipedia data is downloaded from HuggingFace <cit.>. We use the 20220301 dump, including 24 languages: bg, ca, cs, da, de, en, es, fr, hi, hr, hu, it, jp, ko, nl, pl, pt, ro, ru, sl, sr, sv, uk, zh. We pre-process 11 CommonCrawl snapshots, ranging from 2018 to 2023, with the CCNet pipeline <cit.>. This process involves data deduplication at the line level, followed by language identification utilizing a fastText linear classifier <cit.> to eliminate non-English pages. A filtering mechanism based on an n-gram language model is employed to exclude low-quality content. In addition, we train a linear classifier <cit.> to distinguish documents similar to Wikipedia pages from randomly sampled CommonCrawl documents. Documents not classified as resembling Wikipedia are excluded. Finally, we perform fuzzy deduplication <cit.> across all the processed snapshots from CommonCrawl.

We collect Python code data from Github using a repository list provided by Bing indexing <cit.>. The cleaning of the code data includes three steps. First, we remove control characters, except for \t and \n. Next, we remove copyright comments in the code. An alphanumeric rate filter is then applied, removing lines with a rate below 0.5 if they are comments, and discarding the entire file if its overall alphanumeric rate is less than 0.98. Files with fewer than 5 lines or a maximum line length exceeding 1,000 characters are also discarded. Also, files with an average line length of more than 100 characters are discarded. Lastly, a pattern search is conducted to identify key Python keywords (e.g., import, from, def, class, if, for, try, etc.) within the code. Files containing fewer than 3 instances of these keywords are eliminated. This comprehensive process ensures that the remaining Python code data is of high quality and suitable for use in academic research. We additionally add Python code from Stack <cit.>, and perform fuzzy deduplication within all the collected Python code.

[Source: Houwen Peng, Kan Wu, Yixuan Wei, Guoshuai Zhao, Yuxiang Yang, Ze Liu, Yifan Xiong, Ziyue Yang, Bolin Ni, Jingcheng Hu, Ruihang Li, Miaosen Zhang, Chen Li, Jia Ning, Ruizhe Wang, Zheng Zhang, Shuguang Liu, Joe Chau, Han Hu, Peng Cheng, "FP8-LM: Training FP8 Large Language Models," arXiv:2310.18313 (cs.LG, cs.CL), October 27, 2023.]
We introduce Hyper-Skin, a hyperspectral dataset covering a wide range of wavelengths from the visible (VIS) spectrum (400nm - 700nm) to the near-infrared (NIR) spectrum (700nm - 1000nm), uniquely designed to facilitate research on facial skin-spectra reconstruction. By reconstructing skin spectra from RGB images, our dataset enables the study of hyperspectral skin analysis, such as melanin and hemoglobin concentrations, directly on the consumer device. Overcoming limitations of existing datasets, Hyper-Skin consists of diverse facial skin data collected with a pushbroom hyperspectral camera. With 330 hyperspectral cubes from 51 subjects, the dataset covers the facial skin from different angles and facial poses. Each hyperspectral cube has dimensions of 1024×1024×448, resulting in millions of spectral vectors per image. The dataset, carefully curated in adherence to ethical guidelines, includes paired hyperspectral images and synthetic RGB images generated using real camera responses. We demonstrate the efficacy of our dataset by showcasing skin spectra reconstruction using state-of-the-art models on 31 bands of hyperspectral data resampled in the VIS and NIR spectrum. This Hyper-Skin dataset would be a valuable resource to the NeurIPS community, encouraging the development of novel algorithms for skin spectral reconstruction while fostering interdisciplinary collaboration in hyperspectral skin analysis related to cosmetology and skin's well-being. Instructions to request the data and the related benchmarking codes are publicly available at: <https://github.com/hyperspectral-skin/Hyper-Skin-2023>.

§ INTRODUCTION

Hyperspectral imaging offers a comprehensive and non-invasive approach for facial skin analysis, capturing detailed spatio-spectral information across a wide range of wavelengths <cit.>. This three-dimensional hyperspectral cube surpasses the limitations of single-point measurements, providing a deeper understanding of facial skin characteristics and their spatial distribution <cit.>. Previous studies have demonstrated the potential of hyperspectral skin analysis in dermatology <cit.>, cosmetics <cit.>, and skin's well-being <cit.>, paving the way for advanced analysis and applications in these domains. This paper introduces "Hyper-Skin", a hyperspectral skin dataset uniquely designed to facilitate the development of algorithms targeting consumer-based cosmetology applications.
This unique dataset is curated with this specific goal in mind, focusing its practical relevance within the consumer-based cosmetology and skin beauty.Despite the potential of hyperspectral skin analysis on cosmetology and skin beauty, the high cost and limited accessibility of hyperspectral imaging systems have limited their widespread adoption.Consumer cameras, particularly those embedded in smartphones, have become an integral part of daily life and are extensively used for capturing selfies and everyday images.Hence, many works study the use of RGB images from consumer cameras for skin analysis <cit.>.While RGB images have been used for certain skin analysis tasks <cit.>, they lack the ability to capture the comprehensive spatio-spectral information provided by hyperspectral imaging, limiting the depth of skin analysis.In light of the prevalence of consumer cameras, an intriguing idea emerges: Can we reconstruct valuable information from expensive hyperspectral cubes using accessible RGB images, enabling hyperspectral skin analysis directly on consumer devices?This highlights the need for a comprehensive dataset to develop computational reconstruction methods for the question above.While RGB datasets such as those from the International Skin Imaging Collaboration (ISIC) competition series (2016 - 2020) capture visual information, they lack the corresponding hyperspectral data required for studying hyperspectral reconstruction <cit.>.On the other hand, hyperspectral datasets enable the exploration of relationships between skin spectra and spatial distribution. Although the RGB counterpart can be synthetically generated from a given hyperspectral cube using a known camera response function, publicly available hyperspectral datasets focusing specifically on facial skin analysis are limited and often inaccessible to the public. Furthermore, existing hyperspectral datasets primarily focus on the visible (VIS) spectrum (400nm - 700nm), disregarding the valuable near-infrared (NIR) spectrum (700nm - 1000nm). These limitations highlight the necessity for ahyperspectral dataset that addresses these gaps and facilitates the development of low-cost and accessible hyperspectral skin analysis on consumer devices. 
Our Contributions Our Hyper-Skin dataset is uniquely designed to unlock the potential of hyperspectral skin analysis directly on the consumer device.With high spatial and spectral resolution, i.e., 1024×1024×448, Hyper-Skin offers an extensive collection of hyperspectral cubes, yielding over a million spectra per image.Notably, we offer synthetic RGB images synthesized from 28 real camera response functions, allowing for versatile experimental setups.What sets Hyper-Skin apart is its comprehensive spectral coverage, including both the VIS and NIR spectrum, facilitating a holistic understanding of various aspects of human facial skin, enabling new possibilities for consumer applications to see beyond the visual appearance of their selfies and gain valuable insights into their skin's physiological characteristics, such as melanin and hemoglobin concentrations.§ RELATED WORK The potential hyperspectral solutions in the skin-related analysis have encouraged the curation of hyperspectral datasets.This section reviews existing hyperspectral datasets related to skin analysis, as summarized in Table <ref>, and reconstruction aiming to provide affordable hyperspectral solutions accessible to consumers.Skin-related Datasets SpectraCam and SpectraFace hyperspectral cameras <cit.> has been used to collect the data of normal and pathological skin with a spectral resolution of 31 wavelengths in the VIS spectrum <cit.>.The hyperspectral dermoscopy dataset consists of 330 images, including 80 melanoma images, 180 dysplastic nevus images, and 70 images of other skin lesions, with a spatial resolution of 512×272 pixels and 16 spectral bands ranging from 465nm to 630nm <cit.>. Hyperspectral dataset of 20 nude mice was collected by <cit.> as an alternative to human skin <cit.> to study acute changes in oxygenation and perfusion in irradiated skin.For markerless tracking in spinal navigation,<cit.> captured hyperspectral images of the skin from 17 healthy volunteers, with a spatial resolution of 1080×2048 and 41 spectral bands in the VISt to NIR range (450-950nm). Both work in <cit.> and <cit.> used the same Specim Spectral Camera PS V10E to acquire hyperspectral data covering the visible to NIR range (380-1055nm) with 1040 bands and a spatial resolution of 450×1310. The former dataset by <cit.>, containing data from 80 subjects,focused on vein localization, whereas<cit.> used the dataset for a routine dermatological examinations.While references <cit.> involve capturing NIR spectral information, it's important to note that these datasets are not publicly accessible.Despite the potential of hyperspectral imaging in skin-related applications, most of these existing datasets relied on expensive imaging systems with a primary focus on scientific applications, and no efforts were made to provide low-cost hyperspectral solutions for consumer devices.Skin Spectral Reconstruction Datasets Although datasets specifically focused on skin spectra reconstruction are limited, there have been notable contributions in this field. One such dataset is the Skin Hyperspectral Reflectance Database (SHRD) <cit.>, which includes 144 skin directional-hemispherical curves obtained through a novel hyperspectral light transport model. This dataset provides valuable insights into reconstructing hyperspectral information on human skin. Another study explores the reconstruction of hyperspectral information in mice skin <cit.>, using 26 SKH-1 hairless albino mice as a model system <cit.>. 
The researchers propose a mathematical approach to reconstruct hyperspectral data from RGB images, allowing visualization of hemoglobin content across a large skin area without the need for expensive hyperspectral imaging systems. However, these skin-spectral reconstruction datasets have limitations in terms of sample size and spatial resolution, particularly in their coverage of facial images from various angles and poses. Consequently, progress in skin-spectral reconstruction has been relatively slower compared to hyperspectral reconstruction on natural scenes or everyday objects.Natural Scenes and Everyday Objects Reconstruction DatasetsCompared to proprietary and unavailable skin-related datasets, several hyperspectral datasets on natural scenes and everyday objects have been made publicly available for the study of hyperspectral reconstruction. The CAVE dataset consists of 32 scenes with a spatial resolution of 512 × 512 pixels, 31 spectral bands ranging from 400nm to 700nm at 10nm intervals <cit.>. The HARVARD dataset includes 50 images captured under daylight illumination, with 31 spectral bands spanning from 420nm to 720nm, and a spatial resolution of 464x346 pixels <cit.>. The KAIST dataset comprises 30 hyperspectral images with a spatial resolution of 3376 × 2704 pixels and a spectral range from 420nm to 720nm <cit.>. The KAIST-depth dataset features 16 indoor scenes with a spectral range of 420nm to 680nm, 27 spectral channels, and a spatial resolution of 2824 × 4240 pixels <cit.>. The New Trends in Image Restoration and Enhancement (NTIRE) series of datasets, including NTIRE2018, NTIRE2020, and NTIRE2022, have significantly advanced spectral reconstruction from RGB images, with varying sizes, spectral resolutions, and spectral ranges <cit.>.Despite primarily focusing on natural scenes and generic objects, these publicly available datasets offer valuable resources, including facilitating the development of pre-trained models, that can be extended to skin spectral reconstruction tasks.§ HYPER-SKIN DATA CURATION AND PREPARATION This section outlines the methodology we employed for collecting facial skin data and provides a detailed description of our carefully curated Hyper-Skin dataset. §.§ Data CollectionThe data collection process was conducted carefully, taking into account the setup of devices and recruitment of participants while ensuring adherence to the university's research ethics protocol. We successfully recruited 51 participants who contributed a total of 306 hyperspectral data.To maintain the privacy and sensitivity of the human subjects involved, we have implemented a credentialization procedure to access the dataset. Interested users will be required to digitally sign an End User License Agreement (EULA) online, which outlines the terms and conditions for using the dataset, including provisions for using only the authorized image in future publications. Detailed instructions for requesting the dataset will be publicly available in our GitHub repository, where users can find a digital EULA form to facilitate the data access request. 
Once the EULA form is signed and submitted, users will receive a secure link via email to download the data within 24 hours. Data Acquisition Devices and Setup The Hyper-Skin dataset was obtained using a Specim FX10 camera, covering 448 spectral bands from 400nm to 1000nm. Considering multiple factors, including participant safety, image quality, and spectral resolution, we opted to use a pushbroom camera rather than the Liquid Crystal Tunable Filter (LCTF) system used by <cit.>. The camera was moved using a customized scanner for precise scanning, as shown in Figure 1. The distance between the camera and the face was set at 40cm, providing a spatial resolution of 1024×1024 pixels. The scanner and camera were controlled by a computer running LUMO recorder software. Further setup details are available in the supplementary material. With a frame rate of 45Hz for one line, it took approximately 22.7 seconds to capture all 1024 lines. To minimize artifacts from line scanning, participants used a chin rest for stability. Halogen lamps illuminated the scene across the visible to near-infrared spectrum. Since ensuring participant safety was a top priority, particularly for the eyes, the illumination level of the halogen lamps was carefully adjusted following the manufacturer's advice to prevent any risk to participants' eyes. Using the Specim FX10 camera resulted in high-quality images, setting our dataset apart from the mentioned dataset <cit.>, which contains noisy and blurry images that affect skin texture visibility. Differing from the CMU dataset <cit.>, which features a 10nm step size for 65 bands, our dataset encompasses both the VIS and NIR spectrum with finer resolution. Data Acquisition Process Participants were recruited through online forums and email advertisements, and their participation involved signing an informed consent form in accordance with the human research ethics protocol. The approved ethics protocol can be found in the supplementary materials. During the data acquisition process, participants were seated on a stool and asked to rest their face on a chin rest while maintaining stillness. Initially, participants were instructed to have a neutral facial expression, and three face images were captured from different viewpoints (front, left, and right) by rotating the chin rest. This process was then repeated with participants instructed to smile. A total of six images were collected for each participant. Throughout the camera scanning, a halogen light remained on. It is worth noting that even with minimal participant movement, slight shifting may occur as the FX10 camera scans line by line. To ensure high-quality images, the captured images were manually inspected by the investigator, and if any shifting was observed, the image was retaken until satisfactory results were achieved. Throughout the entire process, participant anonymity and confidentiality were strictly maintained. Participant Demographics and Cosmetology Condition Our data collection campaign attracted 51 participants; most are in their early 20s and 30s, with a smaller representation from other age groups (10s, 40s-50s). Male participants slightly outnumbered females, potentially due to the gender distribution in the Department of Electrical and Computer Engineering. The majority of participants identified as Asian, with a smaller number identifying as European or Latino.
To improve the generalizability of our findings, we have applied for an extension of our research ethics protocol to conduct another data collection next year, aiming to include a more diverse sample. §.§ Data Preparation The Hyper-Skin dataset was created by collecting RAW hyperspectral data, which were then radiometrically calibrated and resampled into two separate 31-band datasets. One dataset covers the visible spectrum from 400nm to 700nm, while the other dataset covers the near-infrared spectrum from 700nm to 1000nm. Additionally, synthetic RGB and Multispectral (MSI) data were generated, including RGB images and an infrared image at 960nm. The Hyper-Skin dataset consists of two types of data: (RGB, VIS) and (MSI, NIR), offering different skin analysis capabilities. The visible spectrum data allows for the analysis of surface-level skin characteristics, such as melanin concentration, blood oxygenation, pigmentation, and vascularization.On the other hand, the near-infrared spectrum data enables the study of deeper tissue properties, including water content, collagen content, subcutaneous blood vessels, and tissue oxygenation. As summarized in Table <ref>, by providing these two distinct ranges of hyperspectral data, the Hyper-Skin dataset caters to different needs in skin analysis and facilitates comprehensive investigations of various skin features. Data Preprocessing We applied radiometric calibration on the RAW hyperspectral data to extract spectral reflectance information. This involved capturing a white reference image, representing a spectrally neutral surface with consistent reflectance values across all bands. A dark reference image was also obtained by closing the camera lens during capture. For precise calibration, selecting an appropriate white reference was crucial. After consultation with the camera vendor, we opted for cost-effective Teflon instead of Spectralon panels, as it provided satisfactory spectral response. The preprocessing steps included subtracting dark reference values to eliminate noise, and dividing by white reference values to normalize and convert data to reflectance values, yielding the desired spectral reflectance data. RGB and MSI data Generation The raw hyperspectral cube with 448 bands was resampled into two sets of 31-band data using SciPy's interpolation function. This downsampling to 31 bands is in line with existing practices in hyperspectral reconstruction studies, similar to the CAVE and NTIRE2018-2022 datasets. It strikes a balance between data richness and size. This approach retains hyperspectral differentiation and computational efficiency for analysis, making the data more accessible compared to the 448-band dataset, which exceeds 1TB in size. While the complete 448-band dataset is substantial in terms of size, we are prepared to provide it upon specific request. The availability of the 31-band data addresses data transfer constraints, while the backup 448-band data ensures comprehensive access to the dataset.For realistic RGB data generation, we adopted the HSI2RGB simulation pipeline based on ideal color-matching functions as outlined in <cit.>. Our emulation of consumer camera-captured images incorporates 28 camera response functions from <cit.> and <cit.>, encompassing various cameras like DSLR and smartphones. Further details on the measurement setup and gathering camera spectral sensitivity information can be found in <cit.>. 
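To make the preparation steps above concrete, the following sketch shows how the 448-band cube could be resampled to 31 bands and how a synthetic RGB image could be generated from a camera response function in Python. It is an illustration only: the interpolation settings, the normalisation, and the variable names (e.g., wl_src, response) are assumptions and are not taken from the paper's actual code.

import numpy as np
from scipy.interpolate import interp1d

def resample_bands(cube, wl_src, wl_target):
    # Resample a hyperspectral cube (H, W, B) from its source band centres
    # wl_src (in nm) onto a coarser set of band centres wl_target (e.g., 31 bands).
    f = interp1d(wl_src, cube, axis=-1, kind="linear",
                 bounds_error=False, fill_value="extrapolate")
    return f(wl_target)

def synthesize_rgb(cube, response):
    # Project a hyperspectral cube (H, W, B) onto three channels using a camera
    # response function sampled at the same B wavelengths, shape (B, 3).
    rgb = np.tensordot(cube, response, axes=([-1], [0]))   # (H, W, 3)
    return rgb / (rgb.max() + 1e-12)                        # crude normalisation

# Example with random stand-in data (a real cube would be 1024 x 1024 x 448).
wl_src = np.linspace(400, 1000, 448)
cube = np.random.rand(64, 64, 448)
vis31 = resample_bands(cube, wl_src, np.linspace(400, 700, 31))   # VIS set
nir31 = resample_bands(cube, wl_src, np.linspace(700, 1000, 31))  # NIR set
rgb = synthesize_rgb(cube, np.random.rand(448, 3))                # stand-in response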
While we do possess actual RGB data captured by smartphone sensors, their current utility is constrained by alignment quality issues stemming from variations in camera models, viewing angles, and overlapping fields of view. In pursuit of maintaining high-quality aligned pairs, we have chosen to provide synthetic RGB images that are perfectly aligned with their hyperspectral counterparts. This approach aligns with existing practices such as those in the NTIRE2018-2022 challenges. Data Description We intentionally chose 4 participants from the total of 51 to form the testing data. Each participant contributed 6 images, covering 2 facial expressions and 3 face poses. This selection ensured a comprehensive representation of facial poses and expressions in the testing dataset. The choice of these 4 participants was deliberate, based on their explicit consent for image use in publications, adhering to ethical standards. The remaining data was exclusively used for offline training. Collecting diverse participant images encompassed a variety of natural facial poses and expressions observed in selfies. Opting for a participant-specific approach, rather than random partitioning, prevents data overlap within the same participant in the training set, reducing potential bias. This strategic participant selection safeguards dataset integrity and subsequent analysis. § EVALUATION AND BENCHMARKS This section discusses the benchmark design for the facial skin-spectra reconstruction task and then presents the experimental results from both the spatial and spectral domains. §.§ Facial Skin-spectra Reconstruction The facial skin-spectra reconstruction task focuses on reconstructing the hyperspectral cube of the facial skin using the provided RGB image. Given a pair of RGB data represented as R ∈ℝ^w × h × c and the hyperspectral cube denoted as H ∈ℝ^w × h × C, where c ≪ C, the objective of the reconstruction task can be formulated as follows: H = f(R; Θ), where the goal is to find the function f(·; Θ) parameterized by Θ that maps the RGB data R to the hyperspectral cube H. Given the extensive research on hyperspectral reconstruction for natural scenes or everyday objects, we can leverage existing hyperspectral reconstruction models as baseline models for our specific facial skin-spectra reconstruction problem. §.§.§ Baseline Models Numerous methods have been developed to address hyperspectral reconstruction from RGB images. Interested readers can refer to the survey by <cit.> for a list of representative models that have been developed over the past two decades. For our benchmark design, we specifically consider three models, i.e., the Hyperspectral Convolutional Neural Network (HSCNN) <cit.>, the Hierarchical Regression Network (HRNet) <cit.>, and the Multi-stage spectral-wise transformer (MST++) <cit.>, which emerged as winners in the NTIRE competition series held in conjunction with CVPR in the years 2018, 2020, and 2022, respectively. §.§.§ Evaluation Metrics We consider two types of metrics: Structural Similarity Index (SSIM) for spatial evaluation and Spectral Angle Mapper (SAM) for spectral evaluation. Our emphasis lies in assessing the facial skin spectra reconstruction in human subjects, excluding the background image from analysis.
This approach allows us to precisely gauge physiological properties like melanin and hemoglobin concentrations, essential for effective hyperspectral skin analysis. Let H ∈ℝ^w × h × C represent the ground truth hyperspectral cube, and H̃∈ℝ^w × h × C denote the reconstructed cube, where C = 31 for the 31-band data. In order to focus the evaluation specifically on the facial skin-spectra components within the hyperspectral cube H, we can utilize a mask M ∈ [0,1]^w × h to exclude the background during the assessment. The mask M(i, j) = 1 indicates that the pixel at location (i, j) corresponds to the human subject, while M(i, j) = 0 signifies the background pixels that should be discarded in the evaluation process. Let H_s ∈ℝ^w × h × C be the resulting matrix that has all the background removed. To obtain H_s, we compute the element-wise multiplication between the mask M and H at band k, i.e., H_s(:, :, k) = H(:, :, k) ⊙ M. This element-wise multiplication is repeated for all channels of H to obtain the final channel-wise masked matrix H_s. Spatial Evaluation The evaluation from the spatial domain focuses on the quality of the reconstructed cube at each band in terms of spatial similarity. For this, we use SSIM to compute the spatial similarity between the ground truth and reconstructed HSI. Let h_s^(P) and h̃_s^(P) be the patches of the ground truth and reconstructed HSI, respectively; then the SSIM between the two patches can be described as follows: SSIM(h_s^(P), h̃_s^(P)) = l(h_s^(P), h̃_s^(P))^α c(h_s^(P), h̃_s^(P))^β s(h_s^(P), h̃_s^(P))^γ, where α, β and γ are the weighting parameters for the luminance, contrast, and structural comparison functions. For the detailed formulation of each component, please refer to <cit.>. To account for mask-edge effects, if a patch is only partially covered by the mask, SSIM calculations consider only the masked pixels. This approach focuses evaluation on relevant information and minimizes the influence of background pixels on SSIM scores. However, in cases where a patch is entirely background, division by zero occurs in the SSIM calculation. To avoid mathematical errors, such patches are excluded from evaluation. This ensures meaningful and reliable SSIM scores for informative patches within the mask. Spectral Evaluation On the other hand, the spectral domain evaluation aims to assess the accuracy of the spectral signature in the reconstructed cube at specific pixel positions. For this, we used SAM to compute the cosine angle between each spectra vector of the ground truth and reconstructed HSI. Let 𝐡_ij = H(i, j, :) ∈ℝ^C be a C-dimensional spectra vector of the ground truth hyperspectral cube at location (i, j), and let 𝐡̃_ij be the corresponding spectra vector of the reconstructed cube; then the SAM between these two spectra vectors is computed as follows: SAM(𝐡_ij, 𝐡̃_ij) = cos^-1( ∑_k=1^C 𝐡_ij(k) 𝐡̃_ij(k) / ( √(∑_k=1^C 𝐡_ij(k)^2) √(∑_k=1^C 𝐡̃_ij(k)^2) ) ), where 𝐡_ij(k) denotes the value at band k of the spectra vector located at (i, j) of the HSI cube. Contrary to the perception that the cosine angle restricts similarity, it is important to clarify that SAM computes the inverse cosine, yielding a range of values between 0 and π (≈3.142). This metric is chosen to quantify the spectral alignment between two spectra in an n-dimensional space, where the dimensionality corresponds to the number of spectral bands <cit.>. A lower SAM value signifies a better alignment with the reference (ground truth) spectrum.
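As an illustration of the masked evaluation described above, the following sketch computes the mean SAM over skin pixels only. It is not the authors' evaluation code; the averaging convention (mean over all masked pixels) and variable names are assumptions.

import numpy as np

def masked_sam(H, H_rec, mask, eps=1e-8):
    # Mean spectral angle (in radians) over pixels where mask == 1.
    # H, H_rec : (h, w, C) ground-truth and reconstructed cubes
    # mask     : (h, w) binary mask, 1 = human subject, 0 = background
    skin = mask.astype(bool)
    gt, rec = H[skin], H_rec[skin]                      # (N, C) skin spectra
    num = np.sum(gt * rec, axis=1)
    den = np.linalg.norm(gt, axis=1) * np.linalg.norm(rec, axis=1) + eps
    angles = np.arccos(np.clip(num / den, -1.0, 1.0))   # per-pixel SAM
    return angles.mean()

# Tiny example with random stand-in data.
H = np.random.rand(8, 8, 31)
H_rec = H + 0.05 * np.random.rand(8, 8, 31)
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1
print(masked_sam(H, H_rec, mask))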
§.§ Implementation Details and Experimental Results We conducted the experiment using the two datasets prepared in Section <ref> and evaluated the performance of baseline models (MST++, HRNet, and HSCNN) <cit.>, which were trained with the NTIRE2022 dataset licensed under the GNU General Public License v3.0 <cit.>. We then proceeded to re-train these models with our Hyper-Skin dataset for 100 epochs using an RTX5000 GPU and 16-bit memory. To address memory constraints, we randomly cropped the input size to 128×128 during training, while the entire 1024×1024 image was used for testing. The hyperparameters used for re-training the models remained the same, except for reducing the number of HSCNN blocks to 20 for the (MSI, NIR) experiments and adjusting the number of input channels to 4 for the (MSI, NIR) pair of data. The Adam optimizer with a learning rate of 0.0004 was employed for training all three models. §.§.§ Spatial Domain Table <ref> presents the spatial evaluation results using SSIM for two types of data: (RGB, VIS) and (MSI, NIR). The evaluation was conducted with and without the background, where the background was removed using a mask to focus on the human subject. Figure <ref> provides a result illustration with pre-trained HSCNN. The pre-trained models were only applied to the (RGB, VIS) data pair since they were trained on the NTIRE dataset, which focuses on hyperspectral reconstruction in the visible spectrum. The (MSI, NIR) data pair requires an additional channel, which is not supported by the pre-trained models, thus they were not used for evaluation. After re-training the models with our Hyper-Skin dataset, significant improvements in performance were observed, as indicated in Table <ref>. Comparing the results with and without the background, it is evident that most of the reconstruction issues are associated with the background. For applications that focus on human skin, such as hyperspectral skin analysis, the performance of skin reconstruction is crucial. The results demonstrate that the reconstruction of the skin area exhibits better performance, supporting the potential for low-cost hyperspectral skin solutions, such as reconstructing the facial skin-spectra cube from smartphone selfies. §.§.§ Spectral Domain Table <ref> shows SAM-based spectral evaluation results for two data types: (RGB, VIS) and (MSI, NIR), with and without background. SAM values range from π/2 down to 0, with values closer to 0 indicating better reconstruction. After re-training the models using our Hyper-Skin dataset, both data types showed significant performance improvements, as illustrated in Figure <ref>. Note that the performance of the models was generally lower when the background was present compared to when it was removed. This highlights the impact of background interference on the reconstruction results. These findings emphasize the potential of our dataset in enhancing hyperspectral reconstruction for applications related to skin analysis and other fields that require accurate spatial information. Note that the re-trained models also showed good performance on the (MSI, NIR) data. We further verify the reconstruction performance on a real RGB image taken by a smartphone. As shown in Figure <ref>, the reconstruction model is capable of estimating the skin spectral information. Due to page constraints, the visualizations of these results are provided as supplementary materials.
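The re-training setup described above (aligned random 128×128 crops, Adam with learning rate 4e-4) can be sketched schematically as follows. The snippet is illustrative only: the placeholder convolutional model, the L1 loss, and the stand-in data are assumptions that replace the actual baseline networks (MST++, HRNet, HSCNN) and their training pipelines.

import torch

def random_crop_pair(rgb, hsi, size=128):
    # Sample an aligned size x size crop from an (C, H, W) RGB/HSI pair.
    _, H, W = rgb.shape
    top = torch.randint(0, H - size + 1, (1,)).item()
    left = torch.randint(0, W - size + 1, (1,)).item()
    sl = (slice(None), slice(top, top + size), slice(left, left + size))
    return rgb[sl], hsi[sl]

# Placeholder model: 3 RGB channels in, 31 spectral bands out (not a real baseline).
model = torch.nn.Conv2d(3, 31, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)

# One schematic training step on a random stand-in pair (real pairs are 1024 x 1024).
rgb = torch.rand(3, 1024, 1024)
hsi = torch.rand(31, 1024, 1024)
rgb_c, hsi_c = random_crop_pair(rgb, hsi)
pred = model(rgb_c.unsqueeze(0))
loss = torch.nn.functional.l1_loss(pred, hsi_c.unsqueeze(0))  # loss choice is an assumption
optimizer.zero_grad()
loss.backward()
optimizer.step()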
§ LIMITATIONS, ETHICAL CONSIDERATIONS AND SOCIETAL IMPACT We worked closely with the university's ethical review board to ensure the inclusivity, privacy, and ethical integrity of the Hyper-Skin dataset. This ensured responsible data usage for skin analysis research. Our data collection followed strict participant consent procedures, including providing participants with detailed project information before obtaining written and verbal consent. The ethics review protocol and consent procedures are provided as supplementary materials. Limitations Our dataset's representativeness might be limited, potentially not capturing the full diversity of skin types, tones, and conditions across various populations. This could introduce biases and hinder model generalization. Additionally, while our deep neural network-based approach leverages the dataset for latent priors, it's constrained by dataset limitations. Novel cosmetology conditions not covered in training might affect model performance due to distribution shifts, a common challenge in machine learning <cit.>, particularly in medical datasets with skewed distributions <cit.>. To address this, we've secured ethical approval to expand data collection, targeting diverse participants and cosmetology conditions to enhance practical utility.Ethical Considerations To ensure ethical compliance throughout the data collection process, we implemented measures to anonymize personally identifiable information, such as assigning a subject ID instead of using participants' real information. Robust security measures were also put in place to protect sensitive data from unauthorized access or misuse. We strictly followed guidelines provided by the research ethics board and obtained informed consent from every participant, respecting their autonomy and ensuring they understood how their data would be used.Societal Impact Our Hyper-Skin dataset revolutionizes skin analysis by providing affordable and accessible solutions. With its ability to reconstruct skin spectral properties and estimate parameters like melanin and hemoglobin concentration, it empowers researchers and practitioners to develop low-cost skin analysis solutions directly for consumers. The dataset's societal impact extends to individuals monitoring their skin's well-being and skincare companies developing personalized products and innovative AI models. By driving advancements in skin analysis, the Hyper-Skin dataset benefits individuals, professionals, and the skincare industry as a whole.§ CONCLUSION This paper contributes to the field of hyperspectral skin analysis by providing a comprehensive collection of facial skin hyperspectral data, named Hyper-Skin dataset.The novelty of this dataset lies in its spectral coverage in the VIS and NIR spectrum, offering potential applications in skin monitoring and customized cosmetic products at the consumer's fingertips. It serves as a valuable resource for algorithm development and evaluation, with future directions including dataset diversification, advanced analysis techniques, and interdisciplinary collaborations, inviting researchers and practitioners to contribute to the advancement of hyperspectral skin analysis for human well-being. IEEEtran | http://arxiv.org/abs/2310.17911v1 | {
"authors": [
"Pai Chet Ng",
"Zhixiang Chi",
"Yannick Verdie",
"Juwei Lu",
"Konstantinos N. Plataniotis"
],
"categories": [
"eess.IV"
],
"primary_category": "eess.IV",
"published": "20231027061035",
"title": "Hyper-Skin: A Hyperspectral Dataset for Reconstructing Facial Skin-Spectra from RGB Images"
} |
Experimental Physics, Bielefeld University, Universitätsstr 25, 33615 Bielefeld, Germany Research Center Future Energy Materials and Systems (RC FEMS), University of Duisburg-Essen,Forsthausweg 2, 47057 Duisburg, Germany Interdisciplinary Centre for Advanced Materials Simulation (ICAMS), Ruhr-University Bochum, Universitätsstr 150, 44801 Bochum, GermanyInterdisciplinary Centre for Advanced Materials Simulation (ICAMS), Ruhr-University Bochum, Universitätsstr 150, 44801 Bochum, GermanyExperimental Physics, Bielefeld University, Universitätsstr 25, 33615 Bielefeld, Germany Research Center Future Energy Materials and Systems (RC FEMS), University of Duisburg-Essen,Forsthausweg 2, 47057 Duisburg, Germany Center for Nanointegration Duisburg-Essen (CENIDE), University of Duisburg-Essen, Forsthausweg 2, 47057 Duisburg, GermanyInterdisciplinary Centre for Advanced Materials Simulation (ICAMS), Ruhr-University Bochum, Universitätsstr 150, 44801 Bochum, GermanyCenter for Interface-Dominated High Performance Materials (ZGH), Ruhr-University Bochum, Universitätsstr 150, 44801 Bochum, Germany Although Niobium is a well characterized material it still shows some anomalies that are not yet understood. Therefore we revisit its metastable phases using density functional theory. First, we systematically compare energies and ground state volumes of chosen crystal structures and discuss possible transition paths to the bcc ground state structure and the energy landscape for tetragonal distortions. Furthermore, we discuss their stability by means of their phonon spectra and vibronic free energies. Second we analyze the impact of tantalum impurities on phase stability.Surprisingly we find new aspects of the energy landscape of the material which have been overlooked so far: A new local energy minimum on the bcc to omega transition path, a flat energy landscape with respect to uniaxial strain along [111] and a considerable stabilization of the σ phase by Ta substitution. 77.80.bg, 77.80.DjAb initio study of transition paths between (meta)stable phases of Nb and Ta-substituted NbAnna Grünebohm January 14, 2024 ================================================================================================ § INTRODUCTION Niobium (Nb) is one of the best studied elemental metals. Nevertheless, and surprisingly, several properties of this material remain unexplained. For instance, it has the highest superconducting transition temperature of all elements at normal pressure,<cit.> but there is still debate about the character of its superconductivity.<cit.> Nb is one of the few transition metals that exist in a bcc ground state. However, depending on the boundary conditions, there are many different metastable phases, especially under high pressure <cit.> or in nanostructures, <cit.> but their relative stability and potential transition pathways are not yet fully understood.Remarkably, Bollinger et al. <cit.> experimentally found a change of the slope in the linear thermal expansion coefficients of the bcc state by high-resolution calorimetry at 208 K and related this to a potential martensitic phase transition. However, the signatures of this phase transition were smaller than the detection limit of x-ray diffraction, and hence the nature of this phase transition remained poorly understood.Martensitic phase transitions are only possible if there is a diffusionless transition path with a moderate energy barrier connecting both the ground state structure and the metastable phase. 
Especially in metals, such phase transitions with their complex interplay of phononic, electronic and microstructural properties have been a rich source of research for decades,<cit.> but the question of the driving force of such a transition has not yet been fully resolved.In Nb, however, some of the typical precursors that usually occur at martensitic phase transitions were evidenced. This includes the occurrence of Kohn anomalies,<cit.> anomalies in the elastic constants for different pressure ranges, <cit.> and Fermi surface nesting, producing a Van Hove singularity in the electronic density of the states closed to the Fermi level.<cit.>Potential metastable states that may be involved in martensitic phase transformations have been investigated for Nb in several theoretical studies, as this element is often used as prototype material to test simulation methods.<cit.> The best studied metastable phase of Nb is fcc. <cit.> Other experimentally found metastable phases include hexagonal-ω (C32) <cit.> and Pnma.<cit.>The Pnma phase has been found in experiments.<cit.>Moreover, the ω structure is observed in Nb under high pressure at temperatures around 77 K<cit.> and in thin films.<cit.> These phases have been characterized by density functional theory (DFT)<cit.> and, in addition, a metastable ω-like structure with vacancies.<cit.> Further studies in literature used DFT to calculate the energies of bcc, hcp and several topologically complex phases (TCP) such as A15, Laves phases and σ structures.<cit.> In particular, the A15 phase in Nb-based intermetallic phases, such as Nb_3Sn, is famous for the occurrence of both martensitic phase transitions,<cit.> the associated instabilities,<cit.> and the occurrence of superconductivity. It is therefore worth examining this TCP phase in elemental Nb in more detail.Diffusionless transition paths of martensitic transformations among these phases have been studied, particularly the transition from bcc to fcc (Bain path),<cit.> and from bcc to hcp.<cit.>Complex indirect transition paths have been reported from hcp and bcc to intermetallic phases, such as Laves phases,<cit.> and by means of kinetic Monte Carlo simulations from bcc to A15.<cit.> For Nb, however, other metastable phases are lower in energy and the possible transition paths are yet unknown.A peculiarity of Nb experimental works is that Nb crystals are usually contaminated with Ta. The reason is that Nb and Ta have common natural occurrences and that their separation is costly. Both elements are chemically similar, including their crystal structure.<cit.> Similar metastable phases occur in Nb and Ta, e.g., Pnma.<cit.> Such additions of Ta have been found to lower the formation energy of metastable TCP phases of Nb <cit.> while the influence on other known metastable phases of Nb is still unknown.Within this work, we systematically revisit and compare the energies of metastable phases and the details of the diffusionless transition paths connecting these to bcc, both for Nb and the solid solution of Nb_(1-x)Ta_x. We find that Pnma, A15 and σ phase are lower in energy than the bct phase and also the energy barrier for the bcc to Pnma transition is 0.2eV lower than the transition barrier from bct to fcc. Furthermore, the bcc to ω path shows an additional local energy minimum if extrapolated to larger values of strain. 
This configuration turns out to correspond to an easy deformation, but it is not stable against the relaxation to a bcc structure strained along [111]. § METHOD §.§ Technical details Density-functional theory (DFT) simulations are performed with the abinit package.<cit.> All calculations are done with primitive cells and Ta substitution is determined with the smallest possible cell up to a size of 2× 2 × 2. The results are determined with the generalized gradient approximation (GGA) using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional <cit.> in connection with optimized norm-conserving Vanderbilt pseudopotentials (ONCVPSP) from PseudoDojo <cit.> with the valence electron configuration 4d^45s^1 for Nb and 4f^14 5d^3 6s^2 for Ta. The stopping criterion for the self-consistent calculations is a difference in total energy of 2.72· 10^-7 eV. The volume and ionic positions of all structures are relaxed simultaneously with a tolerance on the maximal force of 2.5· 10^-3 eV/Å and a smearing parameter for the total energy of 95eV, considering a temperature of smearing of 0.272eV, which is equivalent to 3.15K. The k-mesh is 14× 14 × 14 and a cutoff energy of 1088.45eV results in an energy convergence of 0.172eV. Density-functional perturbation theory (DFPT) is used to determine phonon spectra and the phonon density of states. As an insufficient sampling of q-space for bcc Nb results in the extrapolation to imaginary phonon modes,<cit.> we chose a grid of 8× 8 × 8 which is sufficient to correctly reproduce the experimental and theoretical data from literature. Furthermore, we raise the threshold to a difference in potentials up to 10^-18, the k-mesh to 16× 16 × 16 and decreased the temperature of smearing to 0.136eV (corresponds to 1.57K). The k-mesh and q-grid for the other structures are scaled according to their lattice vectors. The phonon contribution to the Helmholtz free energy F_phon is calculated according to Lee et al.<cit.> as F_phon(T) = 3nN k_B T ∫_0^ω_L ln(2 sinh(ħω/(2 k_B T))) g(ω) dω, with the number of atoms per unit cell n, the number of unit cells N, the Boltzmann constant k_B and temperature T. ω_L is the largest frequency in the phonon spectra. Without anharmonic effects and thermal expansion, the total free energy F_total(T) is approximately given as F_total(T)=E_tot(T=0 K)+F_phon(T). For comparison we compute selected properties also with the VASP package <cit.> using the high-throughput environment from Ref. hammerschmidt_topologically_2013. We use the PBE functional <cit.> as in the abinit calculations but the projector-augmented wave method <cit.> and pseudo-potentials with s semicore states for Nb and p semicore states for Ta. With a planewave cut-off energy of 500eV and a k-point density of 0.018^3 we achieve similar convergence of the total energy differences as in our calculations with abinit. Besides bcc, bct, ω, Pnma, A13, A15, and σ phases shown in Fig. <ref>, we also consider the Laves phases C14, C15, C36 and the structures μ and χ. Ordered binary structures based on fcc or hcp are excluded due to the expected comparably high formation energy <cit.>. For the considered phases, all occupations of Nb and Ta on the Wyckoff sites are included in the DFT calculations, e.g., 2^5=32 DFT calculations for the σ phase with five Wyckoff sites. We assess the relative stability of the different structures and stoichiometries based on the heat of formation Δ H_f = (E_tot - N_Nb E_Nb - N_Ta E_Ta)/N, with N_Nb (N_Ta) and E_Nb (E_Ta) the number of Nb (Ta) atoms and their energies in the bcc phases.
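The two quantities defined in this section can be evaluated numerically as in the following sketch. This is an illustration rather than the workflow used for the paper: the frequency grid, the DOS normalisation (∫ g dω = 1), and the unit handling are assumptions.

import numpy as np
from scipy.constants import h, c, k, eV
from scipy.integrate import trapezoid

def phonon_free_energy(T, nu, g, n=1, N=1):
    # Harmonic vibrational free energy F_phon(T) in eV.
    # nu : phonon frequencies in cm^-1 (the point nu = 0 is skipped to avoid log(0))
    # g  : phonon density of states on that grid, normalised to 1
    pos = nu > 0
    e_ph = h * c * 1e2 * nu[pos]              # hbar*omega in J for wavenumbers in cm^-1
    integrand = np.log(2.0 * np.sinh(e_ph / (2.0 * k * T))) * g[pos]
    return 3 * n * N * k * T * trapezoid(integrand, nu[pos]) / eV

def heat_of_formation(E_tot, n_nb, n_ta, e_nb_bcc, e_ta_bcc):
    # Heat of formation per atom, in the same energy units as the inputs.
    return (E_tot - n_nb * e_nb_bcc - n_ta * e_ta_bcc) / (n_nb + n_ta)

# Example with a fictitious Debye-like DOS up to 250 cm^-1.
nu = np.linspace(0, 250, 500)
g = nu**2
g /= trapezoid(g, nu)
print(phonon_free_energy(300.0, nu, g))       # F_phon per atom at 300 K, in eV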
For pure systems, the heat of formation reduces to the energy difference of a structure to the bcc ground state, i.e., Δ E in Tab. <ref>. §.§ Transformation paths We study potential martensitic transition paths from bcc to fcc, ω and Pnma using a linear interpolation of the lattice constants a_i to the final state as a_i(Δ) = (1-Δ)· a_i^initial + Δ· a_i^final, with Δ varying from 0 to 1, see Fig. <ref>. The bcc to fcc transition is fully characterized by the lattice constants (see Fig. <ref> (a)). For the bcc to Pnma and ω transition paths, we additionally interpolate the internal atomic degrees of freedom linearly. The interpolation along the bcc-ω path (Fig. <ref> (c)) corresponds to an anti-parallel shift of two of three atoms along the z-direction from Δ_z=0 for bcc to Δ_z=± 1/6 for the ω phase. Only in the ω phase are both these atoms in the same z-plane of the hexagonal lattice (i.e. the same [111] plane of bcc) and the symmetry is P6/mmm, while the symmetry is reduced to P3̅m1 on the path.<cit.> Note that the transition from bcc to Pnma contains two unit cells of bcc, and a_bcc consequently must be doubled, see Fig. <ref> (b). Two atoms shift by Δ_x, one atom shifts by Δ_x and Δ_z1, and another by Δ_z2. For the relaxed Pnma structure we find values of Δ_x=0.01, Δ_z1=-0.2, Δ_z2=0.04, respectively. The atomic environment of the Pnma phase is thereby similar to that of the bcc state, but the 8-fold coordination (with a distance of 2.86 Å) splits into four nearest neighbours with an average distance of about 2.80 Å and four nearest neighbours with an average distance of 2.97 Å. Further, we extrapolate the range of Δ to smaller or larger values to explore the energy landscape around the given states and analyse the A15 phase under tetragonal distortion in the range of 0.8<c/a<1.2. § RESULTS §.§ Comparison of meta-stable phases We consider all Nb structures listed in Tab. <ref>. Figure <ref> (a) shows their energy differences to the ground state, Δ E, as a function of volume V per atom. In agreement with literature we find bcc as the ground state structure, while fcc is the least favourable of all tested configurations and about 330meV higher in energy. Still, this phase has been observed in experiments, underlining the importance of meta-stable Nb phases with lower energies. Although the energy of the fcc phase is reduced by tetragonal distortion to the bct phase, the A15 and Pnma structures are even more favourable with energy differences of only about 105meV and 119meV relative to bcc, respectively. The ω, A13, and hcp phases are energetically between bct and fcc. Our calculations show that the σ phase is even more favourable, being only slightly less than 80meV above the ground state. As we discuss in Sec. <ref>, the bcc to ω transition path shows an additional local minimum, which we have added as ω' for completeness. This configuration shows a minimum of the energy vs. volume curve only 121meV higher in energy than bcc. The same data is reduced to the minima of the E(V) curves in Fig. <ref> (b). The energy of the metastable phases scales approximately linearly with the volume. Although Pnma and ω have been predicted as high-pressure phases, their ground state volumes without pressure are larger than the ground state volume of bcc. The bcc structure is not only the energetic ground state for its equilibrium volume but also in the studied volume range of 15 to 22 Å^3.
Under lattice expansion between 21 Å^3 and 22 Å^3 the energy differences between bcc and the σ and A15 phases are, however, systematically reduced. Furthermore, in this volume range the ω and A13 phases as well as the Pnma and bct phases are close in energy. On the other hand, none of the phases comes close to bcc for reduced volumes, while the energy differences between the ω', Pnma, A15 and σ phases are reduced. §.§ Transformation paths To depict possible diffusionless phase transitions of Nb, we study the energy landscape for a continuous deformation from bcc to meta-stable structures. The energy maxima on these paths give an upper bound for the energy barriers of the transitions. Note that the real energy barriers can be smaller due to their dependence on temperature or more complex transition paths. As a reference we start with the classical Bain path from bcc to fcc (solid lines in Fig. <ref>). In agreement with literature we find that fcc is a local maximum on the transformation path and that the second minimum, the bct structure with c/a=1.768, is 143meV higher in energy than bcc, see Tab. <ref>. In the following we restrict ourselves to the low energy structures shown in Fig <ref>: (a) bcc, (d) Pnma and (f) ω. The dotted lines in Fig. <ref> show the bcc to Pnma path. On this path, only the two energy minima related to bcc and Pnma occur, and both states are separated by an energy barrier of 234meV which is about 89meV lower than the fcc state. Surprisingly, the ω phase is not even a local minimum of energy on the bcc to ω path but rather a local energy maximum. By extrapolation of Δ we find an energy minimum (ω') with Δ z=0.28 and c/a=0.832 only 121meV higher in energy than the bcc ground state. Note that this anomaly has not been found for a variation of Δ z with fixed tetragonal ratio <cit.> and the structure differs from the modulated ω structure with vacancies discussed in the supplementary material from Ref. lee_stress-induced_2022. Although this monoclinic configuration also shows the typical energy-volume curve of a meta-stable state, see Fig. <ref>, the atomic positions are not protected against atomic relaxation by symmetry. Only the ω phase with P6/mmm symmetry is a meta-stable state, while the atomic positions relax to the bcc-like structure with z=0 for all other initial values of z. The distorted bcc state at c/a=0.832, which we call bcc* in the following, is only 85meV higher in energy than bcc and may thus be a favourable distortion of the bcc phase. For this reason we sample the energy for the tetragonal distortion of the bcc* to the bcc state as shown in Fig. <ref>. Our calculations show that the energy penalty for the distortion along the [111] direction is considerably smaller than for the bcc to fcc path. Given the lack of information on transition paths among the metastable states, we additionally verified if tetragonally distorted structures are possible for other metastable phases of Nb. Particularly, the cubic A15 phase is low in energy and we also study its tetragonal distortion, see Fig. <ref>. But even with a fine resolution of Δ c/a=2· 10^-4 we could not observe any additional local minima or higher-order extrema under tetragonal distortion. The increase of energy with tetragonal distortion is similar to the classical Bain path. §.§ Phonon spectra For a comprehensive picture of the low-energy phases, we analyze the phonon spectra of bcc, bcc* (bcc phase distorted along [111] with c/a=0.832), ω' and Pnma.
While the phonon spectra of the bcc and Pnma structures calculated by us correspond to those published in literature <cit.>, we are not aware of calculated phonon spectra of (distorted) ω phases; these are summarized in Fig. <ref> (a). As a reference we also added the phonon spectra of bcc in the same representation. Indeed, the ground state, bcc, only shows stable phonon modes. However, there are indications that the structure is close to an instability. Particularly, we can reproduce the Kohn anomaly at (0.142,-0.142,0.142) predicted by Landa et al.<cit.> Note that in the representation in Fig. <ref> (a) this point is located on the Γ→ M path. Furthermore, we can reproduce the decrease of the transversal acoustic branch in the phonon spectra at (1/3,0,1/3) in the [111] direction associated with the bcc to ω transition.<cit.> The metastable Pnma phase also shows no soft phonons, see Fig. <ref> (a). Compared to bcc the change in slope on the Γ→ Z path is reduced. Furthermore, due to the lower symmetry of Pnma one has to distinguish the X, Y and Z directions, and for the former two we see no change in slope on the corresponding paths with the given resolution. Analogous to bcc and Pnma, also bcc* does not show soft phonon modes. Under the hexagonal distortion the change of the slope on the Γ→ M path vanishes, both for bcc* and ω', while the lowest Γ→ A branch shows a similar feature. Moreover, for all high-symmetry points except M, the transversal branches are lowered in energy if going from bcc to ω' and bcc*. As discussed in Sec. <ref>, the ω' structure, although being a local energy minimum on the bcc–ω path, is not a stable structure and thus the phonon spectrum shows negative frequencies at A (0,0,1/2). Figure <ref> (b) compares the resulting phonon density of states of all four structures normalized with the number of atoms in the system. Over a large frequency range from 125cm^-1 to 200cm^-1, the bcc phase shows the largest density of states with two pronounced peaks around 135cm^-1 and 190cm^-1. With decreasing symmetry going from bcc to bcc* and ω', the degeneracies of the modes in the [100] direction are lifted and the peaks in the DOS are broadened. The distortion of the structure to ω' furthermore results in two additional peaks at 80cm^-1 and 230cm^-1 and a low-frequency tail. Below frequencies of 125cm^-1, the ω' phase thus exhibits the highest density of states. A lifting of degeneracies in the modes in the [100] direction can also be seen for the Pnma phase. Also here, we find higher frequencies in the spectrum and less pronounced peaks in the DOS. For the Pnma phase the largest DOS is found at about 210cm^-1 and the increase of the weight of the low-frequency tail of the DOS is slightly larger than for ω'. However, none of the phases has a substantially larger phonon DOS in a suitable frequency range, and the low-frequency tails of the DOS are not sufficient to reduce the free energy and to stabilize one of the phases relative to bcc at finite temperatures. Note that we calculated the free energies within the harmonic approximation and thus higher order effects are not accounted for in this estimate. §.§ Ta-substitution In order to enable a close comparison of our results with experimental works, we additionally consider the influence of Ta on the metastable phases of Nb, the most common impurity in Nb samples for experiments. Figure <ref> compares the energy landscapes of pure Nb (black) and Ta (light green) along the classical Bain path (a) and the paths connecting bcc with Pnma (b) and with ω (c).
In all cases, the energetic ground state is the bcc state with c/a=1 and the other local energy minima are not considerably lowered by Ta. The ratio of the lattice constants c/a is smaller for Ta compared to Nb (-0.07 for bct, and -0.13 for Pnma), but slightly increased for ω' by 0.02. For bct (panel a) and Pnma (panel b), the energy barrier for the transformation is 91meV and 24meV smaller for Ta compared to Nb. However, the changes of the energy landscapes under partial substitution are small. Exemplary results for 25% Ta are added in panels (a) and (b) in dark green. Even for this large concentration, the energy differences between the pure and substituted materials are below 16meV and 4meV at the transition barrier. For the bcc to ω transition, the barrier is not smaller for Ta compared to Nb, and for both elements the structure at the second minimum is not stable against atomic relaxation. The influence of Ta on the structural stability of all phases of interest across the complete range of chemical compositions is shown in Fig. <ref> in terms of the formation energies Δ H_f. The relative stability for pure Nb is identical to the sequence of minima observed in the energy-volume curves in Fig. <ref>. The variation across the Nb-Ta chemical range is consistent with previous DFT calculations <cit.> using an LDA exchange-correlation functional, although PBE shows slightly lower formation energies Δ H_f. Comparing pure Ta to Nb, the phase sequence bcc, σ, A15, Pnma, bct and ω is still present. However, the A13, σ and A15 phases are considerably lowered in energy. The σ phase is very close to bcc, in line with the experimental characterization of β-Ta as σ phase <cit.>. The formation energies of the Pnma phase depend only weakly on the Ta concentration. For completeness, also the formation energies of the Laves (C14, C15, C36), χ and μ phases are shown in Fig. <ref>. For Nb all these phases are higher in energy than bct. There is no sizeable stabilisation of the Laves and μ phases by Ta, while the χ phase becomes more favourable than the Pnma phase. The consistently positive values of Δ H_f indicate that there is no stable ordered structure, in line with the bcc solid-solution region in the phase diagram. Thus Ta indeed reduces the energy barrier for the bcc to bct or Pnma transitions but quantitatively the effect is small. We would furthermore expect that alloying Nb with small amounts of Ta may foster the formation of A13 or σ phases, while an enhanced formation of Pnma is unlikely. § SUMMARY AND CONCLUSIONS The question of potential metastable phases in Nb was raised anew by high-resolution experimental data suggesting a martensitic phase transition.<cit.> To better understand the energy landscape of Nb, we determined the ground states for chosen metastable phases using DFT and analysed possible transition paths connecting these with the bcc ground state. We find that the metastable σ and A15 phases are lowest in energy, followed by Pnma, bct and A13. Both the bcc to Pnma and bcc to ω paths are more favourable than the more commonly discussed Bain path to fcc. Additionally, straining bcc along the hexagonal [111] direction, we find a deformed state ω' that is low in energy. Since Ta impurities are common in Nb, we also investigated the role of Ta on the energy landscape of the metastable phases. The energy barriers for the bcc to bct and Pnma transitions are reduced. In pure Ta, the σ phase is practically as low in energy as the bcc phase.
Thus, for high Ta concentrations, this is important and should be further investigated in the future. Otherwise, considering our DFT study, we suggest that effects that could come from the microstructure not detected here (like stabilization of a metastable phase by twinning) may trigger phase transitions or modify the atomic structure in large parts of an experimental sample, explaining the experimentally found martensitic phase transition. We thank Anna Böhmer and Ralf Drautz for fruitful discussion. | http://arxiv.org/abs/2310.18111v1 | {
"authors": [
"Susanne Kunzmann",
"Thomas Hammerschmidt",
"Gabi Schierning",
"Anna Grünebohm"
],
"categories": [
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.mtrl-sci",
"published": "20231027125220",
"title": "Ab initio study of transition paths between (meta)stable phases of Nb and Ta-substituted Nb"
} |
University of Groningen Groningen Netherlands 0000-0002-2272-315X [email protected] The value of neuromorphic computers depends crucially on our ability to program them for relevant tasks. Currently, neuromorphic computers are mostly limited to machine learning methods adapted from deep learning. However, neuromorphic computers have potential far beyond deep learning if we can only make use of their computational properties to harness their full power. Neuromorphic programming will necessarily be different from conventional programming, requiring a paradigm shift in how we think about programming in general. The contributions of this paper are 1) a conceptual analysis of what `programming' means in the context of neuromorphic computers and 2) an exploration of existing programming paradigms that are promising yet overlooked in neuromorphic computing. The goal is to expand the horizon of neuromorphic programming methods, thereby allowing researchers to move beyond the shackles of current methods and explore novel directions. Concepts and Paradigms for Neuromorphic Programming Steven Abreu Accepted XXX. Received YYY; in original form ZZZ ==================================================== § INTRODUCTION Computing technology is steering toward impasses, with Dennard scaling ending and Moore's law slowing down <cit.>. These impasses give rise to innovation opportunities for specialized hardware in computer architecture <cit.> as well as in software <cit.>. This `Golden Age' of innovation has led many researchers to investigate neuromorphic computers. Taking inspiration from how the brain computes has a rich history going back at least six decades <cit.>, and the recent success of deep learning has demonstrated the power of neural information processing convincingly <cit.>. The development of event-based sensors <cit.>, large-scale neuromorphic processors <cit.>, and brain-computer interfaces <cit.> indicates that neuromorphic computers will play an important role in the future of computing. An increased diversity of specialized hardware can revive old research ideas or programming paradigms on novel hardware, similar to how the GPU revived research on neural networks for machine learning <cit.>. In light of novel neuromorphic hardware, it is worth re-evaluating overlooked programming paradigms <cit.>. Neuromorphic computers take inspiration from the brain, both in the way that information is processed and in the fact that the physical dynamics of the underlying substrate are exploited for computation <cit.>. Research in neuromorphic computing is diverse and happening on many levels: different materials are investigated for basic components in novel computers <cit.>, different architectures for assembling these components into a computing system are investigated <cit.>, and different domains are considered to move beyond electronics into optical <cit.> or chemical domains <cit.>. A neuromorphic computer is composed of neurons and synapses which model biological neural networks at some level of detail, and they are often implemented directly in the physics of the device <cit.>. Although artificial neural networks (ANNs) are also often considered neuromorphic, this paper focuses on spiking neural networks (SNNs) because they offer a radically different paradigm for computing (see Section <ref>), making them an interesting topic for research on programming methods.
All this requires new theories to describe the computations in such novel devices, along with new theories and methods of programming that can make these devices useful. The former has been outlined in a recent review <cit.> whereas the latter is constrained to an as-yet limited set of neuromorphic algorithms <cit.>.In Section <ref> of this paper, concepts for a more general way of programming neuromorphic computers are analyzed and clarified. To fully harness the potential of neuromorphic computers, algorithm design is not enough. Ultimately, general programming methods must be developed to enable a large group of `neuromorphic programmers' to harness the power of neuromorphic computers for real-world problems beyond machine learning and research benchmarks <cit.>.Neuromorphic computers presently cannot be programmed in ways comparable to the rich programming methods of digital computers with instruction set architectures, high-level programming languages, and compilation hierarchies. Schuman et al. <cit.> argue that progress on neuromorphic programming requires a paradigm shift in how to think about programming. Herein, it is assumed that there may not be a single paradigm for neuromorphic programming, just as there is no single paradigm for conventional programming. Section <ref> stakes out the landscape of programming paradigms to make this body of knowledge available to the neuromorphic community and to identify promising directions for future research.§ CONCEPTS §.§ Dimensions of Computing The brain works quite differently from a digital computer <cit.>. While these differences make it challenging to use conventional programming methods, they simultaneously provide opportunities for novel programming models that are not supported by conventional computers.In the following, key differences between conventional and neuromorphic computers are outlined. Stochasticity Neurons are unreliable and noisy <cit.>, with neural spike output changing from trial to trial in identical experiments <cit.>. Yet, the brain is able to generate reliable behavior from unreliable components. This has fascinated the research community for over six decades and led to models of computing with probabilistic logic <cit.>, stochastic computing <cit.> where information is represented and processed in probability distributions, and hyperdimensional computing where high-dimensional random vectors are used for distributed data representation and computation <cit.>. Robustness The theory of digital computation can be realized robustly in physics through the strong dynamical robustness of bi-stable switching dynamics. The physics of digital computing is very robust, but the theory is brittle in that a single bit flip can lead to catastrophic failure.In contrast, the brain works reliably despite the ongoing death and re-generation of neurons. Natural systems like the brain use robust adaptive procedures to work well in unknown and changing environments <cit.>. Mechanisms that provide the physical and functional robustness that natural systems exhibit are only beginning to be understood <cit.>. Distributedness In neuromorphic systems, information representation and processing are distributed spatially and possibly also temporally. This is a classical property of neural networks <cit.> which stands in contrast to the localized information in binary transistor states and the sequential execution of elementary instructions in digital hardware. 
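As a minimal illustration of such distributed, stochastic information representation (not an example taken from the cited works), the following sketch uses the hyperdimensional-computing operations mentioned above: random bipolar hypervectors, element-wise binding, and majority-vote bundling.

import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                      # hypervector dimensionality

def hv():
    return rng.choice([-1, 1], size=D)          # random bipolar hypervector

def bind(a, b):
    return a * b                                # binding: element-wise product

def bundle(*vs):
    return np.sign(np.sum(vs, axis=0))          # bundling: element-wise majority

def sim(a, b):
    return float(a @ b) / D                     # normalised similarity in [-1, 1]

colour, shape = hv(), hv()
red, circle = hv(), hv()
record = bundle(bind(colour, red), bind(shape, circle))   # one distributed record

# Unbinding with the 'colour' key recovers something close to 'red', even though
# the information is spread over all 10,000 components rather than stored locally.
print(sim(bind(record, colour), red))     # roughly 0.5
print(sim(bind(record, colour), circle))  # roughly 0.0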
Unobservability While in digital computers every bit of information can, in principle, be addressed, the same is not true in many neuromorphic systems which can only be configured and observed through a limited interface. This prevents the implementation of algorithms that require information which is simply not accessible in neuromorphic systems. Physical time In many neuromorphic computers time represents itself. In contrast, classical theories of symbolic computation are decoupled from real physical time and simulated through a discrete global clock signal.Such decoupling may not be possible (nor desirable) in neuromorphic computers, thus current theories of computation are unsuited for describing neural computation <cit.>. Multi-scale dynamics Neuromorphic computers operate on multiple temporal scales with no global synchronization, and are often described at multiple different spatial scales: from local learning rules to neural circuits all the way to the global behavior of the network as a whole. Often, the only way to decide what network-level behavior emerges from a local learning rule is to let the network run. This undecidability of global behavior from local rules may be a fundamental property of physical systems that can act as computers <cit.>. The difficulty of reasoning about global behavior from elementary operations is solved in digital computing by designing software systems as decomposable hierarchical structures <cit.> but this is not presently possible in neuromorphic programming.Analog The merits of analog computation in terms of energy efficiency and inherent parallelism are well-known <cit.>. But analog computing is more sensitive to device mismatch and noise which limits the computational depth (number of operations performed in series) <cit.>. They may also be susceptible to parameter drift, aging effects, and changes in temperature. No hardware/software separation When programming digital computers, one may neglect physical properties of the underlying hardware.In neuromorphic computers, such hardware-agnostic programming is not generally possible, as these devices are designed to exploit their underlying physical properties and dynamics. The connection of physical systems and computers has been investigated for decades in the field of unconventional computing <cit.>, though a general theory of such computation is still missing <cit.>.§.§ Physical ComputingAlthough classical theories of computing are non-physical, all computations must ultimately be physically instantiated <cit.>. Digital computing was first developed as an abstract model which was later physically realized. Neuromorphic computers do not follow the same pattern. There is no universally accepted model of neuromorphic computation and many different physical instantiations are explored <cit.>. As such, abstract models of computation are co-developed with physical implementations.From a physical perspective, the key difference between conventional computing and neuromorphic computing lies in the set of physical phenomena that are harnessed for computation. While digital computing only uses bi-stable switching dynamics, neuromorphic computers use stochasticity, real-valued states in continuous time, and more <cit.>.Horsman et al. <cit.> provide a general framework for computation with arbitrary physical systems which was further refined by Jaeger and Catthoor <cit.>. 
Therein, a computer is a physical machine ℳ which can be stimulated by an input signal u_ℳ and from which an output signal y_ℳ can be read out.The computation 𝒞 is specified by an abstract function from input u to output y. The machine ℳ then implements the computation 𝒞 if an encoding procedure E and decoding procedure D is known such that the machine ℳ will produce y_ℳ with D(y_ℳ) ≈ y when stimulated with the input signal E(u)=u_ℳ.This leads to the general form of the abstract computer model shown in Figure <ref> (right): the physical machine ℳ receives input u_ℳ and produces output y_ℳ, thereby implementing the abstract computation 𝒞 from input u to output y.Hardware and Software Using physics for computation in neuromorphic computers makes it difficult to separate hardware and software in the same way as in digital computers.This separation is practically useful because hardware and software are developed on different timescales; it takes many months to design and manufacture a computer chip, while algorithms can be designed and tested within a single day. Hardware is generally considered to be anything that cannot be changed without significant effort, such as the numbers and types of physical components in the computer. The set of all possible computations that a machine ℳ can implement is fixed by the hardware. Considering the hardware to be fixed provides a programmer with firm, stable ground whereon rich, complex programs can be built. Software denotes malleable behavioral aspects of the computation 𝒞 implemented by the machine ℳ. Obviously, this behavior is ultimately manifested in the physical state and dynamics of the machine, but it is useful to think of the machine's behavior at an abstract level <cit.>. Configuration In reconfigurable hardware, one must consider the role of a machine's configuration. A reconfiguration of the computer usually requires a reset, effectively breaking the operation of the program. Thus, the computer's configuration is fixed over the lifetime of a program, but not necessarily fixed over the lifetime of the computer.The configuration can be considered part of the hardware, whereby changing it effectively instantiates a different physical system. But it can also be considered part of the software, whereby changing it simply runs a different program on the same physical system. The chosen view is a design decision by the programmer.§.§ Computations and Programs A computation 𝒞 specifies what is being computed while a program 𝒫 specifies how the computation is implemented. There may be many different programs 𝒫_1,…,𝒫_n that implement the same computation 𝒞.As such, the computation gives a specification of what is being computed while the program gives a recipe, or mechanism, for how this computation is implemented.It is noted that the concept of a `program' herein includes algorithms as Turing machines as well as programs that learn <cit.> and interactive programs, both of which cannot be implemented by Turing machines <cit.>.In classical computing, a function on natural numbers is implemented by a program which can be represented by a Turing machine.In neuromorphic computing, functions that operate on (real-valued) time series are computed. The computation is implemented by a program represented as a neural network, often with designated input and output neurons. A computation 𝒞 is described by a formal specification which specifies the function that is being implemented. 
The specification formalizes the informal intention of the computation (Figure <ref>, left). The specification of a computation is expressed in some mathematical formalism. In digital computing, this can be done using formalisms from logic. In analog computing, there are a variety of formalisms that describe the computation, for example qualitative geometrical constructs like attractors and bifurcations <cit.>. A program 𝒫 is described in another formalism. In digital computing, programs are expressed in some programming language, see Section <ref>. In analog computing, one typically uses differential equations to describe the program. When programs interact with another, one may also speak of each individual program as a process and the ensemble of all processes as the program, whose behavior emerges from the interaction of the interacting processes (see Section <ref> on distributed programming).Operationally, a program is defined by the data flow and control flow.The data flow specifies how signals that carry computationally relevant information are propagated through the machine.The control flow specifies what operations or transformations are done on these signals. For example, in a field-programmable gate array (FPGA) the data flow is configured through its routing elements while the control flow is defined by the function implemented in each logic block.In a CPU, data flows between registers and memory according to the program's data instructions while the control flow is defined by its logic instructions. In a neuromorphic chip, the data flow is defined by the connectivity of the neural network while the control flow is defined by the synapse and neuron models, learning rules, synaptic weights, time constants, thresholds, and more. §.§ ProgrammingTreating hardware as fixed and software as malleable helps to separate the different timescales on which hardware and software are designed. Programming is a software matter and therefore assumes that a physical system already exists which allows to be programmed or configured.This does not, however, prevent the programmer from thinking about what properties a physical system should have in order to be effectively programmable for some task. On the contrary, this separation of concerns allows clear communication of what hardware constraints(there will be constraints!)are more or less desirable from a programming perspective, thereby contributing to successful hardware-software co-design.It has already been mentioned that the physical computing machine is designed and configured before it can be programmed. In the following, some processes which have been called `programming' are delineated, their meanings clarified and a general programming framework is outlined. Designing Every computing machine must be designed and manufactured before it can be used. Such machines can be programmable to varying extents. An application-specific computer is not programmable in any way - it physically implements a single program. A reconfigurable computer is configurable but may not be extensibly programmable. A programmable computer is fully programmable. 
The difference between the latter two depends on their usage and a clear a priori separation may not be possible.Configuring Many computing machines can be configured in a way that was defined in the design of the computing machine.A configuration can modify the interconnects of an FPGA <cit.>, the time constants and gains in a spiking neuromorphic chip <cit.>, or the tunable beam couplers in a photonic circuit <cit.>. As defined in Section <ref>, the configuration is constant for the lifetime of the program.Configuring is close to the hardware and amounts to selecting a configuration from a pre-defined set of configurations that were designed into the hardware, and is analogous to setting the control systems in a dynamical system. This limits the expressivity and creativity of a programmer constrained to configuration space.The configuring is often done through trial-and-error, or automatically through a search procedure if a well-defined objective exists.Programming As opposed to configuring, programming is not strictly constrained by the machine's physical design. The set of all possible programs is typically infinite, providing programmers with an unbounded creative medium for realizing their ideas.This infinitude originates in the compositionality of programs. Moreover, programs have a temporal component; while a configuration is fixed for the program's entire lifetime, a program can change its behavior over time.The key to this expressivity is a programming language in which programs are expressed (see Section <ref>). Optimizing / Training / LearningPrograms need not be written manually, but can also be searched for automatically. Such a search often has a desired program and can therefore be viewed as an optimization problem in which the `distance' between the implemented program and the desired program is minimized.The optimization can be done on-device or off-device with a (digital) computer and it can be done either offline in a training phase when the computer is not being used or online while the computer is being used.Training and learning often use optimization methods. In neuromorphic computing, one can train a neural network to approximate a desired program through some optimization procedure. A neural network can also learn autonomously how to adapt its weights to achieve some global objective, in a self-supervised or unsupervised way. Or it can simply mechanistically apply a learning rule with no clear global objective, like cellular automata <cit.>. Furthermore, using evolutionary algorithms, one may evolve a neural network, or a neuromorphic device, to implement some computation. These approaches are further detailed in Sections <ref> and <ref>. Instructing An existing learning algorithm can be further `programmed' through curated interactions with the environment or the user.This interactive training is common for personalized AI systems. For example, every Twitter user has a personalized Twitter feed which is learned by the user's behavior but can also be explicitly shaped by hiding or liking certain content.Self-organizationA popular paradigm for on-chip learning is self-organization.Local learning and adaptation mechanisms lead the neural network to self-organize and thereby implement a desired computation, for example with self-organized maps or plasticity rules in SNNs <cit.>.As is common with multi-scale dynamics (Section <ref>), it may be undecidable which local rules yield a particular global behavior. 
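This "run it and see" character of local rules can be illustrated with a standard toy model (a generic sketch, not specific to any neuromorphic platform): an elementary cellular automaton, in which every cell is updated from purely local information and the resulting global pattern is, in practice, only found by simulation.

import numpy as np

def step(state, rule=110):
    # one update of an elementary cellular automaton: each cell only sees its
    # left neighbor, itself, and its right neighbor (periodic boundaries)
    table = [(rule >> i) & 1 for i in range(8)]
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right
    return np.array([table[i] for i in idx], dtype=int)

rng = np.random.default_rng(1)
state = rng.integers(0, 2, size=64)
for _ in range(32):                 # "let the system run" to see what emerges
    print("".join("#" if c else "." for c in state))
    state = step(state)

Which rules produce interesting global behavior (rule 110 is a well-known example) is typically discovered by exactly this kind of exploratory simulation, mirroring the situation with local learning rules in neuromorphic systems.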
Thus, programming with self-organization can be exploratory to investigate what behavior emerges from different local rules, or it can be goal-driven when local rules are optimized to satisfy some behavioral constraints. Self-organization can also take place directly in physics to grow computing devices where the device is not explicitly designed <cit.>. Figure <ref> illustrates the general process of programming. Programming begins with some informal intention of what computation the program should implement. This intention can be formalized into a specification, or the programmer may directly come up with an idea for a program that implements the intended computation, expressed in some formalism. This program is then communicated to the physical computer through a pre-defined programming interface. Finally, the system executing this program can be controlled or instructed to remain within the specification.§.§ Languages and ParadigmsConventionally, programming amounts to coding (writing source code) in some formal language. Herein, `programming language' is used in an unconventionally wide sense to include any formal language that can be communicated to a physical system.This includes programming languages like Python but also extends to other formalisms like differential equations describing dynamical systems, or block diagrams describing signal processing systems.In any case, the `programming language' must be compatible with the elementary instructions that the computer's programming interface provides. Given this compatibility, the programmer is free to explore the infinite space of programs. Work on elementary instruction sets for non-digital computers goes back at least to the 1940s and continues to the present day <cit.> but there is still no universally accepted model <cit.>.Consequently, it is not clear what a neuromorphic programming language may look like <cit.>; will it require new syntax such as visual representations, or will a program be represented by a string of symbols in some formal language? Since the goal is to improve the “general practice of programming” neuromorphic computers, Floyd <cit.> argued that it is more effective to turn to programming paradigms rather than to languages.A programming paradigm is an approach to programming “based on a mathematical theory or a coherent set of principles” <cit.> and a programming language implements one or more programming paradigms. Centering the discussion on programming paradigms shifts the focus away from syntactical issues to the way programs are conceived and designed.§.§ Programming TrendsBeyond languages and paradigms, there is a set of well-developed tools for computer programming without which most modern software systems would not exist. Such tools will be necessary if neuromorphic programming is to develop into a mature discipline.Efficiency Modern programming is done on computers running a powerful integrated development environment (IDE). This is essentially an interface between the computer and the programmer, which enables a fast feedback loop between designing and testing programs. The keyboard-and-mouse interface to modern computers now seems trivial, but its success confirms its efficiency. Programming is interactive, with many intermittent compilation runs to check the program's semantics and syntax, where the syntax is often checked by the IDE directly without compiling. 
Teamwork Much has been invested into the coordination of large software teams <cit.>, resulting in some of the most complex and valuable computing systems in the world <cit.>. Collaborative version control systems are used by corporations, organizations, and open-source communities alike, enabling collaboration on large codebases with multiple programmers working on overlapping parts. Agile development and management are commonly used to efficiently coordinate large software projects <cit.>. Automation High-level programming languages elevate the level of abstraction and automate much of the work that was previously done explicitly <cit.>. Furthermore, automated programming techniques are in full force, with inductive programming and machine learning leading the way toward programs that are automatically generated from data (see Section <ref>). Robustness As software systems increase in complexity, much work has been invested to make them robust to failures. Automated testing, continuous integration, and containerization all contribute to making large-scale software development more robust to different kinds of failures <cit.>. Modularization and structured programming have been key to managing large, interactive, distributed software systems. But despite significant advances in programming tools, software complexity remains an obstactle for achieving robust systems with no silver bullet in sight <cit.>.Software engineering Everything above has focused on only one aspect of programming, namely the design of programs. Software engineering can be thought of as “programming integrated over time” <cit.> in that it goes beyond the design of programs to also include maintenance, testing, validation, integration, and organization of large software-intensive systems <cit.>. § PROGRAMMING PARADIGMS §.§ Conventional Programming Instruction-based The most common way of writing sequential, instruction-based programs uses the imperative paradigm, as implemented in C.Imperative programming was augmented with objects, which can contain instructions as well as data, to yield the object-oriented paradigm, as implemented in C++ or Java. With the advent of multi-core microprocessors came the need to use resources on different cores simultaneously. This led to the development of parallel programming techniques, in which multiple processes are carried out simultaneously on different cores <cit.>. This is not to be confused with concurrent programming where the lifetime of multiple computing processes overlap and may interact with another <cit.>. Concurrency introduces issues of synchronization such as deadlocks and race conditions.Distributed programming deals with programs that are executed on multiple networked computers which interact to achieve a common goal. Emergent programming uses multiple interacting sub-programs whose collective behavior constitutes the desired program. The individual instructions are typically not explicitly informed of the program to be created <cit.>. This approach has been used to design programs that exhibit some creativity <cit.>. This is reminiscent of local learning rules in neuromorphic computers (see Section <ref>).Declarative Instead of describing the control flow of a program, declarative programs describe the logic of the program. A declarative program describes what the program does rather than how it does it. This makes reasoning about programs easier and simplifies parallel programming <cit.>. 
Declarative programming is done in database query languages like SQL, functional programming languages like Haskell, or logic programming languages like Prolog. In dataflow programming, a program is modeled as a graph of data flowing between operations. This is a natural model for neuromorphic computers where data flows between neurons, and has been used for neuromorphic compilation <cit.> (see Section <ref>).Spatial programming can be used to program reconfigurable hardware into dataflow engines <cit.>.Automated programming In meta-programming, it is possible for a program to write or modify programs, by simply treating the program as data.In reflective programming, a program modifies its own behavior whereas in automatic programming, a program generates another program.If a formal specification of the desired program is given, program synthesis can be used to generate a program that provably satisfies this specification <cit.>. If exact adherence to a formal specification is not required, but only the satisfaction of given constraints, constraint programming may be used <cit.>. If an incomplete specification is available, such as input-output examples, then inductive programming can be used to generate a suitable candidate program <cit.>.An inductive programming approach coupled with probabilistic programs has been proposed as a model for human-level concept learning <cit.>. Recently, deep learning (see below) has been used for inductive programming, under the name of neural program synthesis <cit.>. As already mentioned in Section <ref>, it is possible to instruct an interactive program and direct it to implement a desired computation. End-user programming allows users to obtain programs from a small set of examples, like the flashfill feature in spreadsheet programs which infers a formula from table manipulations done by the user <cit.>.Probabilistic While classical programs are deterministic, the execution of a probabilistic program depends on random numbers, for example by calling a (pseudo) random number generator. Such a program can be viewed as sampling from a probability distribution. In probabilistic programming, the program itself is considered to be a distribution, and the programmer can analyze this distribution and condition the distribution on observations <cit.>. Indeed, the goal of probabilistic programming is not simply the execution of a program, but also the analysis thereof.By expressing a statistical model as a probabilistic program, statistical inference on such a model can be done automatically by the compiler through general-purpose inference schemes.Probabilistic programming has been used for state-of-the-art generative vision models with very compact programs of only about 50 lines <cit.>. LearningIn classical programming, a human programmer defines the program that specifies how input data is processed. Machine learning constructs programs that learn from the input data, in ways that may not have been anticipated by any human. Machine learning has deep roots in probability theory and overlaps significantly with probabilistic programming <cit.>.In supervised machine learning, a mapping from inputs to outputs is learned from a set of examples.In reinforcement learning, a policy of how to act in some environment is learned from rewards and punishments.Both the learned mapping in supervised learning and the learned policy in reinforcement learning can be used as programs. 
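A minimal illustration of a learned mapping being used as a program (a generic sketch, not tied to any particular library; the data and model are invented for illustration): a linear map is fit to a handful of input-output examples and the result is then called like any ordinary function.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 3))          # example inputs
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3      # example outputs (the intended behavior)

# "programming by examples": fit a linear model by least squares
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def learned_program(x):
    # the learned mapping, now usable as an ordinary program
    return np.append(x, 1.0) @ w

print(learned_program(np.array([0.1, 0.2, 0.3])))   # approximately 0.45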
This makes machine learning a new paradigm for (automated) programming <cit.>.Machine learning uses tools from optimization theory; the learning task is often partly framed as an optimization problem where some surrogate of the true performance metric is optimized, for example the average error over a set of input-output examples. In reservoir computing, a neural network consists of an input layer which feeds into the high-dimensional recurrently-connected reservoir network from which the output is obtained through a readout layer. Only this final readout layer is trained while the reservoir is randomly initialized and typically not modified (see Section <ref>). Deep learning uses multi-layered ANNs for machine learning. The connectivity of such an ANN is usually fixed and then the weights are learned, typically in a supervised fashion using gradient descent to minimize the error on given input-output examples. In differentiable programming, programs are written in a way that they are fully differentiable with respect to some loss function, thereby allowing the use of gradient-based optimization methods to find better-performing programs. Deep learning is a special case of this, where programs are artificial neural networks that are differentiated using backpropagation. These techniques have also been adapted for spiking neural networks <cit.>. Differentiable programming has been employed to merge deep learning with physics engines in robotics <cit.>, it has been applied to scientific computing <cit.>, and even towards a fully differentiable Neural Turing Machine <cit.>. Optimization As already mentioned, machine learning relies heavily on tools from optimization theory. In pure optimization, the minimization of some cost function J is a goal in itself.In machine learning, a core goal is good generalization to unseen examples. This is expressed as some performance measure P which is intractable and therefore one minimizes some cost function J which will in turn also increase the performance measure P. As such, if generalization is not needed then one may use optimization as a programming paradigm in which the result of the optimization is the desired program or the optimization process itself. Evolutionary programming uses population-based evolutionary optimization algorithms to find programs. In order to find a program that solves some problem is to define a fitness function that is maximized by a program that solves this problem. Evolutionary algorithms have been used to generate rules for a cellular automaton to solve computational problems that are difficult to solve by manually designing a learning rule <cit.>. Evolutionary optimization is also a popular approach for neuromorphic devices, see Section <ref>. Some dimensions of neuromorphic computing (Section <ref>) are exploited by paradigms in this section. Dataflow programming, distributed programming and deep learning harness distributedness in computation. Probabilistic programming uses stochasticity, as do optimization methods and machine learning methods. Emergent programming works with at least two different spatiotemporal scales as well as learning and optimization where the optimization loop operates on a slower timescale than the actual program. 
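The reservoir computing scheme described above can be sketched in a few lines (a generic echo state network; the sizes, spectral-radius scaling, and the delay-recall task are illustrative assumptions): the random recurrent reservoir is left untouched and only the linear readout is trained.

import numpy as np

rng = np.random.default_rng(0)
n_res, T = 200, 2000
u = rng.uniform(-0.5, 0.5, size=T)           # input signal
target = np.roll(u, 3)                       # task: recall the input from 3 steps ago

W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):                           # run the fixed, untrained reservoir
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# train only the readout (ridge regression), discarding an initial washout period
washout, lam = 100, 1e-6
S, y = states[washout:], target[washout:]
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ y)
print("training RMSE:", np.sqrt(np.mean((S @ W_out - y) ** 2)))

Because the reservoir itself is never modified, the same scheme applies when the reservoir is a physical system whose internal parameters cannot be changed, which is one reason for its popularity in neuromorphic and unconventional hardware.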
In some machine learning and optimization methods like reservoir computing or evolutionary optimization, a complete description of the program is not necessary, potentially accommodating some unobservability.§.§ Unconventional ProgrammingThe present section investigates paradigms for programming physical systems. Computational models, and therefore programming methods, must ultimately be based in physics and resulting hardware constraints <cit.>.Current programming methods are adapted to clocked digital hardware but with the forthcoming diversity of computer hardware and architectures <cit.> it is time to widen the set of hardware constraints that can be programmed with. Cellular programming As mentioned previously, cellular automata (CA) are a standard model of massively parallel computation. A CA is programmed by choosing its update rule and the program is executed on some initial configuration of the CA's lattice. Inspired by CAs, cellular architectures of neuromorphic devices have been proposed <cit.>. For over two decades, amorphous computing has been developing programming techniques inspired by the cellular cooperation in biological organisms <cit.>. An amorphous computer is a system of irregularly placed, asynchronous, locally interacting computing elements that are possibly faulty, sensitive to the environment, and may generate actions <cit.>. This line of research brought space-time programming <cit.> as a way of programming to control large networks of spatially embedded computers.Although not directly focused on neuromorphic computing, amorphous programming methods can provide a good starting point for robust programming methods in cellular architectures. Analog programming Neuromorphic hardware often contain analog components, which are difficult to work with because programming methods for analog computers are not at the same level of maturity as those for digital computers. Ulmann <cit.> argues that the development of reconfigurable analog computers will advance the state of analog computer programming and efforts to develop such hardware is in progress <cit.>. Nevertheless, methods from control engineering, signal processing and cybernetics have been developed and used for decades and can be adapted for neuromorphic systems.While digital computing was originally formulated as computing functions on the integers <cit.>, signal processing can be seen as computing functions on temporal signals. For analog neuromorphic computers, signal processing provides a rich framework for computing with temporal signals <cit.>.Control theory has developed a rich repertoire of methods to drive a dynamical system into a mode of operation that is robust, stable, and implements some desired dynamics. These methods can be used to keep analog computers within a desired regime of operation to implement a desired computation. It can be expected that analog computers can benefit from cross-fertilization between computer science and control theory <cit.>. A promising direction is data-driven control where a model of the system to be controlled is learned from experimental data using machine learning techniques <cit.>. Historically rooted in ideas from cybernetics and ultrastable systems <cit.>, autonomic computing aims to design systems that are able to adapt themselves in order to stay within a high-level description of desired behavior <cit.>. 
The field takes inspiration from the autonomic nervous system, which is able to stay within a stable `dynamic equilibrium' without global top-down control.Programming physical systemsBuilding on evolutionary optimization, evolution in materio <cit.> was proposed to harness material properties for computation. It is argued that natural evolution excels in exploiting the physical properties of materials, and artificial evolution emulates this. Evolution has been applied widely in unconventional computing <cit.>, for example with a disordered dopant-atom network for digit classification <cit.>. As already mentioned in the preceding section, physical reservoir computing can be used to harness the dynamics of physical systems for computation by modeling the physical system as a high-dimensional reservoir on top of which an output map is trained <cit.>. §.§ Neuromorphic ProgrammingNeuromorphic co-design As neuromorphic computers exploit physical phenomena of their underlying hardware, manually designed neuromorphic programs will necessarily be close to physics. Therefore, although not strictly a paradigm for `programming', it is instructive to consider neuromorphic co-design as a paradigm for designing neuromorphic systems. The field is rooted in the original vision of neuromorphic computing <cit.> and designs application-specific <cit.> as well as reconfigurable <cit.> mixed-signal neuromorphic chips in sub-threshold CMOS technology which may also include on-chip learning.This approach uses tools from signal processing and computational neuroscience to implement a desired behavior in networks of silicon neurons <cit.>.Similar to analog computing, the field may benefit from a set of computational primitives to simplify the design of neuromorphic systems. Compilation Given a neural network, it is necessary to communicate this network to the hardware.Neuromorphic compilation <cit.> was proposed as a general framework to (approximately) compile neural networks into different hardware systems, automatically adapting to physical constraints.Such compilation can be done statically to exactly implement the specified network architecture <cit.>, or adaptively to further optimize the network after compilation <cit.>. In any case, it is important to consider the hardware constraints in this compilation <cit.>.To compile a neural network into hardware, it is necessary to first design the neural network's architecture. Deep learning has accumulated a plethora of well-performing network architectures for ANNs which can rapidly be converted into equivalent spiking neural networks (SNNs) through ANN-to-SNN conversion <cit.>.The conversion to SNNs offers significant advantages in energy efficiency while often maintaining similar levels of performance. However, this conversion is not optimal because it typically does not leverage the computational power of spiking neurons and instead limits the richer dynamics of SNNs to the same less powerful domain of ANNs <cit.>.Compilation and conversion are promising directions, though descriptions at the level of neural network architectures may not provide a high enough abstraction for implementing programs that realize arbitrary computations.Learning Given the success of deep learning, learning is a natural paradigm for neuromorphic computers.While it would be naïve to ignore the deep learning literature, it is also unrealistic to expect deep learning methods to work for SNNs as well as they do for ANNs since these methods were optimized for ANNs <cit.>. 
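The ANN-to-SNN conversion mentioned above can be illustrated with a deliberately small sketch (hypothetical layer sizes; integrate-and-fire neurons with reset by subtraction; the weight rescaling stands in for the data-based normalization used in practice): after conversion, the spike rates of the layer approximate its normalized ReLU activations.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 6))        # one ANN layer (hypothetical sizes)
x = rng.uniform(0.0, 1.0, size=6)             # non-negative, rate-coded input
ann_out = np.maximum(W @ x, 0.0)              # ReLU activations to be reproduced

# rescale weights so the largest activation fits below the firing-rate ceiling
p = max(ann_out.max(), 1e-9)
W_snn = W / p

T, v_th = 5000, 1.0
v, spikes = np.zeros(4), np.zeros(4)
for _ in range(T):
    v += W_snn @ x                            # constant input current per time step
    fired = v >= v_th
    spikes += fired
    v[fired] -= v_th                          # reset by subtraction
rates = spikes / T
print(np.round(ann_out / p, 3))               # normalized ReLU activations
print(np.round(rates, 3))                     # spike rates approximate them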
Backpropagation, the workhorse of deep learning, can be implemented directly in SNNs using surrogate gradients <cit.> or other neuromorphic adaptations. Simplifications of the backpropagation algorithm such as the random backpropagation algorithm <cit.> were also demonstrated in neuromorphic systems <cit.>.It is also possible to create a surrogate model of the physical device, then optimize the surrogate model in simulation with deep learning methods and transfer the optimized model back to the device <cit.>.For recurrent neural networks, reservoir computing avoids the need to backpropagate information through the network to compute gradients. Instead, the reservoir is kept fixed and only a readout map from the reservoir to the output is trained. This training procedure requires the reservoir states to be read out and stored, which may not be possible given limited observability of some devices or limited data storage. Reservoir computing is a popular paradigm for neuromorphic computing, with dedicated frameworks for hardware implementation <cit.>.Neural network training is often done off-device with external hardware. Frequent re-training creates a large overhead, limiting the performance and applicability of neuromorphic computers. As a result, on-device learning methods are an active topic of research <cit.>.Plasticity is a popular paradigm for on-device learning where local learning rules are used to modify the connectivity (structural plasticity) and connection strengths (synaptic plasticity) of a SNN. Parallels to emergent programming may be drawn here as the resulting behavior of the SNN emerges from the interaction of local rules. It is not clear what local rules will yield a particular network-level behavior, but evolutionary search <cit.> and meta-learning <cit.> have been used to (re-)discover desirable plasticity rules.Evolution A key advantage of evolutionary approaches is that they can jointly optimize the network's architecture and weights, thus simultaneously designing and training the network. Moreover, evolutionary methods do not require differentiability of activation functions, nor do they place any constraints on the network's architecture.Evolutionary approaches can find a SNN by randomly choosing an initial population of candidate SNNs, selecting the highest-performing candidates according to some performance metric, and then creating new candidates through recombining and mutating the selected candidates <cit.>.However, evolutionary approaches can be slower to converge than other training methods <cit.> and the resulting architectures are not easily understandable or reusable for different tasks <cit.>. Neuromorphic algorithms With the increased availability of neuromorphic hardware, a number of handcrafted spiking neuromorphic algorithms (SNA) have been proposed. SNAs implement computations using temporal information processing with spikes, often to implement well-defined computations such as functions on sets of numbers <cit.>, functions on graphs <cit.>, solving constraint satisfaction problems or solving a steady-state partial differential equation using random walks <cit.>. SNAs are being actively developed and many application domains are yet to be explored <cit.>. Neurocomputational primitives A variety of neurocomputational primitives have been proposed in the neuromorphic community. 
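As a concrete, deliberately simplified example of such a local plasticity rule, the following sketch implements pair-based spike-timing-dependent plasticity with exponential traces; all constants are illustrative and no particular neuron model or hardware is assumed.

import numpy as np

rng = np.random.default_rng(0)
T, dt = 1000, 1.0
pre = rng.random(T) < 0.05        # Poisson-like pre- and postsynaptic spike trains
post = rng.random(T) < 0.05

w, a_plus, a_minus, tau = 0.5, 0.01, 0.012, 20.0
x_pre, x_post = 0.0, 0.0          # exponentially decaying spike traces
for t in range(T):
    x_pre += -x_pre * dt / tau + pre[t]
    x_post += -x_post * dt / tau + post[t]
    if post[t]:                   # post spike: potentiate according to recent pre activity
        w += a_plus * x_pre
    if pre[t]:                    # pre spike: depress according to recent post activity
        w -= a_minus * x_post
    w = min(max(w, 0.0), 1.0)     # keep the weight in a bounded range
print("final weight:", round(w, 3))

The network-level behavior that results from applying such a rule at every synapse is exactly the kind of emergent, hard-to-predict outcome discussed above.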
Such primitives can be useful for simple tasks and typically allow for composability to create more complex neuromorphic systems at a higher level of abstraction <cit.>.Dynamic neural fields (DNFs) are a modern framework for neural attractor networks <cit.>. The stable states provided by attractor dynamics help with the intrinsic variability of analog neuromorphic circuits and have been shown to be a promising abstraction for neuromorphic programming <cit.>. Each DNF is a network of neurons that is, under some constraints, computationally equivalent to a winner-take-all (WTA) network <cit.>. The WTA is a common circuit motive in the neocortex <cit.>. The neural state machine (NSM) <cit.> also builds on WTA networks to implement finite state machines in SNNs, and has been shown to run robustly on mixed-signal neuromorphic hardware.The spiking phase-locked loop (sPLL) <cit.> was designed for frequency detection as part of a neuromorphic tactile sensor.The temporal difference encoder (TDE) <cit.> is a spiking model that was designed to compute the time difference between two consecutive input spikes. The number of output spikes and the time between them is inversely proportional to the time difference. This has been used for motion estimation and obstacle avoidance <cit.>.Neural oscillators generate rhythmic activity that can be used for feature binding and motor coordination, for example as a central pattern generator <cit.>.Other primitives are scattered around the literature and shared libraries of neurocomputational primitives are only starting to be assembled <cit.>. Neuromorphic synthesis <cit.> may provide a systematic way of programming complex high-level behavior into neuromorphic chips. This was demonstrated for functions that can be described by finite state machines, but it may be promising to extend this work to a larger set of computational primitives for higher abstractions in neuromorphic programming. Higher abstractionsThe neural engineering framework <cit.> raises the level of abstraction beyond the level of neural networks. This allows dynamical systems to be distilled automatically into networks of spiking neurons that can then be compiled down to mixed-signal spiking neuromorphic accelerators like Braindrop <cit.> using the Nengo programming environment <cit.>.Intel recently launched Lava[<https://github.com/lava-nc/lava>], an open-source neuromorphic programming framework for the Loihi chips. Support for other neuromorphic hardware is planned. Lava is a multi-paradigm framework and includes libraries of neuromorphic algorithms for optimization, attractor networks, deep learning methods for SNNs, VSAs, and plans to include more paradigms. Aimone et al. <cit.> proposed Fugu, a hardware-independent mechanism for composing different SNAs. In Fugu, a program is specified as a computational graph reminiscent of dataflow programming, where nodes represent SNAs and connections represent dataflow between the SNAs. This program can then be compiled into different hardware-specific configurations. The focus of this work is on digital neuromorphic processors and support for mixed-signal hardware is not discussed. § OUTLOOKWithout a guiding theory that unites physics with computation, it is difficult to program computers that harness their underlying physical dynamics for computation. Building on decades of research in neuromorphic computing and engineering, initial features of neuromorphic programming methods can be identified. 
As the field is moving toward general programming methods, it is important to clarify concepts and establish an efficient separation of concerns to allow effective cross-disciplinary collaboration and communication.Shared benchmarks and user-friendly tools will further boost progress <cit.>. Moreover, for neuromorphic systems to scale in the landscape of large heterogeneous computing systems, community-wide standards and protocols must be defined for the communication between neuromorphic systems.The structure of large-scale neuromorphic programs is yet to be explored. It is assumed that a digital computer has a clearer architecture with fewer modules whereas the brain has a larger breadth of ongoing computations <cit.>. It remains to be seen if neuromorphic programs allow for the kinds of `crisp abstractions' <cit.> that enable the deep hierarchies in digital programming as observed in compilation hierarchies, function call hierarchies, and class inheritance hierarchies.If such abstractions are not possible, hierarchies in neuromorphic programs will necessarily be wide and shallow, leading to many interacting components and only a few different levels of abstractions. It is hoped that neuromorphic programmers can leverage the work outlined in this paper to build large-scale neuromorphic programs to tackle real-world tasks, and to further develop guiding principles and paradigms for neuromorphic programming. I wish to thank Herbert Jaeger for helpful comments. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No. 860360 (POST DIGITAL).acm/ACM-Reference-Format | http://arxiv.org/abs/2310.18260v1 | {
"authors": [
"Steven Abreu"
],
"categories": [
"cs.NE",
"cs.ET"
],
"primary_category": "cs.NE",
"published": "20231027164811",
"title": "Concepts and Paradigms for Neuromorphic Programming"
} |
The zero forcing numbers and propagation times of gear graphs and helm graphs Sara Anderton, Rilee Burden, McKenzie Fontenot, Noah Fredrickson, Alexandria Kwon, Sydney Le, Kanno Mizozoe, Erin Raign, August Sangalli, Houston Schuerger, and Andrew Schwartz May 15, 2023 ==================================================================================================================================================================================== I discuss the thermodynamics-based derivation of the formula for the entanglement entropy of a system of gluons. The derivation is based on <cit.>, where saturation and the Unruh effect were used to obtain and discuss the entropy of gluons. The formula agrees, in the high-energy limit, up to a numerical factor, with more recent results by <cit.>, where arguments based on the density matrix and bipartition of the proton were used to obtain the formula. Furthermore, I present arguments based on the properties of evolution equations as to why the saturation-based approach, as well as the double leading logarithmic limit of BFKL, agree in the functional form of the expression for entanglement entropy. § INTRODUCTIONIn the arxiv:1103.3654v1 and arxiv: 1103.3654v2 versions of the paper <cit.> it has been shown that if one assumes that saturation scale acts as an effective mass of a system of gluons that populate proton boosted to high rapidity and furthermore as effective temperature then one can obtain thermodynamic entropy which depends linearly on the rapidityS=πλ ywhere, S is entropy of gluonsy is rapidity and λ will be introduced later.The discussion leading to this formula relied on the argumentation that decelerating hadron in the color field of another hadron effectively experiences temperature in its rest frame in accord with the Unruh effect <cit.>. The deceleration is of the order of the saturation scale Q_s where the saturation scale signals the emergence of a dense system of gluons. Furthermore, the motivation comes also from studies of the thermalization problem of nuclear matter where it is argued that Color Glass Condensate <cit.> provides appropriate initial conditions for subsequent thermalization.More recently however there has been substantial progress in understanding from more fundamentalprinciples grounded in quantum mechanics and quantum field theory the origin of entropy production in high energy collisions <cit.>. We will focus on the result obtained in thepapers<cit.> where the entropy has been shown to depend linearly on rapidity.This behavior of entanglement entropy has been shown to be in accord with measured hadronic entropy <cit.>. In particular in the paper <cit.> the authors considered the Deep Inelastic Scattering process where the virtual electron probes only part of the proton's wave function and therefore introduces bi-partition of the target. 
This necessarily leads to a rise of entanglement between observed and unobserved degrees of freedom and therefore to entanglement entropy. Using the equation that describes the rapidity evolution of the probability p_n(y) of an n-parton state, after solving it and evaluating the von Neumann entropy they obtain S(y) = ln(e^{λ y} - 1) + e^{λ y} ln[1/(1 - e^{-λ y})] and taking the asymptotics y→∞ they obtain the expression[The authors of <cit.> used the symbol Δ while I use λ] S=λ y In the 1+1 dimensional model λ is interpreted as the BFKL intercept and reads λ=(4N_c α_s/π) ln 2, while in the 3+1 dimensional case it reads λ = (N_c α_s/π) ln(r^2 Q_s^2), where r is the size of the dipole. A similar structure was also obtained within the 3+1 dimensional dipole model in the double logarithmic approximation <cit.>. The formula eq. (<ref>) is up to a constant the same as eq. (<ref>), which results from an asymptotic expansion of the complete expression (the asymptotic expansion here means that one is reaching a maximally entangled state <cit.>). This is also consistent with the thermodynamic vs. statistical-based approach, where quantities tend to match after a long time passes; the role of time is played here by rapidity. The λ is the speed of growth of low-x or moderate-x gluons. § ENTROPY FORMULA One can reconcile eq. (<ref>) with eq. (<ref>) by rescaling λ in the equation that connects the saturation scale with temperature, eq. (<ref>), through the introduction of a constant factor c = π, as expressed by T=c Q_s/(2π) While this is arbitrary, one should keep in mind that eq. (<ref>) with c=1 is based on qualitative arguments that the deceleration is equal to the saturation scale. Because of that, the formula (<ref>) is approximate. The more fundamental derivation of (<ref>) is presented in <cit.>. Now we use the thermodynamic relation between energy and entropy: dE=TdS and setting dE=dM gives: dM=TdS Using the argument that the saturation scale acts as an effective mass of a system of gluons we have dM=dQ_s(x) In the next step, we use eq. (<ref>), which allows us to link the saturation scale to entropy: dQ_s(x)/Q_s(x)=c dS/(2π) which leads to: S=(π/c) ln(Q_s^2(x)/Q_0^2) and we set the lowest entropy state to zero. Now using that the saturation scale is approximately Q_s^2=Q_0^2(x_0/x)^λ <cit.> and defining rapidity as y=ln(x_0/x) (x_0 and Q_0 are constants), we obtain: S=λ y where the integration constants have been chosen to match the formulas. The basic observation that allowed the derivation of this formula within the thermodynamic approach is that the saturated system of gluons is characterized by only one scale, the saturation scale Q_s. This feature can be used to express the entropy formula in terms of the number of gluons, in analogy to <cit.> (see also the discussion along these lines in <cit.>). We will use the GBW gluon density, which reads ℱ(x, k^2) = [N_c S_⊥/(8π^2 α_s)] (k^2/Q_s^2) e^{-k^2/Q_s^2} After integrating over k^2 we obtain xg(x)=∫^∞_0 dk^2 ℱ(x,k^2) = N_c S_⊥ Q_s^2/(8π^2 α_s) Using (<ref>) and (<ref>) we may write S=ln xg(x)+const where the constant can be absorbed in xg(x). The expression above was obtained assuming a specific form of the unintegrated gluon density. However, the crucial point is that we work in saturation-dominated regions of phase space. One could use any other low-x dipole gluon density with saturation, as they behave as F∼ k^2, and integrate it up to the saturation scale to arrive at a result that would differ by a constant.
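A quick numerical check of these statements (a standalone sketch; λ = 0.3 is an arbitrary illustrative value) confirms that the full expression S(y) = ln(e^{λy} - 1) + e^{λy} ln[1/(1 - e^{-λy})] approaches the linear behavior S ≈ λy at large rapidity, with the difference S(y) - λy settling at a constant.

import numpy as np

lam = 0.3   # illustrative value of the growth parameter λ

def S_full(y):
    # S(y) = ln(e^{λy} - 1) + e^{λy} ln(1 / (1 - e^{-λy}))
    return np.log(np.exp(lam * y) - 1.0) + np.exp(lam * y) * np.log(1.0 / (1.0 - np.exp(-lam * y)))

for y in [2.0, 5.0, 10.0, 20.0, 40.0]:
    s = S_full(y)
    print(f"y={y:5.1f}  S={s:8.3f}  λy={lam * y:7.3f}  S-λy={s - lam * y:6.3f}  S/(λy)={s / (lam * y):5.3f}")

The ratio S/(λy) tends to one, reproducing the asymptotic, maximally entangled regime, while the printed difference approaches a constant of order one, consistent with the statement that the two expressions for the entropy agree up to a constant.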
Another derivation of the equation (<ref>) in double leading logarithmic approach (DLL) that however allowed to account for hard scale dependence wasobtained in <cit.>. The interesting feature of the formula above is that it is obtained here under the assumption of saturation while the result obtained in the first part of <cit.> does not assume saturation, as well as saturation, is not considered in a double logarithmic limit that was taken in <cit.>.Saturation is however taken approximately into account in the second part of reference <cit.>. The crucial point is that the functional dependence on rapidity matches. One can see that from the reference<cit.> where saturation was accounted for too. The approach differs from the one used in <cit.>; however, the functional dependence on rapidity remains linear, although the coefficient is different (it is 1/2 λ instead of λ).We could have of course chosen c factor to match the expression in <cit.>.This similar behavior of entropy obtained within DLL and saturation based approximations can be understood better with the help of momentum space versions of Balitsky-Fadin-Kuraev- Lipatov <cit.> and Balitsky-Kovchegov <cit.> evolution equations. As it is well known the BFKL equation for unintegrated gluon density F(x,k^2)F(x,k) =F^(0)(x,k)+α_s ∫_x^1 dz/z∫dk'^2[F(x/z,k')/|k^2-k'^2|-k^2/k'^2 F(x/z,k)/|k^2-k'^2| +k^2/k'^2 F(x/z,k)/√(k^4+4k'^4)] .is infrared sensitive because of the presence of the anticollinear pole i.e. configurations where k^' 2≫ k^2 and unordered emissions in the transverse momentum.The equation can be solved in diffusive approximation which is far from both collinear and anticollinear region but the resulting solution is not in accord with KNO scaling found in <cit.>.The BK equation which accounts for recombination of gluons and therefore models saturation has this feature that the triple pomeron vertex is dominated by the anticollinear pole which as evolution progresses is subtracted from the BFKL kernel therefore overall its contribution diminishes. This can be seen from the structure of integrals in the BK equation as shown below <cit.> (see also <cit.>).F(x,k^2) =F^(0)(x,k^2) +α_s ∫_x^1 dz/z∫dk'^2[F(x/z,k'^2)/|k^2-k'^2|-k^2/k'^2 F(x/z,k^2)/|k^2-k'^2| +k^2/k'^2 F(x/z,k^2)/√(k^4+4k'^4)]+-2α_s^2π^3/N_c^2 R^2∫_x^1dz/z{[∫_k^2^∞dk'^2/k'^2 F(x/z,k'^2)]^2+F(x/z,k^2)∫_k^2^∞dk'^2/k'^2ln(k'^2/k^2)F(x/z,k'^2) }As one can see the integral over k'^2 in the nonlinear part has the lower limit set by k^2. Furthermore, the diffusion behavior of the linear part of the equation is tamed by the nonlinearity <cit.>.To some extent, such features can be mimicked by the double leading logarithmic approximation of the BFKL equation where the anticollinear pole is neglected. Furthermore, this approximation gives gluon density far from the diffusive region.F(x,k^2) =F^(0)(x,k^2) + α_s ∫_x^1 dz/z∫^k^2_k_min^2dk'^2 F(x/z,k'^2)/k^2.We expect that the features of saturation and DLL approximation lead effectively to a similar mechanism for the generation of entropy as the BK equation as both of the equations have limited phase space as compared to the BFKL evolution. These features are possibly responsible for giving linear rapidity dependence of entropy. We should also note that in the DLL limit, BK reduces to eq. <ref> as well. § CONCLUSIONSIn the paper, we revisited the thermodynamics-based derivation of the entanglement entropy formula. 
The formula agrees in functional form with the asymptotic limit of the expression obtained by using the dipole cascade model <cit.>. By appropriately matching numerical factors, the formulas can be made to take the same form. Furthermore, we have presented arguments as to why the functional forms of entropy, as derived within the saturation-based approach and the double logarithmic limit, agree. The findings of this paper demonstrate that in QCD, one can, in principle, calculate the same quantity using both a thermodynamic and a fine-grained quantum theory-based approach. This stands in contrast to the current state of black hole physics, where calculating the entropy of black holes in a 3+1 D case within a realistic theory remains a significant challenge. From this perspective, QCD may play a role in testing ideas for a better understanding of quantum gravity problems (through various mappings between QCD and gravity <cit.>), as it has regimes in which it is nearly classical and by construction unitary. Questions along these lines and concrete ideas were formulated in <cit.>. § ACKNOWLEDGEMENTS I would like to thank Martin Hentschinski, Dmitri Kharzeev,Kong Tu for interesting and stimulating discussions. Furthermore, I would like to thank Krzysztof Golec-Biernat for many useful comments. JHEP | http://arxiv.org/abs/2310.18510v2 | {
"authors": [
"Krzysztof Kutak"
],
"categories": [
"hep-ph",
"hep-th",
"quant-ph"
],
"primary_category": "hep-ph",
"published": "20231027220024",
"title": "Entanglement entropy of proton and its relation to thermodynamics entropy"
} |
Lipschitz and Hölder continuity in Reproducing Kernel Hilbert Spaces[Preprint, currently under review.] Christian FiedlerInstitute for Data Science in Mechanical Engineering (DSME)RWTH Aachen UniversityEmail <[email protected]>January 14, 2024 ============================================================================================================================================================ Reproducing kernel Hilbert spaces (RKHSs) are very important function spaces, playing an important role in machine learning, statistics, numerical analysis and pure mathematics.Since Lipschitz and Hölder continuity are important regularity properties, with many applications in interpolation, approximation and optimization problems, in this work we investigate these continuity notion in RKHSs. We provide several sufficient conditions as well as an in depth investigation of reproducing kernels inducing prescribed Lipschitz or Hölder continuity. Apart from new results, we also collect related known results from the literature, making the present work also a convenient reference on this topic. Keywords Reproducing kernel Hilbert spaces, Lipschitz continuity, Hölder continuity, integral operators MSC2020 46E22, 51F30, 47B34, 47G10 § INTRODUCTIONReproducing kernel Hilbert spaces (RKHSs) are Hilbert function spaces in which evaluation of functions is continuous with respect to (w.r.t.) the Hilbert space norm.These function spaces play an important role in machine learning <cit.>, statistics <cit.>, numerical analysis <cit.> (including inter alia function interpolation and approximation problems, numerical solution of partial differential equations, and numerical integration), signal processing <cit.> and pure mathematics <cit.>.The theory of RKHSs is by now very well-developed, and there are many excellent expositions available, for example, <cit.>. In particular, the connection between properties of the reproducing kernel of an RKHS and properties of the functions in an RKHS has been thoroughly investigated, with a good overview provided in <cit.>. This connection is important since an RKHS is generated by its reproducing kernel (see <Ref> for the details), and the latter is user-defined in most applications of RKHSs. By choosing or constructing an appropriate reproducing kernel, tailored function spaces can be created, which can then be used in interpolation, approximation, optimization and related problems.Particularily relevant for many applications, especially in constructive approximation problems, are regularity properties of function spaces. In the case of RKHSs, continuity and differentiability of functions is fully determined by the corresponding reproducing kernel, cf. <cit.>. Furthermore, there is a close connection between certain Sobolev spaces and RKHSs, cf. <cit.>.Another important regularity notion, which is in between mere continuity and differentiability, is Lipschitz continuity, or more generally Hölder continuity. Recall that if (X,d_X) and (Y,d_Y) are two metric spaces, and f: X→Y a function, we call f Lipschitz continuous if there exists L∈ such that for all x,x'∈X we have d_Y(f(x),f(x'))≤ d_X(x,x'). Each such L∈ is called a Lipschitz constant for f, we sometimes we say that f is L-Lipschitz continuous. Similarly, if there exists α∈ and L_α∈ such that for all x,x'∈X we have d_Y(f(x),f(x'))≤ L_α d_X(x,x')^α, then f is called α-Hölder continuous, and each such L_α is called a Hölder constant for f. 
In particular, 1-Hölder continuity is Lipschitz continuity.Lipschitz and Hölder continuity are classic notions that appear prominently for example in the theory of ordinary differential equations <cit.> and partial differential equations <cit.>, respectively. Hölder continuity is also frequently used in the theory of nonparametric statistics <cit.>. Moreover, there is now a considerable and well-developed theory of spaces of Lipschitz continuous functions, cf. <cit.>. Finally, Lipschitz continuity (and to a lesser extent also Hölder continuity) is used as the foundation of practical algorithms. For example, Lipschitz continuity (and a known Lipschitz constant) is a core assumption in many global optimization approaches <cit.>. Lipschitz continuity also forms the basis for many non-stochastic learning algorithms, especially in the context of systems identification <cit.>. Recently, Lipschitz assumptions have also been used successfully in the context of kernel methods, for example, for Bayesian optimization with safety constraints <cit.>, or function approximation and regression problems with bounded noise <cit.>, the latter motivated by the stringent requirements of learning-based robust control, cf. <cit.> and <cit.> for an in depth discussion of this issue.All of this forms a strong motivation to investigate Lipschitz and Hölder continuity in RKHSs. In particular, a central question is how (if at all) the Lipschitz or Hölder continuity of the reproducing kernel of an RKHS influences the corresponding continuity properties of RKHS functions. To the best of our knowledge, there is no systematic investigation into these questions, despite the importance of RKHSs and Lipschitzand Hölder continuity, respectively, and the considerable effort that went into investigating the connection between kernel properties and RKHS function properties.That RKHS functions are always Lipschitz continuous w.r.t. the kernel metric, as reviewed in <Ref>, is well-known. The more interesting question of Lipschitz and Hölder continuity w.r.t. an arbitrary metric seems to have been barely covered in the literature. The only previous work we are aware of that explicitly addressing this question, is <cit.>. In the present work, we are closing this gap in the literature.Outline and contributionsWe provide a comprehensive account on Lipschitz and Hölder continuity in RKHSs. On the one hand, this includes a collection of (the relatively few) known results, and on the other hand a systematic investigation of this issue, including characterization and converse results.In <Ref>, we recall fundamental results on RKHSs and introduce our notation.<Ref> is concerned with Lipschitz continuity w.r.t. the kernel metric induced by the unique reproducing kernel of an RKHS. Most of the results there are known, however, since we are not aware of a systematic exposition thereof, we provide all the details for ease of future reference.In <Ref>, we investigate Hölder and Lipschitz continuity w.r.t. a given metric.First, some preliminary facts regarding bivariate Hölder and Lipschitz continuous functions are provided, some of these seem to have been not noticed before.We then investigate which continuity properties in an RKHS are induced by Hölder continuous kernels. While the principle arguments are contained already in <cit.>, our results are more general and easier to state. Finally, the converse question is tackled: If all RKHS functions fulfill a Hölder continuity property, what does this mean for the reproducing kernel? 
To the best of our knowledge, this problem has not been dealt with before.One key take-away of <Ref> is the fact that a Lipschitz continuous kernel does not directly lead to Lipschitz continuous RKHS functions. Since Lipschitz continuous functions are desirable in many applications, it would be interesting to construct kernels that induce such RKHS functions. <Ref> is concerned with this problem.First, we give a characterization of kernels that induce Hölder continuous RKHS functions, a result which is completely new. Next, we give sufficient conditions in terms of certain integral operators, extending a result from <cit.>. Finally, we give a very general construction based on feature mixtures, vastly generalizing a method from <cit.>.We close in <Ref> with a summary and discussion of our results, as well as an outlook to applications and future research directions.§ PRELIMINARIES AND BACKGROUND We cover the real and complex case simultaneously, using the symbolforor . Unless noted otherwise, X will be a non-empty set. We call κ:X×X→ Hermitian if for all x,x'∈X, we have κ(x,x')=κ(x',x). Note that if κ is Hermitian, then κ(x,x)∈ for all x∈X. If =, then κ is Hermitian if and only if it is symmetric in its two arguments.Let us recall some important definitions and facts about RKHSs, following mostly <cit.>. Consider a function k:X×X→, and let H⊆^X be a Hilbert space of functions on X.We call k a kernel (or -kernel) on X if there exists a -Hilbert spaceand a map :X→ such thatk(x,x') = ⟨Φ(x'), Φ(x)⟩_∀ x,x'∈X.In this case, we calla feature space anda feature map for k.The function k is called positive semidefinite[The terminology is not uniform in the literature. Other common terms are of positive type and positive definite.] if for all N∈ and x_1,…,x_N∈X, the matrix (k(x_j,x_i))_i,j=1,…,N is positive semidefinite in the sense of linear algebra.We call H a reproducing kernel Hilbert space (RKHS), if for all x∈X the evaluation functionals δ_x: H→, f ↦ f(x), are continuous w.r.t. the topology induced by the scalar product of H.The function k is called a reproducing kernel for or of H, if for all x∈X, k(·,x)∈ H, and for all f∈ H, x∈X, it holds that f(x)=⟨ f, k(·,x)⟩_H.Let us recall some basic facts about RKHSs. The function k is a kernel if and only if it is positive semidefinite.The Hilbert space of functions H has a reproducing kernel if and only if H is an RKHS. In this case, the reproducing kernel is unique and a kernel (and hence also positive semidefinite). Furthermore, H is a feature space for k, and Φ_k: X→ H, Φ_k(x)=k(·,x) is a feature map for k, called the canonical feature map of k. Finally, a positive semidefinite k is the reproducing kernel of a uniquely determined Hilbert space of functions, which we denote by (H_k,_k), and the latter is an RKHS. In particular, the terms kernel, reproducing kernel, and positive semidefinite are equivalent in the context of RKHSs.Given a positive semidefinite k and its associated RKHS H_k, define the pre-RKHSk = { k(·,x) | x ∈X} = {∑_n=1^N α_n k(·, x_n) |α_1,…,α_N∈,x_1,…,x_N∈X}.It is well-known that for f,g∈k with representations f= ∑_n=1^N α_n k(·, x_n), g= ∑_m=1^M β_m k(·, y_m),⟨ f, g⟩_k = ∑_n=1^N ∑_m=1^M α_n β_m k(y_m, x_n),and k is dense in H_k.Let k be a kernel on X, and (,) a corresponding feature space-feature map pair, thend_: X×X→,d_(x,x')=(x)-(x')_is a semimetric on X. If (,)=(H_k,Φ_k), we set d_k=d_Φ_k and call this the kernel (semi)metric.The next result is well-known, but rarely explicitly stated.Let k:X×X→ be a kernel on X≠∅. 
Then for all feature space-feature map pairs (ℋ,Φ), we have d_Φ=d_k. When working with d_k, this result allows us to work with d_Φ instead, where Φ is any feature map, and vice versa. Let (ℋ,Φ) be a feature space-feature map pair, and x,x'∈X be arbitrary. We then have d_Φ(x,x') = Φ(x)-Φ(x')_ℋ = √(⟨Φ(x)-Φ(x'), Φ(x)-Φ(x')⟩_ℋ) = √(⟨Φ(x), Φ(x)⟩_ℋ - ⟨Φ(x), Φ(x')⟩_ℋ - ⟨Φ(x'), Φ(x)⟩_ℋ + ⟨Φ(x'), Φ(x')⟩_ℋ) = √(k(x,x) - k(x,x') - k(x',x) + k(x',x')) = √(⟨ k(·,x) - k(·,x'), k(·,x) - k(·,x') ⟩_k) = Φ_k(x) - Φ_k(x')_k = d_k(x,x'), establishing the claim. Since we state several results for bounded kernels or bounded RKHS functions, we recall the following characterization of boundedness in RKHSs. Let X≠∅ be some set and k:X×X→ a kernel on X. The following statements are equivalent. * k is bounded * k_∞ := sup_x∈X√(k(x,x)) < ∞ * There exists a feature space-feature map pair (ℋ,Φ) such that Φ is bounded * For all feature space-feature map pairs (ℋ,Φ), Φ is bounded * All f∈ H_k are bounded If any of the statements is true, then for all feature space-feature map pairs (ℋ,Φ), we have k_∞=sup_x∈XΦ(x)_ℋ, and |f(x)|≤f_k k_∞, for all f∈ H_k and x∈X. Let (ℋ,Φ) be any feature space-feature map pair. For x,x'∈X we have |k(x,x')|=|⟨Φ(x'), Φ(x) ⟩_ℋ| ≤Φ(x')_ℋΦ(x)_ℋ = √(k(x',x'))√(k(x,x)), and the equivalence of the first four items is now clear. The equivalence between the first and last item is provided by <cit.>. Finally, since for any feature space-feature map pair (ℋ,Φ), and all x∈X, we have √(k(x,x))=Φ(x)_ℋ, and for all f∈ H_k we have |f(x)|=|⟨ f, k(·,x)⟩_k|≤f_k √(k(x,x)), the last assertion follows. Finally, we recall the following result on Parseval frames in an RKHS, which corresponds to <cit.>, and is called Papadakis Theorem there. Let X≠∅ be a set and k:X×X→ a kernel on X. * If (f_i)_i∈ I is a Parseval frame in H_k, then for all x,x'∈X k(x,x')=∑_i ∈ I f_i(x) f_i(x'), where the convergence is pointwise. * Consider a family of functions (f_i)_i∈ I, where f_i ∈^X for all i∈ I, such that k(x,x')=∑_i ∈ I f_i(x) f_i(x') for all x,x'∈X, where the convergence is pointwise. Then f_i∈ H_k for all i∈ I, and (f_i)_i∈ I is a Parseval frame in H_k. § LIPSCHITZ CONTINUITY AND THE KERNEL METRIC We just saw that a kernel k on an arbitrary set X≠∅ metrizes this set through the kernel (semi)metric d_k. Note that this holds for any set X, no matter whether it has additional structure on it or not. It is therefore natural to investigate Lipschitz continuity of RKHS functions w.r.t. the kernel metric. We start with the following classic result, which seems to be folklore. Let X≠∅ be some set, k:X×X→ a kernel on X, and d_k the corresponding kernel (semi)metric. For all f∈ H_k, we have that f is Lipschitz continuous w.r.t. d_k with Lipschitz constant f_k. In other words, RKHS functions are always Lipschitz continuous w.r.t. the kernel (semi)metric, and their RKHS norm is a Lipschitz constant. This reinforces the intuition that the RKHS norm is a measure of complexity or smoothness of an RKHS function w.r.t. a kernel: the smaller the RKHS norm, the smaller the Lipschitz bound of an RKHS function w.r.t. the kernel (semi)metric. Let f∈ H_k and x,x'∈X be arbitrary, then |f(x)-f(x')| = |⟨ f, k(·,x) - k(·,x')⟩_k| ≤f_k k(·,x)-k(·,x')_k = f_k d_k(x,x') The next result seems to be less well-known. Parts of it can be found for example in <cit.>. Let X≠∅ be some set, and k:X×X→ a kernel on X. * The function k(·,x)∈ H_k is Lipschitz continuous w.r.t.
d_k with Lipschitz constant √(k(x,x)), for all x∈X.* For all x_1,x_1',x_2,x_2'∈X,|k(x_1,x_2)-k(x_1',x_2')| ≤min{max{√(k(x_2,x_2)), √(k(x_1',x_1'))},max{√(k(x_1,x_1)), √(k(x_2',x_2'))}}(d_k(x_1,x_1')+ d_k(x_2,x_2')). If k is bounded, then it is Lipschitz continuous w.r.t. the product metric on X×X with Lipschitz constant k_∞.* For all x,x'∈X,|k(x,x)-k(x',x')| ≤ 2max{√(k(x,x)), √(k(x',x'))}d_k(x,x').If k is bounded, then x ↦ k(x,x) is Lipschitz continuous w.r.t. d_k with Lipschitz constant 2k_∞.* The function x ↦√(k(x,x)) is Lipschitz continuous w.r.t. d_k and 1 is a Lipschitz constant.* If (,) is any feature space-feature map-pair, then Φ is Lipschitz continuous w.r.t. d_k with Lipschitz constant 1.The first item follows immediately from <Ref> clear since k(·,x)_k=√(k(x,x)).To show the second item, let x_1,x_1',x_2,x_2'∈X, then|k(x_1,x_2)-k(x_1',x_2')|≤ | |k(x_1,x_2)-k(x_1',x_2)| + |k(x_1',x_2) - k(x_1',x_2')|= |k(x_1,x_2)-k(x_1',x_2)| + |k(x_2,x_1') - k(x_2',x_1')| ≤√(k(x_2,x_2))d_k(x_1,x_1') + √(k(x_1',x_1'))d_k(x_2,x_2') ≤max{√(k(x_2,x_2)), √(k(x_1',x_1'))}(d_k(x_1,x_1')+ d_k(x_2,x_2')).Repeating this computation with x_1, x_2' instead of x_2,x_1' establishes the claim.The next item is now an immediate consequence.For the second to last item, let x,x'∈X, then the converse triangle inequality (in H_k) leads to|√(k(x,x))-√(k(x',x'))| = |k(·,x)_k - k(·,x')_k| ≤ k(·,x)-k(·,x') = d_k(x,x'),so x ↦√(k(x,x)) is indeed 1-Lipschitz w.r.t. d_k.The last item is clear.§ LIPSCHITZ AND HÖLDER CONTINUITY ON METRIC SPACES As we recalled in the preceding section, RKHS functions are always Lipschitz continuous w.r.t. the kernel (semi)metric.However, this metric is in general independent of any additional structure on the input set. In particular, if the input set is already a metric space, then this structure is essentially ignored by the kernel (semi)metric.In many applications, we are given a metric space as input set, and we would like to have Lipschitz or Hölder continuity of RKHS functions w.r.t. to the existing metric on the input space. We will now investigate this question in depth.§.§ PreliminariesSince kernels are special bivariate functions, we present some preliminary material on Hölder and Lipschitz continuity of general functions of two variables. Everything in this subsection is elementary and probably known, but we could not locate explicit references, hence we provide all the details.Let (X,d_X) be a metric space and κ:X×X→ some function. Assume that there exist a constant α∈, some function L_α: X→, and for all x∈X a set U_x⊆X with x∈ U_x, such that for all x_1,x_1',x_2,x_2'∈X we have|κ(x_1,x_2)-κ(x_1',x_2')| ≤ L_α(x)(d_X(x_1,x_1')^α + d_X(x_2,x_2')^α).* For all x_2∈X and all x_1,x_1'∈ U_x_2, we have that|κ(x_1,x_2)-κ(x_1',x_2)| ≤ L_α(x)d_X(x_1,x_1')^α. * Assume furthermore that κ is Hermitian. We then have for all x∈X and x'∈ U_x with x∈ U_x' that|κ(x)-κ(x')|≤ (L_α(x)+L_α(x')) d_X(x,x')^α,where we defined κ(x):=κ(x,x).The first claim is trivial. 
For the second, let x∈X and x'∈ U_x be arbitrary, then we have|κ(x)-κ(x')| = |κ(x,x)-κ(x',x')| ≤ |κ(x,x)-κ(x',x)| + |κ(x',x)-κ(x',x')|= |κ(x,x)-κ(x',x)| + |κ(x,x')-κ(x',x')| ≤ (L_α(x) + L_α(x'))d_X(x,x')^α,where we used |κ(x',x)-κ(x',x')|=|κ(x,x')-κ(x',x')|=|κ(x,x')-κ(x',x')| in the second equality.Assume that there exist a constant α∈, some function L_α: X→, and for all x∈X a set U_x⊆X with x∈ U_x, such that for all x_1,x_1'∈X we have|κ(x_1,x)-κ(x_1',x)| ≤ L_α(x)d_X(x_1,x_1')^α.If κ is Hermitian, then we have for all x_1,x_1',x_2,x_2'∈X with x_1,x_1'∈ U_x_2 and x_2,x_2'∈ U_x_1' that|κ(x_1,x_2) - κ(x_1',x_2')| ≤ L_α(x_2) d_X(x_1,x_1')^α + L_α(x_1') d_X(x_2,x_2')^α.Let x_1,x_1',x_2,x_2'∈X such that x_1,x_1'∈ U_x_2 and x_2,x_2'∈ U_x_1', then we get|κ(x_1,x_2) - κ(x_1',x_2')|≤ |κ(x_1,x_2) - κ(x_1',x_2)| + |κ(x_1',x_2) -κ(x_1',x_2')|=|κ(x_1,x_2) - κ(x_1',x_2)| + |κ(x_2,x_1') - κ(x_2',x_1')|= |κ(x_1,x_2) - κ(x_1',x_2)| + |κ(x_2,x_1') - κ(x_2',x_1')| ≤ L_α(x_2) d_X(x_1,x_1')^α + L_α(x_1') d_X(x_2,x_2')^α. We now consider the special case of Lipschitz continuity, corresponding to α=1 in the preceding results.We call κ Lipschitz continuous in the first argument with Lipschitz constant L∈, or L-Lipschitz continuous in the first argument, if for all x_1,x_1',x_2∈X we have|κ(x_1,x_2)-κ(x_1',x_2)| ≤ L d_X(x_1,x_1').Similarly, we define L-Lipschitz-continuity in the second argument. Finally, we call κ separately L-Lipschitz continuous if it is L-Lipschitz continuous in the first and the second coordinate.Let κ be Hermitian, then the following statements are equivalent. * κ is L-Lipschitz continuous (w.r.t. the product metric on X×X)* κ is L-Lipschitz continuous in the first argument* κ is L-Lipschitz continuous in the second argument* κ is separately L-Lipschitz continuousBy definition, if κ is separately L-Lipschitz continuous, it is L-Lipschitz continuous in the first and second argument. Since κ is Hermitian, the equivalence of items 2 and 3 are clear, so any one these two items implies the fourth item. <Ref> shows that item 1 implies item 4. Finally, <Ref> shows that item 2 implies item 1.Since kernels are always Hermitian, <Ref> immediately leads to the following result. Let k: X×X→ be a kernel, and L∈. k is L-Lipschitz continuous if and only if it is separately L-Lipschitz continuous.Why is <Ref> interesting? Let X be a topological space and k a kernel on X. It is well-known that k is continuous if and only if it is separately continuous, i.e., k(·,x) is continuous for all x∈X, and x↦ k(x,x) is continuous, cf. <cit.>. In particular, separate continuity of k is not enough for k to be continuous. For example, there exists a kernel on X=[-1,1] that is bounded and separately continuous, but not continuous, cf. <cit.>. <Ref> asserts that in contrast to continuity, Lipschitz continuity is equivalent to separate Lipschitz continuity for kernels. §.§ RKHS functions of Hölder-continuous kernels We now investigate how Hölder continuity of the kernel induces Hölder continuity of RKHS functions. We start with the following very general result, which covers essentially all potentially relevant forms of Lipschitz and Hölder continuity. It is a generalization of <cit.>. Let (X,d_X) be a metric space and k:X×X→ a kernel. Let α∈ and assume that there exist a function L_α:X→ and for each x∈X a set U_x⊆X with x∈ U_x, such that for all x_1,x_1'∈ U_x we have|k(x_1,x)-k(x_1',x)| ≤ L_α(x)d_X(x_1,x_1')^α.* Let (,) be an arbitrary feature space-feature map-pair for k. 
For all x,x'∈X with x' ∈ U_x we haveΦ(x)-Φ(x')_≤√(2L_α(x))d_X(x,x')^α/2. * For all f∈ H_k and x,x'∈X with x' ∈ U_x we have|f(x)-f(x')| ≤√(2 L_α(x))f_k d_X(x,x')^α/2. Let x,x'∈X with x'∈ U_x be arbitrary. If (,) is a feature space-feature map-pair for k, then we getΦ(x)-Φ(x')_= d_Φ(x,x') = d_k(x,x')= √(k(x,x)+k(x',x')-k(x,x')-k(x',x))≤√(|k(x,x)-k(x',x)| + |k(x,x')-k(x',x')|)≤√(2L_α(x) d_X(x,x')^α),where we used in the last inequality that x'∈ U_x.Let now f∈ H_k, then we have|f(x)-f(x')|≤f_k k(·,x)-k(·,x')_k ≤√(2L_α(x))f_kd_X(x,x')^α/2,where we used that (H_k,Φ_k) is a feature space-feature map-pair for k.For convenience, we record the following special case.Let (X,d_X) be a metric space and k:X×X→ a kernel that is separately L-Lipschitz continuous, then for every f∈ H_k and x,x'∈X we have|f(x)-f(x')| ≤√(2L)√(d_X(x,x')).Consider the situation of <Ref>. * If α∈(0,1), δ∈, U_x=_δ(x) and L_α≡ L_k for some L_k∈, then we recover <cit.>. * If α∈(0,1), U_x=X for all x∈X, L_α≡ L_k for some L_k∈, then we get that for f∈ H_k and x,x'∈X that|f(x)-f(x')| ≤√(2 L_k)f_k d_X(x,x')^α/2We can describe this as "A separately α-Hölder continuous kernel leads to RKHS functions that are α/2-Hölder continuous".§.§ Converse resultsIn <Ref> we saw that every RKHS function f∈ H_k is Lipschitz continuous w.r.t. d_k with Lipschitz constant f_k. Furthermore, in <Ref> results were presented that ensure that RKHS functions are Hölder continuous w.r.t. a given metric on the input set, if the kernel fulfills a certain continuity condition. But what about the converse? Assume we have a Hilbert function space H such that all f∈ H are Lipschitz continuous (or Hölder continous) w.r.t. a given metric and Lipschitz (or Hölder) constant f_H. What can we say about H? And if H is an RKHS, what can we say about the kernel? To the best of our knowledge, these questions have not been addressed so far.In this subsection, let (X,d_X) be a metric space and H ⊆^X a Hilbert space of functions. There exists α∈ such that all f∈ H are α-Hölder continuous with Hölder constant f_H. Suppose <Ref> holds, and that H is an RKHS. Furthermore, let k be the uniquely determined kernel with H_k=H. * For all x∈X, k(·,x)∈ H is α-Hölder continuous with Hölder constant √(k(x,x)). If k is bounded, then k(·,x) isα-Hölder continuous with Hölder constant k_∞, for all x∈X.* For all x_1,x_1',x_2,x_2'∈X,|k(x_1,x_2)-k(x_1',x_2')| ≤min{max{√(k(x_2,x_2)), √(k(x_1',x_1'))},max{√(k(x_1,x_1)), √(k(x_2',x_2'))}, }(d_X(x_1,x_1')^α+ d_X(x_2,x_2')^α). If k is bounded, then |k(x_1,x_2)-k(x_1',x_2')| ≤k_∞ (d_X(x_1,x_1')^α+ d_X(x_2,x_2')^α)for allx_1,x_1',x_2,x_2'∈X.* For all x,x'∈X,d_k(x,x') ≤√(√(k(x,x))+√(k(x',x')))d(x, x')^α/2. If k is bounded, then d_k(x,x') ≤√(2k_∞)d(x, x')^α/2.* If (,) is any feature space-feature map-pair, and k is bounded, thenis α/2-Hölder continuous with Hölder constant √(2k_∞).The first claim follows immediately from <Ref> and the fact that k(·,x)_k=√(k(x,x)) for all x∈X, and the definition of k_∞.Let x_1,x_1',x_2,x_2'∈X be arbitrary. Using <Ref> leads to|k(x_1,x_2)-k(x_1',x_2')|≤√(k(x_2,x_2))d_X(x_1,x_1') + √(k(x_1',x_1'))d_X(x_2,x_2') ≤max{√(k(x_2,x_2)), √(k(x_1',x_1'))},and repeating this computing with x_1,x_2' instead of x_2,x_1' establishes the second assertion. Additionally,d_k(x,x') = √(k(x,x)-k(x,x')-k(x',x)+k(x',x'))≤√(|k(x,x)-k(x',x)| + |k(x,x')-k(x',x')|)≤√(√(k(x,x)) + √(k(x',x'))d_X(x,x')^α),showing the third claim. 
This also establishes the last assertion, since for any feature space-feature map pair (,) and all x,x'∈X we have (x)-(x')_=d_k(x,x'). Assume that all f∈ H are Lipschitz continuous with Lipschitz constant f_H, that H is an RKHS, and that the uniquely determined kernel k with H_k=H is bounded. Then k is Lipschitz continuous with Lipschitz constant k_∞.The following result provides a simple condition for H to be an RKHS, if H fulfills <Ref>. Suppose <Ref> holds, and that there exists x_0∈X such that f(x_0)=0 for all f∈ H. In this case, H is an RKHS. Furthermore, √(k(x,x))≤ d_X(x,x_0) for all x∈X, where k is the uniquely determined reproducing kernel of H. Let x∈X and consider the corresponding evaluation functional δ_x: H →, δ_x f = f(x). We then have for all f∈X that|δ_x f| = |f(x)|=|f(x)-f(x_0)| ≤f_H d_X(x,x_0),which shows that δ_x is continuous, and δ_x≤ d_X(x,x_0). Therefore, H is an RKHS. Let k be its uniquely determined reproducing kernel, then√(k(x,x))=k(·,x)_H = δ_x≤ d(x,x_0),since k(·,x) is the uniquely determined Riesz representer of δ_x in H.Combining <Ref> with <Ref> leads to the following result.Assume that all f∈ H are bounded and Lipschitz continuous with Lispchitz constant f_H. Then H is an RKHS with a bounded and Lipschitz continuous kernel k having Lipschitz constant k_∞. In RKHSs, <Ref> can be relaxed.Let k:X×X→ be a kernel and H_k its RKHS. Let D⊆ H_k be dense, and assume that there exists α∈ such that all f∈ D are α-Hölder continuous w.r.t. d_X with Hölder bound f_k. Then all f∈ H_k are α-Hölder continuous with Hölder bound f_k. Let f∈ H_k and x,x'∈X be arbitrary. Since D is dense in H_k, there exists (f_n)_n∈⊆ D such that f_n → f (in H_k). We then have|f(x)-f(x')| = |⟨ f, k(·,x) - k(·,x')⟩_k|= |⟨lim_n→∞ f_n, k(·,x) - k(·,x')⟩_k|=lim_n→∞ |⟨f_n, k(·,x) - k(·,x')⟩_k|= lim_n→∞ |f_n(x) - f_n(x')| ≤lim_n→∞f_n_k d(x,x')^α = f_k d(x,x')^α. Finally, under an additional assumption on d_X, <Ref> implies the existence of an RKHS on H. The construction is classical, cf. <cit.>, but has not been used in this context before.Suppose that<Ref> holds and that d_X is a Hilbertian metric, i.e., there exists a -Hilbert spaceand a map :X→, such that d_X(x,x')=(x)-(x')_.Define _0 = {(x) | x ∈X}⊆, and for f∈ H set ℓ_f: _0 → by ℓ_f(Φ(x))=f(x).For all f∈ H, ℓ_f as above is a well-defined, linear and continuous map. Let f∈ H be arbitrary. In order to show that ℓ_f is well-defined, let x,x'X such that (x)=(x'). We then have|ℓ_f(Φ(x)) - ℓ_f(Φ(x'))| = |f(x)-f(x')| ≤f_H d_X(x,x')^α= f_H Φ(x)-Φ(x')_^α =0,so ℓ_f(Φ(x))=ℓ_f(Φ(x')), and ℓ_f is indeed well-defined. Linearity and continuity are now clear.Given f∈ H, we can now extend ℓ_f linearly to ℓ̃_̃f̃: _0 →, and the resulting map is still well-defined, linear and continuous. Define now _X=_0^·_, then by construction _0 is dense in _X. This means that for all f∈ H, there exists a unique linear and continuous extension ℓ_f: _X→ of ℓ̃_̃f̃. Note that this means that for all f∈ H, ℓ_f∈_X' (the topological dual of _X). Since _X is itself a Hilbert space (because it is a closed subset of a Hilbert space), for each f∈ H, there exists a unique Riesz representer R(ℓ_f)∈_X. Define for all f_1,f_2∈ Hk(f_1,f_2)=⟨ R(ℓ_f_2), R(ℓ_f_1)⟩__X,then k is a kernel on H with feature space _X and feature map H∋ f ↦ R(ℓ_f) ∈_X. The corresponding RKHS of k is given byH_k = { f ↦ℓ_f h | h ∈_X},cf. <cit.>.§ LIPSCHITZ AND HÖLDER CONTINUITY INDUCING KERNELS Essentially, the results in <Ref> ensure that RKHS functions of α-Hölder continuous kernels are α/2-Hölder continuous. 
In particular, these results do not guarantee that RKHS functions of Lipschitz continuous kernels are themselves Lipschitz continuous. However, for many applications the regularity properties (here Lipschitz and Hölder continuity) of RKHS functions matter most, and a kernel should be chosen that enforces the desired regularity properties for the induced RKHS functions. This motivates the investigation of kernels that induce prescribed Hölder continuity of its RKHS functions.§.§ Series expansionsWe start by characterizing all kernels on a given metric space that have RKHS functions with prescribed Hölder continuity. To the best of our knowledge, this result is new. Let (X,d_X) be a metric space, k a kernel on X, and α∈. The following statements are equivalent. * There exists C∈ such that all f∈ H_k are α-Hölder continuous with Hölder constant Cf_k.* There exists a Parseval frame (f_i)_i∈ I in H_k, such that for all i∈ I, f_i is α-Hölder continuous with Hölder constant L_i∈, and sup_i∈ I L_i < ∞.* There exists a family of functions (f_i)_i∈ I, f_i: X→, such that for all i∈ I, f_i is α-Hölder continuous with Hölder constant L_i∈, and sup_i∈ I L_i < ∞, and for all x,x'∈Xk(x,x') = ∑_i∈ I f_i(x)f_i(x'),where the convergence is pointwise.2 ⇒ 1 Let (f_i)_i∈ I be a Parseval frame in H_k, such that for all i∈ I, f_i is α-Hölder continuous with Hölder constant L_i∈, and sup_i∈ I L_i < ∞. Let f∈ H_k and x,x'∈X be arbitrary, then we have|f(x)-f(x')| = |∑_i∈ I⟨ f, f_i⟩_k f_i(x) - ∑_i∈ I⟨ f, f_i⟩_k f_i(x') |= |∑_i∈ I⟨ f, f_i⟩_k (f_i(x) -f_i(x')) | ≤∑_i∈ I |⟨ f, f_i⟩_k| |f_i(x) - f_i(x')| ≤∑_i∈ I |⟨ f, f_i⟩_k| L_i d_X(x,x')^α≤(∑_i∈ I |⟨ f, f_i⟩_k|) (sup_i∈ I L_i ) d_X(x,x')^α≤√(∑_i∈ I |⟨ f, f_i⟩_k|^2)(sup_i∈ I L_i ) d_X(x,x')^α = f_k (sup_i∈ I L_i ) d_X(x,x')^α.In the first inequality we used that (f_i)_i∈ I is a Parseval frame, and that norm convergence (in H_k) implies pointwise convergence. For the first inequality, we used the triangle inequality, and for the second inequality we used the assumption that f_i is α-Hölder continuous with Hölder constant L_i. In the last inequality, we used ∑_i∈ I |⟨ f, f_i⟩_k| = (⟨ f, f_i⟩_k)_i∈ I_ℓ_1(I)≤(⟨ f, f_i⟩_k)_i∈ I_ℓ_2(I) = √(∑_i∈ I |⟨ f, f_i⟩_k|^2). 2 ⇒ 1 Let (e_i)_i∈ I be an ONB of H_k, so e_i_k=1 for all i∈ I. By assumption, all e_i are α-Hölder continuous with Hölder constant 1, and since an ONB is a Parseval frame, the claim follows.2 ⇒ 3 This implication follows immediately from <Ref>.3 ⇒ 2 Let (f_i)_i∈ I be a family of function as given in the third item. By <Ref>, f_i∈ H_k for all i∈ I, and (f_i)_i∈ I forms a Parseval frame, so this family of functions fulfills the conditions in the second item.Since orthonormal bases (ONBs) are Parseval frames, we get immediately the following result.Let (X,d_X) be a metric space, k a kernel on X, and α∈. The following statements are equivalent. * All f∈ H_k are α-Hölder continuous with Hölder constant f_k.* There exists an ONB (e_i)_i∈ I in H_k such that for all i∈ I, e_i is α-Hölder continuous with Hölder constant 1.* For all ONB (e_i)_i∈ I in H_k, and all i∈ I, e_i is α-Hölder continuous with Hölder constant 1.* For all x,x'∈X,k(x,x') = ∑_i∈ I e_i(x)e_i(x'),where the convergence is pointwise, and (e_i)_i∈ I is an ONB (e_i)_i∈ I in H_k such that for all i∈ I, e_i is α-Hölder-continuous with Hölder constant 1.§.§ Ranges of integral operatorsIt is well-known that there is a close connection between the theory of RKHSs and integral operators. 
For example, for RKHSs defined on measure spaces and under suitable technical assumptions, Mercer's theorem allows a spectral decomposition of the reproducing kernel, and an explicit description of the RKHS in terms of eigenfunctions of a related integral operator. For details, we refer to <cit.>. Moreover, integral operators defined using the reproducing kernel of an RKHS can have ranges contained in the RKHS under suitable assumptions, cf. <cit.>. This motivates the study of Hölder continuity properties for functions in the image set of integral operators.A general result Before embarking on this task, we present a result for rather general integral maps. It is essentially a direct generalization of <cit.>. Let (Y,A,μ) be a measure space, (X,d_X) a metric space, 1 < p,q < ∞ with 1/p+1/q=1, and k:X×Y→ a function such that the following holds. * For all x∈X, the function k(x,·) is measurable.* For all g ∈ L^q(Y, A,μ,) and all x∈X, k(x,·)· g ∈ L^1(Y, A,μ,).* There exists α∈, L_α∈^p(Y, A,μ,), such that for μ-almost all y∈Y, the function k(·,y) is α-Hölder continuous with Hölder constant L_α(y).In this case,S_k:L^q(Y, A,μ,) →^X,(S_k g)(x) = ∫_Y k(x,y)g(y)dμ(y)is a well-defined linear mapping, and for all g∈ L^q(Y, A,μ,), the function f=S_k g is α-Hölder continuous with Hölder constant L_α_^pg_L^q. Since for all g ∈ L^q(Y, A,μ,) and all x∈X the function k(x,·)g∈L^1(Y, A,μ,), the mapping S_k is well-defined. The linearity is now clear.Let g ∈ L^q(Y, A,μ,), define f=S_k g, and let x,x'∈X be arbitrary, then|f(x)-f(x')| = | ∫_Y (k(x,y) - k(x',y))g(y)dμ(y)| ≤∫_Y |k(x,y) - k(x',y)| |g(y)| dμ(y) ≤∫_Y L_α(y) |g(y)| dμ(y)d_X(x,x') ≤L_α_^pg_L^q d_X(x,x'),so f is indeed α-Hölder continuous with Hölder constant L_α_^pg_L^q.Example To illustrate <Ref>, we consider the rather general class of integral operators described in <cit.>. Let (X,A_X,μ) and (Y,A_Y,ν) be measure spaces, 1 < p,q < ∞ with 1/p+1/q=1, and k: X×Y→ be measurable. Assume that for all g∈ L^q(Y,A_Y,ν) and μ-almost all x∈X, k(x,·)g∈ L^1(Y,A_Y,ν), and that by defining (μ-almost all) x∈X(T_k g)(x) = ∫_Y k(x,y)g(y)dν(y)we get T_kg ∈ L^p(X,A_X,μ). Under these conditions, T_k: L^q(Y,A_Y,ν) →L^p(X,A_X,μ) is a well-defined, linear and bounded operator.Assume furthermore that (X,d_X) is a metric space, and that there exists α∈ andL_α∈^p(Y, A,μ,), such that for μ-almost all y∈Y, the function k(·,y) is α-Hölder continuous with Hölder constant L_α(y).Let g∈L^q(Y,A_Y,ν), then there exists a μ-nullset N_g such that (setting for brevity X_g=X∖N_g) f: X_g →, f(x)=(T_k g)(x) is well-defined. <Ref> now ensures that f is α-Hölder continuous with Hölder constant L_α_^pg_L^q, though f is only defined on the restricted metric space (X_g, d_X|_X_g ×X_g).In particular, each element[Recall that this is an equivalence class of functions on X.] of the image set of T_k contains a μ-almost everywhere defined function that is α-Hölder continuous.We can strengthen this result. Let A_X be the Borel σ-algebra on X, and assume that μ(U)>0 for all open nonempty U⊆X. In this case, X_g is dense in X, since otherwise N_g contains a nonempty open set U, and hence μ(N_g) ≥μ(U) >0, a contradiction to the fact that N_g is a μ-nullset. Since f is defined on a dense subsetset of X, and it is continuous (since it is α-Hölder continuous on X_g), there exists a unique extension f̅: X→ that is also α-Hölder continuous. 
Defining T̅_k g := f̅, we thus arrived at a linear operator from L^q(Y,A_Y,ν) into (X,A_X,μ) with its range space consisting of α-Hölder continuous functions. Integral operators into RKHSs Let us return to the setting of RKHSs. If an RKHS is defined on a measure space, and the kernel fulfills an integrability condition, then the RKHS consists of integrable functions, and the kernel allows the definition of a related integral operator with range contained in the RKHS. The next result provides a sufficient condition for Hölder continuity of RKHS functions in the range of this integral operator. Let (X,d_X) be a metric space, (X,A,μ) a σ-finite measure space,[A can, but does not have to be the Borel σ-algebra on the metric space X.] 1<p,q<∞ with 1/p+1/q=1, and k:X×X→ a measurable kernel such that H_k is separable and k_L^p = (∫ (k(x,x))^p/2 dμ(x))^1/p < ∞. Assume that there exist α∈, L_α∈^p(X,A,μ,) such that for μ-almost all x∈X the function k(·,x) is α-Hölder continuous with Hölder constant L_α(x). Under these conditions, S_k: L^q(X,A,μ,) → H_k, (S_k g)(x) = ∫_X k(x,x')g(x')dμ(x') is a well-defined, bounded linear operator, and for all g∈ L^q(X,A,μ,), the function f=S_k g ∈ H_k is α-Hölder continuous with Hölder constant L_α_^p g_L^q. Finally, all functions in H_k are p-integrable,[This means that for all f∈ H_k, ∫_X |f(x)|^p dμ(x) <∞.] and if the inclusion : H_k → L^p(X,A,μ,) is injective, then the image of S_k is dense in H_k. That S_k is well-defined, linear and bounded, follows from <cit.>. The statement on the Hölder continuity of the functions in the images of S_k is a direct consequence of <Ref>. The last claim follows again from <cit.>. §.§ Feature mixture kernels <Ref> characterizes Hölder continuity inducing kernels via series expansion. However, these might be difficult to work with, so an alternative description of such kernels can be useful. The next result presents a very general construction which is based on a mixture of feature maps. It vastly generalizes a method apparently introduced in <cit.>. Let (Ω,A) be a measurable space, μ a finite nonnegative measure on (Ω,A), (X,d_X) a metric space, and ℋ a Hilbert space. Furthermore, let Φ(x,·)∈^2(Ω,A,μ,ℋ) for all x∈X. Finally, assume that there exist α, L_Φ∈ such that for μ-almost all ω∈Ω, Φ(·,ω) is α-Hölder continuous with Hölder constant L_Φ. Then k(x,x') = ∫_Ω⟨Φ(x',ω), Φ(x,ω) ⟩_ℋ dμ(ω) is a well-defined kernel on X, and all f∈ H_k are α-Hölder continuous with Hölder constant L_Φ√(μ(Ω)) f_k. First, we show that k is well-defined. Let x,x'∈X, then Φ(x,·)_ℋ, Φ(x',·)_ℋ are square-integrable, so we get ∫_Ω | ⟨Φ(x',ω), Φ(x,ω) ⟩_ℋ| dμ(ω) ≤∫_ΩΦ(x,ω)_ℋΦ(x',ω)_ℋ dμ(ω) ≤( ∫_ΩΦ(x,ω)_ℋ^2 dμ(ω))^1/2( ∫_ΩΦ(x',ω)_ℋ^2 dμ(ω))^1/2 < ∞, where we used Cauchy-Schwarz first in ℋ, then in ^2. Next, we show that k is a kernel by verifying that it is positive semidefinite. Let x_1,…,x_N∈X and c_1,…,c_N∈ be arbitrary, then ∑_i,j=1^N c_i c_j k(x_j,x_i) = ∫_Ω∑_i,j=1^N c_i c_j⟨Φ(x_j,ω), Φ(x_i,ω) ⟩_ℋ dμ(ω) = ∫_Ω⟨∑_i=1^N c_i Φ(x_i,ω), ∑_j=1^N c_j Φ(x_j,ω)⟩_ℋ dμ(ω) = ∫_Ω∑_i=1^N c_i Φ(x_i,ω)_ℋ^2 dμ(ω) ≥ 0, so k is indeed positive semidefinite. Finally, let f∈ H_k and x,x'∈X be arbitrary, then |f(x)-f(x')| ≤f_k d_k(x,x'). Observe now that d_k(x,x')^2 = k(x,x) - k(x,x') - k(x',x) + k(x',x') = ∫_Ω⟨Φ(x,ω), Φ(x,ω)⟩_ℋ - ⟨Φ(x,ω), Φ(x',ω)⟩_ℋ - ⟨Φ(x',ω), Φ(x,ω)⟩_ℋ + ⟨Φ(x',ω), Φ(x',ω)⟩_ℋ dμ(ω) = ∫_Ω⟨Φ(x,ω)-Φ(x',ω), Φ(x,ω)-Φ(x',ω)⟩_ℋ dμ(ω) = ∫_ΩΦ(x,ω)-Φ(x',ω)_ℋ^2 dμ(ω) ≤∫_Ω L_Φ^2 d_X(x,x')^2α dμ(ω) = L_Φ^2 μ(Ω) d_X(x,x')^2α, so we get |f(x)-f(x')| ≤f_k d_k(x,x') ≤ L_Φ√(μ(Ω)) f_k d_X(x,x')^α.
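As a concrete numerical illustration of the feature-mixture construction above, the following self-contained sketch (our own, not taken from the text) builds a kernel from clipped random Fourier features and checks the resulting Lipschitz bound on a sample RKHS function. The feature map Φ(x,ω) = cos(w x + b), the sampling distribution, the clipping threshold W and the test points are all illustrative assumptions; μ is the empirical probability measure over the sampled frequencies, so μ(Ω) = 1 and α = 1.

import numpy as np

rng = np.random.default_rng(0)

# Feature mixture Phi(x, omega) = cos(w*x + b), with |w| <= W, so that each
# Phi(., omega) is Lipschitz (alpha = 1) with uniform constant L_Phi = W.
W, n_feat = 2.0, 4000
w = np.clip(rng.normal(0.0, 1.0, n_feat), -W, W)
b = rng.uniform(0.0, 2.0 * np.pi, n_feat)

def features(x):
    # x: shape (m,) -> feature matrix of shape (m, n_feat)
    return np.cos(np.outer(x, w) + b)

def kernel(x, y):
    # k(x, y) = mean over the sampled omega of Phi(x, omega) * Phi(y, omega),
    # i.e. the feature-mixture kernel for the empirical measure mu, mu(Omega) = 1.
    return features(x) @ features(y).T / n_feat

# An RKHS function f = sum_i c_i k(., x_i), with ||f||_k^2 = c^T K c.
centers = np.array([-1.0, 0.2, 0.9])
c = np.array([1.0, -2.0, 0.5])
K = kernel(centers, centers)
f_norm = np.sqrt(c @ K @ c)

xs = np.linspace(-2.0, 2.0, 401)
f_vals = kernel(xs, centers) @ c

# Check |f(x) - f(x')| <= L_Phi * sqrt(mu(Omega)) * ||f||_k * |x - x'| on the grid.
lhs = np.abs(np.diff(f_vals))
rhs = W * 1.0 * f_norm * np.diff(xs)
print("Lipschitz bound holds on the grid:", bool(np.all(lhs <= rhs + 1e-12)))

Because the empirical measure over the sampled frequencies is itself a finite nonnegative measure, the result above applies exactly to this sampled kernel, so the final check should report True up to floating-point error.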
If the nonnegative measure in the preceding result is a probability measure, we get the following result as a special case.Let (X,d_X) be a metric space,a -Hilbert space, and ((x))_x∈X a family of square-integrable -valued random variables. Assume that there exist α,L_∈ such that Φ is almost surely α-Hölder continuous with Hölder constant L_. Thenk(x,x') = [⟨(x'), (x)⟩_]is a well-defined kernel on X, and all f∈ H_k are α-Hölder continuous with Hölder constant L_f_k.The importance of this result is the fact that the kernel k described there is a random feature kernel in the sense of <cit.>. In particular, in practice k(x,x') can be approximated by sampling from the random variables Φ(x),Φ(x').Finally, we can formulate another special case, which recovers the approach from <cit.>.Let(X,d_X) be a metric space, P a Borel probability measure on X, φ: → an α-Hölder-continuous function with Hölder-constant L_φ, and define ϕ: X×X→ by ϕ(x,z)=φ(d_X(x,z)). If ϕ(x,·) ∈^2(X,P) for all x∈X, thenk(x,x') = ∫_Xϕ(x',z) ϕ(x,z)dP(z)is a well-defined kernel on X, and all f∈ H_k are α-Hölder continuous with Hölder constant L_φf_k. We show that for all z∈X, the function ϕ(·,z) is α-Hölder continuous with Hölder constant L_φ. For this, let x,x'∈X be arbitrary, then|ϕ(x,z)-ϕ(x',z)| = |φ(d_X(x,z)) - φ(d_X(x',z))| ≤ L_φ |d_X(x,z)^α - d_X(x',z)^α| ≤ L_φ d_X(x,x')^α,where we used the inverse triangle inequality for the metric (x,x')↦ d_X(x,x')^α in the last step.The result follows now from <Ref> by choosing Ω=X, μ=P, =, and =ϕ, and the fact that P(X)=1. § CONCLUSION We presented a comprehensive discussion of Lipschitz and Hölder continuity of RKHS functions. Starting with the well-known Lipschitz continuity w.r.t. the kernel (semi)metric, we then investigated Hölder-continuity w.r.t. a given metric, including converse results, i.e., consequences of Hölder continuity in function spaces related to RKHSs. Finally, we provided characterizations as well as sufficient conditions for kernels inducing prescribed Lipschitz and Hölder continuity of their RKHS functions w.r.t. a given metric, an important aspect for applications.The results presented here can be used to construct tailored kernels ensuring Lipschitz or Hölder continuous RKHS functions, or to check that existing kernels have such RKHS functions. Furthermore, because the results are quantitative, they can be used in numerical methods. In particular, we are currently investigating their application in methods like <cit.> and <cit.>.Finally, we would like to point out three interesting questions for future work. First, the Lipschitz and Hölder continuity in RKHS that we have been concerned with here, are of a strong uniform nature, since the corresponding Lipschitz or Hölder constants are proportional to the RKHS function of the respective function, cf. the developments in <Ref>. It would be interesting to investigate whether there exist kernels that enforce weaker, nonuniform Lipschitz or continuity properties.Second, we investigated sufficient conditions for Lipschitz and Hölder continuity of RKHS functions via integral operators. However, all statements are restricted to the range space of the involved integral operators. Under some conditions, these range spaces are dense in RKHSs, so it would be interesting to investigate whether the Lipschitz and Hölder continuity properties transfers to the whole RKHS. 
Note that this is not trivial since the Hölder constant in <Ref> involves the L^q-norm of the preimage function, not the RKHS norm of the image function. Finally, the results in <Ref> provide Lipschitz or Hölder constants involving the RKHS norm. However, it is unclear how conservative these results are, i.e., how much larger the Lipschitz or Hölder constants are compared to the best possible constants. Intuitively, it is clear that for generic RKHS functions there will be some conservatism. It would be interesting to investigate how large this conservatism is, and how it depends on properties of the kernel. | http://arxiv.org/abs/2310.18078v1 | {
"authors": [
"Christian Fiedler"
],
"categories": [
"math.FA",
"cs.LG",
"46E22 (Primary), 51F30, 47B34, 47G10 (Secondary)"
],
"primary_category": "math.FA",
"published": "20231027115643",
"title": "Lipschitz and Hölder Continuity in Reproducing Kernel Hilbert Spaces"
} |
Efficient Parallelization of a Ubiquitous Sequential Computation Franz A. Heinsen October 2023 ========================================================== We find a succinct expression for computing the sequence x_t = a_t x_t-1 + b_t in parallel with two prefix sums, given t = (1, 2, …, n), a_t ∈ℝ^n, b_t ∈ℝ^n, and initial value x_0 ∈ℝ. On n parallel processors, the computation of n elements incurs O(log n) time and O(n) space. Sequences of this form are ubiquitous in science and engineering, making efficient parallelization useful for a vast number of applications. We implement our expression in software, test it on parallel hardware, and verify that it executes faster than sequential computation by a factor of n/log n.[Source code for replicating our results is available online at https://github.com/glassroom/heinsen_sequence.] § SUMMARY Sequences of the form x_t = a_t x_t-1 + b_t are ubiquitous in science and engineering. For example, in the natural sciences, such sequences can model quantities or populations that decay or grow by a varying rate a_t > 0 between net inflows or outflows b_t at each time step t. In economics, such sequences can model investments that earn a different rate of return a_t = (1 + r_t) between net deposits or withdrawals b_t over each time period t. In engineering applications, such sequences are often low-level components of larger models, e.g., linearized recurrent neural networks whose layers decay token features in a sequence of tokens. Given a finite sequence x_t = a_t x_t-1 + b_t with n steps, t = (1, 2, …, n), where a_t ∈ℝ^n, b_t ∈ℝ^n, and initial value x_0 ∈ℝ, it's not immediately obvious how one would compute all elements in parallel, because each element is a non-associative transformation of the previous one. In practice, we routinely see software code that computes sequences of this form one element at a time. The vector log x_t is computable as a composition of two cumulative, or prefix, sums, each of which is parallelizable: log x_t = A_t + log( x_0 + B_t ), where A_t and B_t are the two prefix sums: A_t = ∑_t log a_t and B_t = ∑_t e^(log b_t - A_t). The operator ∑ computes a vector whose elements are a prefix sum, i.e., a cumulative sum. We obtain x_t with elementwise exponentiation: x_t = e^(A_t + log (x_0 + B_t)). Prefix sums are associative,[Given a sequence a, b, c, ∑( a, ∑( b, c ) ) = ∑( ∑( a, b ), c ).] making it possible to compute them by parts in parallel. Well-known parallel algorithms for efficiently computing the prefix sum of a sequence with n elements incur O(log n) time and O(n) space on n parallel processors <cit.> <cit.>. Prefix sums generalize to any binary operation that is associative, making them a useful primitive for many applications <cit.> and data-parallel models of computation <cit.>. Many software frameworks for numerical computing provide efficient parallel implementations of the prefix sum. The computation of two prefix sums has the same computational complexity on n parallel processors as a single prefix sum: O(log n) time and O(n) space.
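To make the two-prefix-sum recipe concrete before turning to the proof and implementation details below, here is a minimal sketch in PyTorch. It is our own illustration, not the authors' reference code (which is available at the repository linked above); the library choice and function names are assumptions, and it assumes a_t, b_t and x_0 are all positive so that every logarithm stays real.

import torch

def parallel_linear_recurrence(log_a, log_b, log_x0):
    # Computes x_t = a_t * x_{t-1} + b_t for t = 1..n using two prefix sums.
    # log_a, log_b: shape (n,); log_x0: shape (1,). Assumes a_t, b_t, x_0 > 0.
    a_star = torch.cumsum(log_a, dim=-1)                       # A_t
    log_x0_plus_b_star = torch.logcumsumexp(
        torch.cat([log_x0, log_b - a_star]), dim=-1)           # log(x_0 + B_t), length n + 1
    return torch.exp(a_star + log_x0_plus_b_star[1:])          # drop the t = 0 entry

def sequential_linear_recurrence(a, b, x0):
    # Naive one-element-at-a-time reference, for checking the result.
    xs, x = [], x0
    for a_t, b_t in zip(a.tolist(), b.tolist()):
        x = a_t * x + b_t
        xs.append(x)
    return torch.tensor(xs, dtype=torch.float64)

a = torch.rand(8, dtype=torch.float64) + 0.5
b = torch.rand(8, dtype=torch.float64) + 0.5
x0 = torch.rand(1, dtype=torch.float64) + 0.5
assert torch.allclose(parallel_linear_recurrence(a.log(), b.log(), x0.log()),
                      sequential_linear_recurrence(a, b, x0.item()), atol=1e-8)

The two cumulative calls (cumsum and logcumsumexp) are exactly the two prefix sums of the expression above; everything else is elementwise.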
The computation of n elementwise operations (e.g., logarithms and exponentials) on n parallel processors incurs constant time and, if the computation is in situ, no additional space. If any a_t < 0, any b_t < 0, or x_0 < 0, one or more of the logarithms computed in the interim will be in ℂ, but all elements of x_t will always be in ℝ, because they are defined as multiplications and additions of previous elements in ℝ, which is closed under both operations. § COMPARED TO BLELLOCH'S FORMULATION Blelloch's formulation for computing first-order linear recurrences as a composition of prefix sums <cit.> is more general, expressed in terms of a binary operator ⊕ that is associative and a second binary operator ⊗ that either is associative or can be transformed into an associative one via the application of a third binary operator. Our formulation applies only to the most common case, real numbers, with scalar sum and multiplication as the first and second operators, making each step non-associative. We find a succinct, numerically stable expression that is readily implementable with widely available, highly-optimized implementations of the prefix sum. § PROOF We are computing x_t = a_t x_t-1 + b_t, for t = (1, 2, …, n), with a_t ∈ℝ^n, b_t ∈ℝ^n, and initial value x_0 ∈ℝ. Expand the expression that computes each element, x_1, x_2, …, x_n, to make it a function of x_0 and all trailing elements of a_t and b_t, and factor out all trailing coefficients: x_1 = a_1 x_0 + b_1 = a_1 ( x_0 + b_1/a_1 ), x_2 = a_2 x_1 + b_2 = a_1 a_2 ( x_0 + b_1/a_1 + b_2/(a_1 a_2) ), ⋮ x_n = a_n x_n-1 + b_n = ( ∏_t a_t ) ( x_0 + b_1/a_1 + b_2/(a_1 a_2) + … + b_n/∏_t a_t ). Combine all expressions in (<ref>) into one expression that computes all elements of vector x_t: x_t = ( ∏_t a_t ) ⊙( x_0 + ∑_t b_t/∏_t a_t ) = ( ∏_t a_t ) ⊙( x_0 + ∑_t exp( log( b_t/∏_t a_t ) ) ) = ( ∏_t a_t ) ⊙( x_0 + ∑_t e^(log b_t - ∑_t log a_t) ), where the operators ∏ and ∑ compute vectors whose elements are, respectively, a cumulative product and sum, and ⊙ denotes an elementwise or Hadamard product. Taking the logarithm on both sides, we obtain: log x_t = ∑_t log a_t + log( x_0 + ∑_t e^(log b_t - ∑_t log a_t) ), in which the first term is A_t and the inner prefix sum is B_t, so this is the same as (<ref>). § IMPLEMENTATION We implement (<ref>) in software. For numerical stability and slightly improved efficiency, we modify the computation of x_t as follows: x_t = e^(A_t + rest( logcumsumexp( cat( log x_0, log b_t - A_t ) ) )), where cat(·,·) denotes concatenation, rest(·) removes its argument's first element, and logcumsumexp(·) := log∑exp(·), commonly provided as the “LogCumSumExp” function by software frameworks for numerical computing, applying the familiar log-sum-exp trick as necessary for numerical stability, and delegating parallel computation of the internal prefix sum to a highly-optimized implementation. We test our implementation on parallel hardware and verify that it executes faster than sequential computation by a factor of n/log n (Figure <ref>). | http://arxiv.org/abs/2311.06281v4 | {
"authors": [
"Franz A. Heinsen"
],
"categories": [
"cs.DS",
"cs.LG"
],
"primary_category": "cs.DS",
"published": "20231027215855",
"title": "Efficient Parallelization of a Ubiquitous Sequential Computation"
} |
Evans et al. C. EvansEuropean Space Agency (ESA), ESA Office, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA [email protected]. Marcolino Universidade Federal do Rio de Janeiro, Observatório do Valongo,Ladeira Pedro Antônio, 43, CEP 20.080-090, Rio de Janeiro, BrazilJ.-C. Bouret Aix-Marseille Univ, CNRS, CNES, LAM, Marseille, FranceM. Garcia Centro de Astrobiología, CSIC-INTA, Crtra. de Torrejón a Ajalvir km 4, E-28850, Torrejón de Ardoz (Madrid), Spain A near-UV reconnaissance of metal-poor massive starsChris Evans Wagner MarcolinoJean-Claude Bouret Miriam GarciaAccepted: 8 August 2023 ============================================================================= We use synthetic model spectra to investigate the potential of near-ultraviolet (3000-4050 Å) observations of massive O-type stars. We highlight the He I 3188 and He II 3203 pair as a potential temperature diagnostic in this range, supported by estimates of gravity using the high Balmer series lines. The near-ultraviolet also contains important metallic lines for determinations of chemical abundances (oxygen in particular) and estimates of projected rotational velocities for O-type spectra. Using the model spectra we present performance estimates for observations of extragalactic massive stars with the Cassegrain U-Band Efficient Spectrograph (CUBES) now in construction for the Very Large Telescope. The high efficiency of CUBES will open-up exciting new possibilities in the study of massive stars in external galaxies. For instance, CUBES will provide new insights into the physical properties of O-type stars, including oxygen abundances, in metal-poor irregular galaxies at ∼1 Mpc from integrations of just 2-3 hrs. Moreover, CUBES will bring quantitative spectroscopy of more distant targets within reach for the first time, such as the O-type star (V ∼ 21.5 mag) in Leo P (at 1.6 Mpc) in only half a night of observations. § INTRODUCTIONGround-based spectroscopy of OB-type stars has traditionally been obtained in the 3950–4750Å range for spectral classification (e.g. <cit.>) and estimates of physical parameters (temperatures, gravities), and of the Hα line to investigate their stellar winds (e.g. <cit.>). In contrast, the shorter wavelengths accessible from the ground, down to the atmospheric cut-off, have received significantly less attention, partly due to the challenges of the reduced atmospheric transmission combined with the limited efficiency of available instrumentation.Development of the Cassegrain U-Band Efficient Spectrograph (CUBES) instrument for the Very Large Telescope (VLT) provided us with the motivation to investigate the potential of observations of massive stars in the near ultraviolet (UV).The CUBES design offers a potential tenfold gain in end-to-end efficiency at< 3400 Å (incl. the telescope and atmosphere) compared to the existing Ultraviolet and Visible Echelle Spectrograph (UVES). In brief, the CUBES design provides a spectral resolving power of R ≥ 20,000 over the 3000-4050 Å range, with provision of a second, lower-resolution option with R ∼ 7,000 <cit.>.Here we investigate the potential performance of CUBES for studies of massive stars. In Sect. 2 we briefly review past studies of massive stars shortwards of 4000 Å.Motivated by the presence of a relatively strong He line (3203) and a range of He lines, in Sect. 3 we use synthetic model spectra to qualitatively investigate the sensitivity of near-UV lines to the physical parameters of massive stars. In Sect. 
4 we focus on the possibility of estimating oxygen abundances in massive stars from near-UV observations. In Sect. 5 we investigate the potential performance of CUBES to study massive stars in Local Group galaxies and beyond, with concluding remarks given in Sect. 6.§ NEAR-ULTRAVIOLET SPECTROSCOPY OF MASSIVE STARS An early ground-based study of massive stars at< 4000 Å used photographic observations of ϵ Ori (B0 Ia), which included a detailed linelist that extended as far bluewards as 3550 Å <cit.>.A first quantitative investigation of the He 3188 and He 3203 lines employed photographic observations from the 2.2-m telescope on Mauna Kea of 19 O- and B0.5-type stars <cit.>. This included equivalent width-measurements of both lines in the sample of stars, and comparisons with the predictions of non-LTE model atmospheres for He 3889 and He 3203, finding generally good agreement for the trend of the He lines and the values for the He line (except for the two hottest stars). Given the strong response to temperature of both lines, this was a first indication of the potential of these two lines to be used in tandem as a temperature diagnostic. The only other example known to us of observations in this region prior to the use of digital detectors is observations of the He 3188 line in ∼30 early-type stars with the Copernicus satellite <cit.>.Digital detectors transformed observational astronomy, but their performance in the near UV has still been a limiting factor compared to longer wavelengths, and so there remain relatively few studies of massive stars in this region.Observations covering 3250-4750 Å of massive O-type and Wolf–Rayet stars in NGC 3603 with the Faint Object Spectrograph on the Hubble Space Telescope (HST) revealed some of the key features in the near UV <cit.>. Shortwards of 4000 Å, the O-type spectra are dominated by the high Balmer series until the Balmer limit, with a He line at 3820. Going to even shorter wavelengths, in the hottest (O3-type) stars in the HST observations, absorption from O 3381-85, 3412 and N 3479-83-85 are also seen, with the N blend displaying a strong P Cygni profile in the hydrogen-rich WN-type spectra. Examples of some of the weak He lines present in the 3800-4000 Å range can also be seen in slightly higher-resolution (∼2 Å) spectroscopy of four late O-type supergiants (from <cit.>).To illustrate some of the spectral lines present in this range, in Fig. <ref> we show the far-blue UVES spectrum of HDE 269896 (taken from <cit.>), smoothed and rebinned to the high-resolution mode of CUBES (Δ = 0.14 Å, sampled by 2.3 pixels, so 0.06 Å/pixel). The region below ∼3400 Å in the UVES data was of limited use because of low signal-to-noise (S/N) in the shortest-wavelength echelle orders, but at longer wavelengths the data give a good example of some of the features present in massive stars in the CUBES range. In practice, degrading the spectrum to the R ∼ 7,000 of the low-resolution mode has little qualitative impact on the final spectrum shown in Fig. <ref> given the (astronomical) broadening of the lines.Classified as ON9.7 Ia+ (and with log(L/L_⊙) ≈ 6) the UVES data of HDE 269896 reveal the high Balmer series and a plethora of He lines, together with weak emission lines from Si (3487, 3590, 3807), Si (3762 and 3773) and Al (3602) produced in its strong stellar wind (see <cit.>). 
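For readers who wish to reproduce this kind of comparison, the following short sketch degrades a high-resolution spectrum to an approximately CUBES-like sampling, using the Δλ = 0.14 Å FWHM and 0.06 Å/pixel values quoted above. It is not the pipeline used here: the Gaussian line-spread function, the assumption of a uniform input wavelength grid much finer than the target resolution element, and the simple interpolation-based rebinning (rather than an exact flux-conserving rebinning) are simplifying assumptions on our part, adequate only for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_spectrum(wave, flux, fwhm_out=0.14, dx_out=0.06):
    # Smooth to a target FWHM (Angstrom) and rebin onto a coarser uniform grid
    # (Angstrom per pixel). Assumes uniform input sampling much finer than fwhm_out.
    dx_in = np.median(np.diff(wave))
    sigma_pix = (fwhm_out / 2.3548) / dx_in          # FWHM -> Gaussian sigma, in input pixels
    smoothed = gaussian_filter1d(flux, sigma_pix)
    new_wave = np.arange(wave[0], wave[-1], dx_out)
    new_flux = np.interp(new_wave, wave, smoothed)   # simple (non flux-conserving) rebin
    return new_wave, new_flux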
The presence of lines from two ionisation stages of silicon offers a potential temperature diagnostic in the CUBES region, albeit very dependent on the adopted stellar parameters (including the wind) when the lines are in emission.To illustrate the ability of sophisticated model atmospheres and spectral synthesis to reproduce this region in O-type spectra (including the higher-order members of the Balmer series), in red in Fig. <ref> we overplot the adopted cmfgen model (see <cit.>) for this star from <cit.>.The adopted parameters were an effective temperature, T_ eff = 27,500 K, logg = 2.7, and a rotational velocity of vsini = 70(see <cit.> for further details of the model parameters, which necessitated a detailed treatment of the wind in this luminous supergiant star).§ DIAGNOSTIC LINES IN THE NEAR UVTo investigate the sensitivity of different spectral lines in the near UV to physical parameters such as temperature and gravity, we used synthetic spectra from the extensive OSTAR2002 grid[http://tlusty.oca.eu/Tlusty2002/tlusty-frames-OS02.html in which abundances in the OSTAR2002 grid are scaled relative to solar values from <cit.>.]of line-blanketed, non-LTE, plane-parallel, hydrostatic model atmospheres calculated with the tlusty code <cit.>. §.§ LMC metallicity: Temperature trends The metallicity of the Large Magellanic Cloud (LMC) is approximately half solar and we have a good understanding of stellar properties from analysis of data at other wavelengths (e.g. <cit.>). We therefore first considered the 0.5 Z_⊙ models from the OSTAR2002 grid as example sub-solar models with which to investigate the CUBES domain.We initially considered models with logg = 4.0, that are typical of the gravities estimated for dwarfs <cit.>. The tlusty grid has models from T_ eff = 27,500 to 55,000 K (in steps of2,500 K). Using the effective temperature – spectral type relations derivedfrom analysis of O- and B-type dwarfs in the LMC <cit.>,we can map the tlusty models to approximate spectral types. The near-UV and visible ranges are shown for eight model spectra in Figs. <ref> and <ref>, respectively.Each model spectrum in the figures has been convolved with a rotational-broadening function of vsini = 100to mimic real targets compared to the unbroadened models. Note that vsini of order 100then dominates the broadening of the line profiles, such that convolving the models to either the high- or low-resolution modes of CUBES has a limited impact compared to the spectra shown in the figure.Our motivations to show the visible region of the models is twofold. Firstly, to investigate if there are near-UV lines in the same models that can then be used to delineate similar trends as in the temperature (spectral type) sequence in Fig. <ref>.Secondly, the models shown illustrate some of the key classification criteria in O- and early B-type stars (see <cit.>). For instance, the weakening of the He lines when moving to hotter temperatures (earlier spectral types), such that the He 4471 line is very weak by O3 (T_ eff ≈ 45,000 K), and the weakening of the He lines as we move to cooler temperatures (later spectral types), such that the Si triplet is stronger than He 4542 by B0. The T_ eff = 40,000 K model is also a useful classification anchor point in terms of the He /He ratios for the 4026/4200 and 4388/4542 pairs, corresponding to a classification of ∼O6. To complement the UVES data of HDE 269896, in Fig. <ref> we show the near-UV ( < 3450 Å) spectra for the same models as shown in Fig. 
<ref>.The primary features of interest, in terms of their potential usefulness for spectral classification and determination of physical parameters are the He 3188 and He 3203 lines (previously studied by <cit.>).The He line is absent for the hottest spectra and increases in strength down the temperature sequence, while the He changes in the opposite sense. The lines are closest to being equivalent in the T_ eff = 32,500 model, which would be classified as approximately O9.7 V from comparison of its blue-visible spectrum (Fig. <ref>) to published spectral standards <cit.>, although the point at which the line intensities are equal is slightly cooler.This region is also rich with O absorption lines, as well as pairs of lines from Si and Si and several O lines in the hottest spectra. As at the longer wavelengths shown in Fig. <ref>, the presence of two ionisation stages in this region (from multiple species) offers alternative potential diagnostics of temperature.In this first exploration, with relatively modest rotational broadening (vsini = 100 ), we have followed the general classification approach for digital spectral of considering the central line depths rather than the line intensities (equivalent widths).Once empirical data is available for a broad range of massive stars in this spectral range, a more robust morphological treatment will be required in the future, which e.g. takes into account effects such as rotational broadening (e.g. <cit.>, see also the discussion by <cit.>). §.§ LMC metallicity: Stellar gravities The 3188/3203 line ratio looks potentially interesting in the context of (approximate) spectral classification and estimates of T_ eff, but if we were limited to CUBES observations alone of a given target we must also consider how to constrain its gravity (as well as the possible impact that it has on the 3188/3203 ratio). As demonstrated in Fig. <ref>, the CUBES range contains the high Balmer series lines (plus the H8 and Hϵ lines not shown in the figure). The profile wings of the Balmer series are generally excellent diagnostics of stellar gravity, and the high Balmer lines could be used to constrain the gravities of CUBES targets. With crossed-dispersed instruments such as UVES and X-Shooter, correction of the echelle blaze function and stitching together the different echelle orders can be challenging in the 3650-3900 Å region, where the wings of the Balmer lines can overlap and it is difficult to accurately define a continuum. An advantage of the CUBES design is the continuous spectrum from each of its two arms, provided by having only one dispersing element (in each arm) operating in first order <cit.>, i.e. no echelle orders to combine in the data reduction.To illustrate the diagnostic potential of the Balmer lines in this regard, in Fig. <ref> we show models for T_ eff = 35,000 K (typical of a late O-type star) for three gravities: logg = 3.5 (a typical value for an O-type giant in the LMC, e.g. <cit.>), 4.0 and 4.5, spanning from shortwards of the Balmer limit up to the H8 line. We recognise the challenges of the blending of the higher-series lines (particularly for noisy data), but note that the H8 line is relatively isolated, and the CUBES range (which extends further redwards to 4050 Å) also includes the Hϵ line. While the latter is blended with the interstellar Ca K line at 3968, its redward wing could be used to provide further constraints.The adopted gravity does have an impact on the strengths of the He 3188 and He 3203 lines (as shown in Fig. 
<ref>), with the former being stronger and the latter being weaker at higher gravities. As such, good S/N across the full CUBES range will be critical to ensure the maximum information is available, but the key point is that there are potential diagnostics available of both temperature and gravity.A caveat of our approach in using the tlusty models is that the effects of stellar winds are not included, which would be expected to modify the appearance of the emergent spectra. However, we note that the winds of O-type giants and dwarfs at sub-solar metallicity are generally weak, and the tlusty models are sufficient for our qualitative objectives here. §.§ Low-metallicity models (Z = 1/30 Z_⊙)The LMC-like models described above will be relevant for future CUBES targets in external galaxies, but the real push is to extend studies to lower metallicities than currently possible (see <cit.>). A significant effort over the past 20 years has gone into quantifying the impact that metallicity has on the evolution of massive stars, so that we can improve stellar-evolution and population-synthesis models to more accurately reproduce the massive-star populations seen in both the local Universe and high-redshift, star-forming galaxies. Necessarily, most efforts have focused on massive stars in the Milky Way, LMC and the Small Magellanic Cloud (SMC), spanning a range of metallicities from that of the solar neighbourhood down to approximately one-fifth solar in the SMC. This puts limits on our ability to test model predictions at lower metallicities, and requires (uncertain) extrapolations.High-efficiency, multi-object spectrographs, such as the FOcal Reducer/low dispersion Spectrograph 2 (FORS2) on the VLT, have provided first insights (at R ∼ 1,000) into limited numbers of massive stars in more distant systems, with nebular oxygen abundances of ∼0.15Z_⊙, namely: IC 1613 at 0.7 Mpc, WLM at 0.9 Mpc, and NGC 3109 at 1.2 Mpc. However, a key motivation to improve our understanding of the physical properties of massive stars, and their contribution to galaxies in the early Universe, is to directly observe stars at even lower metallicites. Example targets in this context include: Sextans A (0.1Z_⊙ at 1.3 Mpc), SagDIG (0.05Z_⊙ at 1.1 Mpc), Leo P (0.03Z_⊙ at 1.6 Mpc) and, ultimately, I Zw18 (0.02Z_⊙ at 18.9 Mpc).To investigate the spectral lines in such extremely metal-poor stars, we have used the 0.03Z_⊙ models from the tlusty OSTAR2002 grid. As a first comparison with the LMC-like models, the near-UV and visible regions for the 0.03Z_⊙ models are shown in Figs. <ref> and <ref>, respectively. Echoing the shift to higher temperatures for a given zero-age main-sequence mass at lower metallicity (e.g.<cit.>, and references therein), note the He /He line ratios in the T_ eff = 40,000 K model compared to that in Fig. <ref>. The T_ eff = 40,000 K model for 0.03Z_⊙ would be classified as a slightly later type than that for the 0.5Z_⊙ model. Similarly, the He 4471 line is stronger in the 0.03Z_⊙ model.In short, a spectrum classified as, e.g. O3 or O6, at very low metallicity (e.g. in Leo P or I Zw18) would have a hotter temperature than its morphological counterpart in the LMC.The 3000-3450 Å near-UV region for the 0.03Z_⊙ models is shown in Fig. <ref>. As expected, the metallic lines are now significantly weaker, but there are still several weak O lines present in the models corresponding to later O-types (T_ eff = 32,500 & 35,000). As with the He /He line ratios in the visible(Fig. 
<ref>), there is a shift in the temperature at which the He I 3188 and He II 3203 lines have equivalent intensities at lower metallicity. They are roughly equivalent at T_ eff = 32,500 in the 0.03Z_⊙ models (Fig. <ref>), compared to somewhere between T_ eff = 32,500 and 30,000 in the 0.5Z_⊙ models (Fig. <ref>). Again this reflects the hotter models needed to reproduce a given line ratio at lower metallicity (i.e. the temperature for a given spectral type, on the basis of using Galactic morphological criteria, would be higher). § STELLAR PROPERTIES IN LOCAL GROUP GALAXIES §.§ Oxygen abundances The key properties of a (single) massive star are defined by its initial mass and metallicity, but its path in the Hertzsprung–Russell diagram and ultimate fate depend critically on both mass loss and rotation <cit.>. Studies have shown that the effects of rotation are enhanced at low metallicity (e.g. <cit.>), and rotationally-enhanced mixing is predicted to bring CNO-processed material to the stellar surface. To test the various implementations and predictions of rotation and mixing in stellar evolution models we require good empirical constraints of CNO abundances as a function of temperature, luminosity, metallicity and rotation rates. As illustrated in Fig. <ref>, the near UV spectra of O-type stars contain a plethora of oxygen lines, with several O lines in the hottest spectra, a large number of O lines in mid-late types, and weak O lines in the coolest spectra (e.g. 3390). These lines are the best available diagnostics of O abundances in massive stars, because those at other wavelengths are generally saturated and strongly affected by stellar winds, making abundance determination both complicated and uncertain. Moreover, the use of multiple lines is also critical to arrive at accurate abundances with reliable error bars (see Fig. 1 from <cit.>). We add that accurate oxygen abundances are required to use the O 1371 line from far-UV observations to estimate stellar wind densities around the sonic point and to explore the effect of clumping in the wind to arrive at reliable mass-loss rates. There are also useful lines from C and N in the near-UV range (e.g. C 3609 and N III 3354-67-74) that appear to be free of wind effects in relevant cmfgen models (e.g. <cit.>), but these warrant further investigation. To illustrate the sensitivity of the O lines to abundance changes and the S/N of the observations, in Fig. <ref> we show a cmfgen model <cit.> with T_ eff = 31,000 K and log(g) = 3.1 for HD 269702 (classified as O8 I(f)p, see <cit.>) in the LMC. The baseline oxygen abundance is 12 + log(O/H) = 8.39 (shown in green), compared with models for 12 + log(O/H) = 8.74 (in red) and 7.78 (comparable to results for metal-poor irregulars, in black). We then introduced model noise to each spectrum to reproduce continuum S/N levels of 50, 100 and 150 as shown in the figure. As expected given past observational studies (e.g. <cit.>), the spectra in Fig. <ref> suggest that S/N ≳ 100 is required to estimate O abundances in the LMC and more metal-rich targets as the lines become saturated. For instance, there is little sensitivity to changes in abundance over this abundance range in the O 3312 line. However, the situation is less challenging at lower abundances – although S/N remains critical, the lines are more responsive to abundance changes. As noted above, the precision on the estimated O abundance can also be improved by analysing multiple lines together (recalling the large number of O lines in the mid-late O-type spectra in Fig.
<ref>, and the O lines in the hottest spectra).To date, estimates of O abundances in B-type supergiants have been possible in metal-poor galaxies at the fringes of the Local Group, e.g. 12 + log(O/H) = 7.83 ± 0.12 in WLM <cit.> and 7.76 ± 0.07 in NGC 3109 <cit.>. Similar techniques were also used to estimate light-element abundances (C, N, O, Si, Mg) of B-type supergiants at ∼2 Mpc in NGC 55 <cit.>. However, while the supergiants are a useful reference point for the evolutionary models (and for comparisons with nebular abundances), for true insights into the physical processes on the main sequence, we need similar studies of O-type stars. Even though we have confirmed O stars in WLM and NGC 3109 <cit.>, as well as in IC 1613 <cit.> and Sextans A <cit.>, abundance estimates are out of reach of present observations (e.g. <cit.>). §.§ Stellar rotational velocities Another important observational property of massive stars is their projected rotational velocity (vsini). As noted above, stellar rotation is important in the context of chemical enrichment (e.g. CNO abundances), and observational estimates of vsini are also powerful diagnostics of rapidly-rotating stars that have experienced chemically-homogeneous evolution or that have previously undergone binary interaction.The helium lines in visible spectra are often used to estimate rotational velocities of O-type stars (e.g. <cit.>). Although they are affected by Stark broadening (typically with an equivalent FWHM of 50-100 ), the helium lines can be used to study the overall distribution of projected rotational velocities for samples of stars, while also allowing identification of rapid rotators. However, working with low-resolution (R ∼ 1000) spectroscopy from, e.g. FORS2 or OSIRIS (on the Gran Telescopio Canarias), at S/N ∼ 50 we are unable to put even modest constraints on vsini; the minimum combination required is R > 2000 and S/N > 100.A more robust probe of rotational velocities is provided by metallic lines in the spectra of massive stars. The metallic lines do not suffer from Stark broadening, nor do they suffer nebular contamination which can also hamper the use of helium lines in extragalactic targets. The visible spectra of B-type stars are replete with isolated metal lines that can be used to investigate vsini (e.g. Si , Si , Mg etc) but the only comparably useful probe for O-type stars is the (less commonly observed) O 5591 line. The capabilities of CUBES are particularly compelling in this context. The combination of its sensitivity, improved spectral resolution (compared to e.g. FORS2) and access to the broad range of metallic lines in the near UV (see Fig. 3) will enable robust estimates of vsini for metal-poor stars at the edge of the Local Group. High S/N observations would also enable investigation of the contribution of macroturbulent broadening in (sub-SMC) metal-poor O-type stars for the first time (e.g. <cit.>). For instance, for stars with relative narrow lines, S/N ≳ 100 is sufficient to use Fourier transform analysis to estimate the contribution of macroturbulence <cit.>. §.§ Alternative temperature diagnostics Beyond the O-type stars discussed here, the coverage of the Balmer jump provided by CUBES could also provide valuable constraints on effective temperature for other extragalactic targets, as used for e.g. A-type supergiants <cit.>, B-type supergiants <cit.>, and Be-type stars <cit.>. 
This technique typically requires accurate determination of the flux levels at either side of the Balmer limit, rather than precise absolute flux calibration. We do not explore this application further here, but note it for completeness and as an example of where flux calibration of the spectra will be important to have a good understanding of the wavelength-dependent properties of the spectra (response function, slit-losses etc). § CUBES PERFORMANCESIn contrast to lower-mass stars, the spectral energy distributions of massive stars peak in the far-UV. This means that observations with CUBES will probe the rising part of the flux distribution, potentially opening-up observations of targets that are otherwise too faint to observe with other facilities.For instance, the intrinsic U-V colour for a mid O-type star is (U-V)_0 ∼ -1.5 mag (e.g. <cit.>), representing a potentially significant gain compared to observations at visible wavelengths.One caveat to this potential gain is the challenge of line-of-sight extinction towards potential targets, as its effects become more significant at shorter wavelengths (with A(U) ≈ 1.5 A(V) <cit.>).Nonetheless, most potential extragalactic targets are sufficiently far from the Galactic plane, such that foreground extinction will not be too much of a limiting factor, although targets in external galaxies might have to be carefully selected to avoid those with a significant local contribution to the line-of-sight extinction.To estimate the potential performance of CUBES in the studies of O-type stars in galaxies such as IC 1613, WLM and NGC 3109 we used some of the model spectra discussed above as inputs to the Exposure Time Calculator (ETC) developed during the conceptual design phase <cit.>. The confirmed O-type stars in these irregular galaxies have V = 19 to 20.5 mag. The S/N predictions from the ETC for 2 hr exposures of three of the tlusty 0.5Z_⊙ dwarf models are summarised in Table <ref>; adopted parameters for each calculation were: airmass = 1.2 (a reasonable assumption for the example galaxies), seeing = 0.8”, spatial binning ×2, spectral binning ×4. Given the similarity in the spectral energy distributions, the same ETC calculations using the 0.03Z_⊙ are nearly identical in the predicted S/N, so the use of the LMC-like models here does not unduly influence the results. The S/N values are quoted for two wavelengths: 3195 Å as representative of the continuum S/N near the He 3188 and He 3203 lines, and at 3640 Å to indicate the S/N shortwards of the Balmer limit.The image slicer for the low-resolution mode generates a wider effective slit such that the resulting spectra are oversampled (with >9 pixels per resolution element), hence the adopted ×4 spectral binning in the calculations to bolster the resulting S/N without loss of resolution. Equally, for the faintest massive stars, we could bin by a further factor of two to improve the S/N (i.e. √(2) × S/N_ ETC), degrading the spectrum to an effective resolving power of R ∼ 3,500 (still more than three times that obtained for 3 hr integrations of massive stars beyond 1 Mpc with FORS2, e.g. <cit.>). In the low-resolution mode the slicer has six 1” slices, so in the ETC calculations we only extracted the central two slices (to optimise the S/N).With additional binning spectrally, the results in Table <ref> demonstrate that it should be possible to obtain sufficient S/N (≳ 100) for studies of the physical parameters and oxygen abundances of O-type stars in galaxies at the edges of the Local Group (i.e. 
1.2-1.3 Mpc), with slightly longer (∼2.5-3 hr) exposures of the fainter known O stars in these systems.§.§ `ELT science' with the VLT As an exciting example of where CUBES can bring observations into reach that are beyond our current capabilities, we highlight the case of star LP 26 in the dwarf Leo P galaxy <cit.>. Deep (8 hr) observations with the Multi-Unit Spectroscopic Explorer (MUSE) instrument revealed weak He 4686, 5411 absorption lines, providing the first direct evidence of an O-type star in Leo P. This is particularly interesting as the significantly low oxygen abundance (3% solar) of the H region in Leo P suggests it has a near-primordial composition <cit.>. Although relatively nearby (1.6 Mpc <cit.>) for such a metal-poor system, Leo P is sufficiently far away that further visible spectroscopy of its hot stars is beyond our current capability. For instance, the HST magnitudes for LP 26 are F475W = 21.5 and F814W = 21.8 (which, taking these as proxies for V and I filters compared to the anticipated intrinsic colours for O-type stars, suggest a low line-of-sight extinction). This puts it beyond the reach of (feasible) observing proposals with existing visible spectrographs, and further visible spectroscopy of the stellar population of Leo P was thought to have to wait until the Extremely Large Telescope (ELT) is operational. First constraints on the properties of LP 26 have recently been provided by analysis of far-UV HST spectroscopy, reporting a fast rotational velocity (vsini = 370 ± 90 ) and, as expected given the lower metallicity, weaker wind lines than in comparison templates of SMC stars <cit.>. Using the same OSTAR2002 tlusty grid as in our calculations, the far-UV spectroscopy was combined with multiband photometry to estimate its temperature as T_ eff = 37,500 K, with uncertainties of order ±6 kK. The near-UV lines identified in the CUBES range offer the prospect of characterising the physical properties of LP 26. We used the 3% solar T_ eff = 37,500 K tlusty model to estimate the performance for V = 21.5 mag. with the ETC. The same parameters were used as in Table <ref>, including airmass = 1.4, as this corresponds to the maximum altitude that Leo P (with a declination of +18^∘) reaches from Paranal. Extracting only the central two slices again, the ETC predicts S/N = 34 at 3640 Å in a total integration of 4 hrs; binning this (spectrally) by a further factor of two would provide S/N ∼ 50 in a relatively modest half night of observations; the same calculation for 3195 Å (central slice only, binning the ETC results by a further factor of two yields S/N ∼ 40. Such an observation would enable estimates of the physical parameters (temperature, gravity) of LP 26 in a relatively modest amount of observing time (half a night). This nicely illustrates the potential of CUBES in this scientific area – we expect more candidate O-type stars to be discovered in the coming years at 1 Mpc and beyond, and CUBES could provide a powerful capability to constrain their physical parameters.Direct determination of the oxygen abundance at the low metallicity of Leo P will, however, remain challenging. The metal lines are sufficiently weak that greater S/N (> 100) is required for secure detections. To explore this further, in Fig. <ref> we show the same tlusty model spectrum for the O lines in the 3250-3350 Å region. 
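Before discussing those lines, it is worth sketching the simple scaling that underpins the ETC-based exposure-time estimates quoted in this section (a back-of-the-envelope approximation only, assuming photon-noise-limited observations so that S/N grows as the square root of both integration time and any additional spectral binning; the ETC itself should be used for real planning, and the reference values below are the LP 26 prediction quoted above):

```python
import math

def time_to_reach_snr(snr_target, snr_ref, t_ref_hr, extra_bin_factor=1.0):
    """Photon-noise-limited scaling: S/N ~ sqrt(t) and ~ sqrt(binning),
    so the required integration time grows as the square of the S/N ratio."""
    snr_eff = snr_ref * math.sqrt(extra_bin_factor)
    return t_ref_hr * (snr_target / snr_eff) ** 2

# Reference ETC point quoted above: S/N = 34 at 3640 Angstrom in a 4 hr integration.
# Binning spectrally by a further factor of two gives ~sqrt(2) x S/N in the same time.
print(time_to_reach_snr(50, 34, 4.0, extra_bin_factor=2.0))   # ~4 hr, i.e. half a night
print(time_to_reach_snr(150, 34, 4.0, extra_bin_factor=2.0))  # ~40 hr, of the same order
                                                              # as the ~45 hr ETC figure
                                                              # quoted below
```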
A continuum S/N in excess of 100 is required to tentatively detect the O 3341 line, with S/N = 150 being a more realistic goal if equivalent widths (or even firm upper limits) were to be measured. From the ETC, recovering a S/N of order 150 at 3350 Å (assuming the same binning as above in the ETC, and a further factor of √(2)) would require a total integration of ∼45 hr. Such an ambitious observation is unlikely to be feasible, particularly given e.g. systematics that might limit performance from combining a large number of exposures together, although we will re-assess this case later in the construction phase of CUBES. §.§ Impact of O_3 absorption Shortwards of ∼3400 Å the extinction due to atmospheric ozone (O_3) becomes a significant factor, with a steep dependence of the extinction as a function of airmass (see e.g. Fig. 2 from <cit.>). Within this region there are also discrete O_3 bands, which can impact on the study of potential stellar features. In the context of the potential diagnostic lines discussed here, it is notable that one of the bands spans the (rest-frame) He 3203 line (e.g. Fig. 1 from <cit.>). Although weaker at the longer wavelengths, careful correction for the O_3 bands will also be required to study the O features discussed above,On this topic we note the study by <cit.> who used historic stellar spectroscopy to recover information on the intensity of past O_3 features; this highlighted that with careful modelling (helped in this particular case by near-UV HST observations) the stellar features could be removed successfully to reveal the O_3 band absorption; the reverse would also be true in terms of recovering the intrinsic stellar spectra.The operational concept for CUBES is under development as the project progresses, but careful subtraction of the O_3 features will be an important consideration. The current plans include observations of flux standards (with high signal-to-noise) to help reconstruct the atmospheric absorption features, combined with using theoretical tools such as molecfit <cit.>.§ SUMMARY We have used model atmospheres to investigate the spectral diagnostics available over the 3000-4050 Å range that will be accessible with the new CUBES instrument now in development for the VLT. There have been relatively few studies of OB-type stars shortwards of the Balmer limit at (ground) near UV wavelengths to date, but CUBES will provide an exciting new capability to study massive stars in metal-poor systems at the edge of theLocal Group and beyond.Following the pioneering work from <cit.>, the He 3188 and He 3203 pair of lines appear to be a compelling diagnostic of stellar temperature, with the high Balmer series of lines providing constraints on gravity. The near UV is also rich with metallic lines (O , O , Si , Si , N ) that can provide further temperature diagnostics (where more than one ionisation stage is present for a given element) as well as estimates of chemical abundances in O-type stars, particularly for oxygen where we lack robust diagnostics at visible wavelengths.Our results from the ETC presented here demonstrate that it should be possible with CUBES to obtain high-quality (S/N > 100, R ∼ 7,000) near-UV spectra of massive O-type stars in metal-poor systems at ∼1 Mpc in ∼3 hr integrations. 
The CUBES analysis will probably be informed by initial estimates of stellar parameters from observations at other wavelengths, and will provide further constraints on stellar temperatures and gravities, together with estimates of projected rotational velocities and the first oxygen abundances for O-type stars in these extragalactic systems. Such results will enable much needed comparisons with theoretical predictions from low-metallicity evolutionary models and perhaps the first observational evidence for chemically-homogeneous evolution. Furthermore, using very metal-poor model spectra (with Z = 0.03Z_⊙) we have shown that CUBES could obtain spectra with S/N ∼ 50 of the candidate O-type star in Leo P at 1.6 Mpc in approximately half a night of observations.Our qualitative consideration of the models here was intended as a first study of the potential diagnostics available in the CUBES domain. Data from the ongoing XShootU ESO Large Programme (in support of the HST ULLYSES Legacy Survey) will soon enable quantitative investigation of O-type spectra in the CUBES region from high S/N (> 100) X-Shooter observations in the Magellanic Clouds. Analysis of the XShootU data will provide an important test of the temperatures estimated using the He 3188/He 3203 ratio and logg from the high Balmer lines compared to diagnostics in the visible. Our expectation is that for the extragalactic metal-poor dwarfs envisaged as future CUBES targets, the impact of stellar winds on the determination of physical parameters from the near-UV region alone should be relatively minor, and the XShootU data in the Clouds will provide an important test of this.More generally, the greater throughput and resolving power delivered by CUBES compared to X-Shooter will give a unique capability for studies of massive stars in the ground-UV region. Moreover, the absence of cross-dispersion in the CUBES design will avoid the challenges of blaze correction and order recombination that affect the analysis of data from echelle instruments such as UVES and X-Shooter; the CUBES design will give a smooth, continuous spectral response for the two optical channels. This will enable more robust definition of the stellar continuum for normalization, particularly for broad features in the Balmer line series, as well as analysis of weaker absorption lines if they would otherwise be in the reconnection regions. The latter will be important for more rapidly-rotating massive stars (vsini ≳ 100 ), which will also benefit from the greater resolving power compared to X-Shooter.We thank the reviewers for their suggestions on the manuscript, which helped clarify details as well as helping place this study in the wider context of previous work and some of the challenges of ground-based observations at these short wavelengths. MG acknowledges financial support from grants ESP2017-86582-C4-1-R and PID2019-105552RB-C41, and from the Unidad de Excelencia “Marıá de Maeztu” – Centro de Astrobiologıá (CSIC-INTA) project, MDM-2017-0737.§ CONFLICT OF INTEREST The authors declare that they have no conflict of interest.§ DATA AVAILABILITY STATEMENTThe tlusty models used in this article are freely available online. The UVES data of HDE 269896 and the cmfgen models shown in Figs. <ref> and <ref> are available on request. wf00 Walborn, N. R. & Fitzpatrick, E. L.:Contemporary Optical Spectral Classification of the OB Stars: A Digital Atlas. PASP, 102, 379 (1990)puls96 Puls, J., Kudritzki, R.-P., Herrero, A. 
et al.: O-star mass-loss and wind momentum rates in the Galaxy and the Magellanic Clouds Observations and theoretical predictions. A&A, 305, 171 (1996)zanutta22 Zanutta, A., Cristiani, S., Atkinson, D. et al.: CUBES Phase A design overview. ExA, 55, 241 (2023)lamers72 Lamers, H. J.: The spectrum of the supergiant ϵ Orionis (B0 Ia). I. Identifications, equivalent-widths, line profiles. A&AS, 7, 113 (1972)m75 Morrison, N. D.: The lines He I λ3187 and He II λ3203 in O-type stars ApJ, 202, 433 (1975)dufton Dufton, P. L. & McKeith, C. D.: Copernicus observations of neutral helium lines in early-type stars. A&A, 81, 8 (1980)drissen95 Drissen, L., Moffat, A. F. J., Walborn, N. R. & Shara, M. M.: The Dense Galactic Starburst NGC 3603. I. HST/FOS Spectroscopy of Individual Stars in the Core and the source of Ionization and Kinetic Energy. AJ, 110, 2235 (1995)wh00 Walborn, N. R. & Howarth, I. D.: Digital Spectroscopy of O3-O5 and ON/OC Supergiants in Cygnus. PASP, 112, 1446 (2000)ecfh04 Evans, C. J., Crowther, P. A., Fullerton, A. W. & Hiller, D. J.: Quantitative Studies of the Far-Ultraviolet, Ultraviolet, and Optical Spectra of Late O- and Early B-Type Supergiants in the Magellanic Clouds. ApJ, 610, 1021 (2004)cmfgen Hillier, D. J. & Miller, D. L.: The Treatment of Non-LTE Line Blanketing in Spherically Expanding Outflows. ApJ, 496, 407 (1998)tlusty Lanz, T. & Hubeny, I.: A Grid of Non-LTE Line-blanketed Model Atmospheres of O-Type Stars. ApJS, 146, 417 (2003)vfts Evans, C. J., Lennon, D. J., Langer, N. et al.: The VLT-FLAMES Tarantula Survey. Msngr, 181, 22 (2020)gs98 Grevesse, N. & Sauval, A.: Standard solar composition SSRv, 85, 161, (1998)cssj17 Sabín-Sanjulián, C., Simón-Díaz, S., Herrero, A. et al.: The VLT-FLAMES Tarantula Survey. XXVI. Properties of the O-dwarf population in 30 Doradus. A&A, 601, A79 (2017)trundle07 Trundle, C., Dufton, P. L., Hunter, I. et al.: The VLT-FLAMES survey of massive stars: evolution of surface N abundances and effective temperature scales in the Galaxy and Magellanic Clouds. A&A, 471, 625 (2007)sota11 Sota, A., Maíz Apellániz, J., Walborn, N. R. et al.: The Galactic O-Star Spectroscopic Survey. I. Classification System and Bright Northern Stars in the Blue-violet at R∼2500. ApJS, 193, 24 (2011)markova11 Markova, N., Puls, J., Scuderi, S. et al.: Spectroscopic and physical parameters of Galactic O-type stars. I. Effects of rotation and spectral resolving power in the spectral classification of dwarfs and giants. A&A, 530, A11 (2011)arias16 Arias, J. I., Walborn, N. R., Simón-Díaz, S. et al.: Spectral classification and properties of the O Vz stars in the Galactic O-Star Spectroscopic Survey (GOSSS) AJ, 152, 31 (2016)ora17 Ramírez-Agudelo, O. H., Sana, H., de Koter, A. et al.: The VLT-FLAMES Tarantula Survey . XXIV. Stellar properties of the O-type giants and supergiants in 30 Doradus. A&A, 600, A81 (2017)voyage2050 Garcia, M., Evans, C. J., Bestenlehner, J. M. et al.: Massive stars in extremely metal-poor galaxies: a window into the past. ExA, 51, 887 (2021)IZw18 Szécsi, D., Langer, N., Yoon, S.-C. et al.: Low-metallicity massive single stars with rotation. Evolutionary models applicable to I Zwicky 18. A&A, 581, A15 (2015)puls08 Puls, J., Vink, J. S. & Najarro, F.: Mass loss from hot massive stars. A&ARv, 16 209 (2008)langer12 Langer, N.: Presupernova Evolution of Massive Single and Binary Stars ARA&A, 50, 107, (2012)mm01 Maeder, A. & Meynet, G.: Stellar evolution with rotation. VII. Low metallicity models and the blue to red supergiant ratio in the SMC. 
A&A, 373, 555 (2001)brott11 Brott, I., de Mink, S. E., Cantiello, M. et al.: Rotating massive main-sequence stars. I. Grids of evolutionary models and isochrones. A&A, 530, A115 (2011)martins15 Martins, F., Hervé, A., Bouret, J.-C. et al.: The MiMeS survey of magnetism in massive stars: CNO surface abundances of Galactic O stars. A&A, 575, A34 (2015)mb22 Marcolino, W. L. F., Bouret, J.-C., Rocha-Pinto, H. J. et al.: Wind properties of Milky Way and SMC massive stars: empirical Z dependence from CMFGEN models MNRAS, 511, 5104 (2022)walborn2010 Walborn, N. R., Howarth, I. D., Evans, C. J. et al.: The Onfp Class in the Magellanic Clouds. AJ, 139, 1283 (2010)bresolin06 Bresolin, F., Pietrzyński, G., Urbaneja, M. A. et al.: The Araucaria Project: VLT Spectra of Blue Supergiants in WLM- Classification and First Abundances. ApJ, 648, 1007 (2006)evans07 Evans, C. J., Bresolin, F., Urbaneja, M. A. et al.: The ARAUCARIA Project: VLT-FORS Spectroscopy of Blue Supergiants in NGC 3109 – Classifications, First Abundances, and Kinematics. ApJ, 659, 1198 (2007)castro55 Castro, N., Urbanejea, M. A., Herrero, A. et al.: The ARAUCARIA project: Grid-based quantitative spectroscopic study of massive blue stars in NGC 55. A&A, 542, A79 (2012)garcia13 Garcia, M. & Herrero, A.: The young stellar population of IC 1613. III. New O-type stars unveiled by GTC-OSIRIS. A&A, 551, A74 (2013)Cal16 Camacho, I., Garcia, M., Herrero, A., & Simón-Díaz, S.: OB stars at the lowest Local Group metallicity. GTC-OSIRIS observations of Sextans A. A&A, 585, A82 (2016)tramper11 Tramper, F., Sana, H., de Koter, A. & Kaper, L.: On the Mass-loss Rate of Massive Stars in the Low-metallicity Galaxies IC 1613, WLM, and NGC 3109. ApJ, 741, L8 (2011)tramper14 Tramper, F., Sana, H., de Koter, A. et al.: The properties of ten O-type stars in the low-metallicity galaxies IC 1613, WLM, and NGC 3109. A&A, 572, A36 (2014)rss13 Ramírez-Agudelo, O. H., Simón-D iaz, S., Sana, H. et al.: The VLT-FLAMES Tarantula Survey. XII. Rotational velocities of the single O-type stars. A&A, 560, A29 (2013)a09 Aerts, C., Puls, J., Godart, M. & Dupret, M. A.: Collective pulsational velocity broadening due to gravity modes as a physical explanation for macroturbulence in hot massive stars. A&A, 508, 409 (2009)ssd10 Simón-Díaz, S., Herrero, A., Uytterhoeven, K. et al.: Observational Evidence for a Correlation Between Macroturbulent Broadening and Line-profile Variations in OB Supergiants. ApJ, 720, L174 (2010)ssd14 Simón-Díaz, S. & Herrero, A.: The IACOB project. I. Rotational velocities in northern Galactic O- and early B-type stars revisited. The impact of other sources of line-broadening. A&A, 562, A135 (2014)johnson Johnson, H. L.: Astronomical Measurements in the Infrared. ARA&A, 4, 193 (1966)cardelli Cardelli, J. A., Clayton, G. C. & Mathis, J. S.: The Relationship between Infrared, Optical, and Ultraviolet Extinction. ApJ, 345, 245 (1989)genoni22 Genoni, M., Landoni, M., Cupani, G. et al.: The CUBES Instrument Model and Simulation Tools. ExA, 55, 301 (2023)leoP Evans, C. J., Castro, N., Gonzalez, O. A. et al.: First stellar spectroscopy in Leo P. A&A, 622, A129 (2019)mcquinn McQuinn, K. B. W., Skillman, E. D., Dolphin, A. et al.: Leo P: An Unquenched Very Low-mass Galaxy. ApJ, 812, 158 (2015)skillman13 Skillman, E. D., Salzer, J. J., Berg, D. A. et al.: ALFALFA Discovery of the nearby Gas-rich Dwarf Galaxy Leo P. III. An Extremely Metal Deficient Galaxy. AJ, 146, 3 (2013)telford21 Telford, G. O., Chisholm, J, McQuinn, K. B. W. 
& Berg D.: Far-ultraviolet Spectra of Main-sequence O Stars at Extremely Low Metallicity. ApJ, 922, 191 (2021)kud08 Kudritzki, R.-P., Urbaneja, M., Bresolin, F. et al.: Quantitative Spectroscopy of 24 A Supergiants in the Sculptor Galaxy NGC 300: Flux-weighted Gravity-Luminosity Relationship, Metallicity, and Metallicity Gradient. ApJ, 681, 269 (2008)z09 Zorec, J., Cidale, L., Arias, M. L. et al.: Fundamental parameters of B supergiants from the BCD system. I. Calibration of the (_1, D) parameters into T_ eff. A&A, 501, 297 (2009)s18 Shokry, A. Rivinius, Th, Mehner, A. et al.: Stellar parameters of Be stars observed with X-shooter. A&A, 609, A108 (2018)patat11 Patat, F., Moehler, S., O'Brien, K. et al.: Optical atmospheric extinction over Cerro Paranal. A&A, 527, A91 (2011)griffin05 Griffin, R. E.: The detection and measurement of telluric ozone from stellar spectra. PASP, 117, 885 (2005)molecfit Smette, A., Sana, H., Noll, S. et al.: Molecfit: A general tool for telluric absorption correction. I. Method and application to ESO instruments. A&A, 576, A77 (2015) | http://arxiv.org/abs/2310.18081v1 | {
"authors": [
"Chris Evans",
"Wagner Marcolino",
"Jean-Claude Bouret",
"Miriam Garcia"
],
"categories": [
"astro-ph.SR",
"astro-ph.GA",
"astro-ph.IM"
],
"primary_category": "astro-ph.SR",
"published": "20231027120248",
"title": "A near-UV reconnaissance of metal-poor massive stars"
} |
Asymmetric Geometry of Total Grassmannians André L. G. Mandolesi Instituto de Matemtica e Estatstica, Universidade Federal da Bahia, Av. Milton Santos s/n, 40170-110, Salvador - BA, Brazil. E-mail:January 14, 2024v2.2 - notao G_p^n=============================================================================================================================================================================== TorchAudio is an open-source audio and speech processing library built for PyTorch. It aims to accelerate the research and development of audio and speech technologies by providing well-designed, easy-to-use, and performant PyTorch components. Its contributors routinely engage with users to understand their needs and fulfill them by developing impactful features. Here, we survey TorchAudio's development principles and contents and highlight key features we include in its latest version (2.1): self-supervised learning pre-trained pipelines and training recipes, high-performance CTC decoders, speech recognition models and training recipes, advanced media I/O capabilities, and tools for performing forced alignment, multi-channel speech enhancement, and reference-less speech assessment. For a selection of these features, through empirical studies, we demonstrate their efficacy and show that they achieve competitive or state-of-the-art performance. Open-Source Toolkit, Speech Recognition, Audio Processing, Self-Supervised Learning § INTRODUCTION With the rapid advancement and increasing pervasiveness of machine learning technologies, usage of open-source toolkits such as Tensorflow <cit.> and PyTorch <cit.> for developing machine learning applications has grown significantly. Many modern machine learning applications interface with modalities such as vision, text, and audio. Building such applications, however, requires modality-specific functionality not covered by said general-purpose toolkits.To address the need for audio and speech facilities in particular, the TorchAudio library has been developed <cit.>. TorchAudio supplements PyTorch with easy-to-use and performant components for developing audio and speech machine learning models. As a natural extension of PyTorch to the audio domain, TorchAudio embodies many of the same design principles that PyTorch does. Its components support automatic differentiation to facilitate building neural networks and training them end to end. It supports GPU acceleration, which can greatly improve training and inference throughput. It emphasizes composability, simple interfaces shared with PyTorch, and minimal dependencies to allow for easily integrating its components into any application that uses PyTorch.TorchAudio has been widely adopted and actively developed by the PyTorch community, with its Github development statistics having grown substantially since Version 0.10 was presented in <cit.> (Table <ref>). The dramatic increase in the number of repositories that depend on TorchAudio in particular strongly reaffirms TorchAudio's usefulness to the community and success.This paper begins by summarizing TorchAudio's design principles and contents. 
It then expounds significant new features that have been introduced since Version 0.10 <cit.>, covering self-supervised learning (Wav2Vec 2.0 <cit.>, HuBERT <cit.>, XLS-R <cit.>, WavLM <cit.>), automatic speech recognition (CTC decoder <cit.>, Conformer <cit.>, Emformer <cit.>, audio-visual speech recognition [AV-ASR]), advanced media I/O, CTC-based forced alignment, multi-channel speech enhancement components, and reference-less speech assessment, of which several are technically novel, e.g. real-time AV-ASR, Emformer, CUDA-based CTC decoder, and CUDA-based forced alignment API. It concludes by presenting experimental results for the new features, which demonstrate that they are effective and achieve or exceed parity in run-time efficiency or output quality with public implementations.§ RELATED WORKSeveral popular open-source toolkits implement lower-level audio operations such as I/O, spectrogram generation, and data augmentations. Just as librosa <cit.> is one such library for Numpy <cit.> and tfio.audio for Tensorflow, TorchAudio is one such library for PyTorch. The broad applicability of TorchAudio’s data componentry has made it effective in serving more specialized audio data representation libraries such as Lhotse <cit.>, which provides abstractions and utilities that streamline data preparation for downstream audio tasks.Many higher-level audio and speech machine learning toolkits exist in the PyTorch ecosystem, e.g. ESPnet <cit.>, SpeechBrain <cit.>, fairseq <cit.>, and NeMo <cit.>. These toolkits provide ready-to-use models, training recipes, and components covering audio and speech tasks such as text to speech, speech recognition, speech translation, and speech enhancement. As the aforementioned audio operations are fundamental to such tasks, all of these toolkits rely on TorchAudio.In addition to lower-level audio components, TorchAudio also provides some of the features offered by these higher-level toolkits. For instance, TorchAudio includes task-specific components such as decoders for speech recognition, multichannel functions, and model architectures, as well as ready-to-use models and training recipes. That being said, as far as such features are concerned, TorchAudio is distinguished from many of these other toolkits in its focus on stable and established technologies over the cutting edge. For example, rather than maintaining an extensive model repository and continually updating it with the latest state-of-the-art models, we aim to curate a smaller selection of key models and training recipes to demonstrate the use of TorchAudio's components and serve as reliable references. Ultimately, we intend for TorchAudio to be first and foremost a library of established components, which allows it to complement rather than compete with other toolkits in the PyTorch ecosystem. § LIBRARY PRINCIPLESTorchAudio firmly adheres to several design principles, which we distill from <cit.> and clarify.*Extend PyTorch to audio. TorchAudio aims to be PyTorch for the audio domain. Its components compose PyTorch operations, share the same abstractions and Tensor-based interfaces with PyTorch, and support foundational PyTorch features such as GPU acceleration and automatic differentiation. Moreover, its only required dependency is PyTorch. As a consequence, it behaves as a natural extension of PyTorch, and its components integrate seamlessly with PyTorch applications. *Be easy to use. TorchAudio is intuitively designed. 
Each component is implemented closely following C++, Python, and PyTorch best practices.It is easy to install. TorchAudio’s binaries are distributed through standard Python package managers PyPI and Conda and support major platforms Linux, macOS, and Windows. Optional dependencies are similarly installable via standard package managers. For users who want to use their own custom logic, building TorchAudio from source is straightforward[6.55<https://github.com/pytorch/audio/blob/main/CONTRIBUTING.md>].It is extensively documented. TorchAudio’s official website[<https://pytorch.org/audio/>] comprehensively covers installation directions and the library’s public APIs. Moreover, a wide array of tutorials covering basic and advanced library usages are available on the website and Google Colab. Such resources educate users of all levels of familiarity with audio and speech technologies on how to best use TorchAudio to address their needs. *Favor stability. TorchAudio tends towards mature techniques that are broadly useful. It offers implementations of models and operations that are or will soon become standards in the field. New features are released following a prototype-beta-stable progression to allow users to preview them without disrupting the official releases. Backwards compatibility breaking changes are released after a minimum of two releases to give users ample time to adapt their use cases. 12,000+ test cases and continuous integration workflows run through Github Actions ensure that the APIs work as expected. *Promote accessibility. TorchAudio is an open source library. Its entire source code is available on Github[<https://github.com/pytorch/audio>], where contributions and feedback are encouraged from all. To enable usage in as many contexts as possible, TorchAudio is released under the permissive BSD-2 license.§ NEW FEATURES Relative to Version 0.10 <cit.>, TorchAudio 2.1 includes many significant new features. We elaborate on several of these below. Note that some of these features are technically novel and the first of their kind, e.g. the first AV-ASR model to be capable of real-time inference on CPU, the first public implementation of streaming-capable transformer-based acoustic model Emformer, the first CUDA-based CTC beam search decoder, and the first CUDA-based forced alignment API. *Self-supervised learning.Self-supervised learning (SSL) approaches have consistently improved performance for downstream speech processing tasks. While S3PRL <cit.> focuses on supporting downstream tasks and benchmarking, TorchAudio focuses on upstream models by providing reliable and production-ready pre-trained models and training recipes. TorchAudio now provides models and pre-trained pipelines for Wav2Vec 2.0 <cit.>, HuBERT <cit.>, XLS-R <cit.>, and WavLM <cit.>. Each pre-trained pipeline relies on the weights that the corresponding original model uses and thus produces identical outputs. Moreover, each pipeline is easy to use, simply expecting users to call a single method to retrieve a pre-trained model. To facilitate production usage, TorchAudio's model implementations support TorchScript and PyTorch-native quantization and leverage PyTorch 2.0's Accelerated Transformers[6.55<https://pytorch.org/blog/accelerating-large-language-models/>] to speed up training and inference.TorchAudio also provides end-to-end training recipes that allow for pre-training and fine-tuning HuBERT models from scratch. 
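Before describing those training recipes further, the following minimal sketch illustrates the single-call pipeline interface mentioned above (the bundle name and audio path are illustrative; bundles for HuBERT, WavLM, and XLS-R follow the same pattern):

```python
import torch
import torchaudio

# Retrieve a pre-trained wav2vec 2.0 bundle; a single call returns the ready-to-use model.
bundle = torchaudio.pipelines.WAV2VEC2_BASE
model = bundle.get_model()

# Load a waveform (path is illustrative) and resample to the rate the bundle expects.
waveform, sample_rate = torchaudio.load("speech.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

# Extract intermediate feature representations for downstream tasks.
with torch.inference_mode():
    features, _ = model.extract_features(waveform)
```

Fine-tuned ASR bundles are retrieved in the same way.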
The training recipes have minimal dependencies beyond PyTorch and TorchAudio and are modularly implemented entirely in imperative code, which makes them conducive to customization and integration with other training flows, as their adoption by other frameworks such as ESPnet <cit.> demonstrates.*CTC decoder. Beam search is an efficient algorithm that has been used extensively for decoding speech recognition (ASR) model outputs and remains a fast and lightweight alternative to model-based decoding approaches. We have added a CTC beam search decoder that wraps Flashlight Text’s <cit.> high performance beam search decoder in an intuitive and flexible Python API. The decoder is general purpose, working for both lexicon and lexicon-free decoding as well as various language model types, including KenLM <cit.> and custom neural networks, and is easily adaptable to different model outputs.We have also introduced a CUDA-based CTC beam search decoder. By parallelizing computation along the batch, hypothesis, and vocabulary dimensions, it can achieve much higher decoding throughputs than the CPU-based implementation, which we demonstrate in Section <ref>. To our knowledge, the implementation is the first and only publicly available CUDA-compatible CTC decoder. *Conformer. Conformer is a transformer-based acoustic model architecture that has achieved state-of-the-art results for ASR <cit.>. We have developed a PyTorch-based implementation of Conformer and published an RNN-Transducer ASR training recipe that uses it. Using the recipe, we produced a model that achieves word-error-rate (WER) parity with comparable open-source implementations, which will be discussed in Section <ref> *Emformer. Emformer is a streaming-capable efficient memory transformer-based acoustic model <cit.>. For on-device streaming ASR applications, it has demonstrated state-of-the-art performance balancing word error rate, latency, and model size. Moreover, because it applies a novel parallel block processing scheme for training, it can be trained very efficiently. We have introduced an implementation of Emformer matching that described in <cit.> along with an Emformer transducer ASR training recipe and pre-trained inference pipeline. Our implementation is the first to be publicly available, and it has been adopted and extended by icefall[<https://github.com/k2-fsa/icefall/tree/master/egs/librispeech/ASR>]. *Streaming AV-ASR. AV-ASR involves transcribing text from audio and video. The vast majority of work to date <cit.> has focused on developing non-streaming AV-ASR models; studies on streaming AV-ASR, i.e. transcribing text from audio and video streams in real time, are comparatively limited <cit.>. Auto-AVSR <cit.> is an effective approach to scale up audio-visual data, which enables training more accurate and robust speech recognition systems. We extend Auto-AVSR to real-time AV-ASR and provide an example Emformer transducer training pipeline that incorporates audio-visual input. As far as we know, the AV-ASR model is the first to be capable of real-time inference on CPU.*Advanced media I/O. We have added advanced media processing capabilities to TorchAudio. Class StreamReader can decode not only audio but also images and videos to PyTorch tensors. Similarly, StreamWriter can encode tensors as audio, images, and videos. Both support streaming processing as well as applying transforms such as resampling and resizing. 
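A minimal sketch of the decoding pattern with StreamReader is shown below (the source URL and stream parameters are illustrative placeholders):

```python
from torchaudio.io import StreamReader

# Open a remote media source and configure decoded audio and video output streams.
reader = StreamReader("https://example.com/input.mp4")  # illustrative URL
reader.add_basic_audio_stream(frames_per_chunk=16000, sample_rate=16000)
reader.add_basic_video_stream(frames_per_chunk=30, frame_rate=30, width=224, height=224)

# Iterate over chunks; each iteration yields one tensor per configured output stream.
for audio_chunk, video_chunk in reader.stream():
    # audio_chunk: (frames, channels); video_chunk: (frames, channels, height, width)
    ...  # process the tensors in an online fashion
```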
They are capable of interfacing with numerous sources and destinations, including file paths and objects, network locations, and devices, e.g. microphones and webcams. Using these features, one can for instance stream audio chunk by chunk from a remote video file and process the corresponding tensors in an online fashion.We convey the simplicity and versatility of the API via code samples. Figure <ref> instantiates StreamReader specifying the data source to be a network location, configures output audio and video streams, and iterates over tensors representing chunks of audio and video streamed from the output. Appendix <ref> provides additional examples that illustrate how to read from media devices and write to a Real-Time Messaging Protocol server. Furthermore, StreamReader and StreamWriter can leverage hardware video processors available on NVIDIA GPUs to greatly accelerate decoding and encoding. *CTC-based forced alignment.We have added support for forced alignment generation, which computes frame-level alignments between audio and transcripts using a CTC-trained neural network model <cit.>. The function forced_align is compatible with both CPU <cit.> and GPU <cit.>, providing flexibility to users. The GPU implementation is highly scalable and enables efficient processing of long audio files, and represents the first publicly available GPU-based solution for computing forced alignments.We provide a tutorial that demonstrates how to effectively use the API. The tutorial also explains how to perform forced alignment for more than 1000 languages using the CTC-based alignment model from the Massively Multilingual Speech (MMS) project <cit.>. *Multi-channel speech enhancement. Multi-channel speech enhancement aims to remove noise and interfering speech from multi-channel audio by leveraging spatial properties. Relative to single-channel speech enhancement, multi-channel speech enhancement can produce higher-quality outputs and further enhance the performance of downstream tasks such as ASR <cit.>.Estimating time-frequency masks and applying them to Minimum Variance Distortionless Response (MVDR) beamforming is an established technique capable of robustly improving multi-channel speech enhancement <cit.>. To support such work, we have implemented a time-frequency mask prediction network and an MVDR beamforming module along with a corresponding training recipe in TorchAudio. *Reference-less speech assessment. Speech assessment is essential for developing speech enhancement systems. Existing metrics require either human listening tests, e.g. Mean Opinion Score (MOS), which are expensive and unscalable, or reference clean speech, e.g. Short-Time Objective Intelligibility (STOI), Perceptual Evaluation of Speech Quality (PESQ), scale-invariant signal-to-distortion ratio (Si-SDR), which are impractical for real-world usage.To address the limitations of such metrics, we have introduced TorchAudio-Squim <cit.> — TorchAudio-Speech QUality and Intelligibility Measures — which comprises two neural network based models: one for predicting objective metrics (STOI, wideband PESQ, Si-SDR), and one for predicting subjective metrics (MOS), without reference clean speech. Broadly speaking, this speech assessment feature establishes a protocol for evaluating speech enhancement without needing any reference signals. We present a case study of its effectiveness in Section <ref>.§ EMPIRICAL EVALUATIONSWe demonstrate the utility of TorchAudio’s new features via studies. 
§.§ Self-supervised learningFor the HuBERT recipes, we follow the methodology described in <cit.> of first pre-training a model and then fine-tuning it. To pre-train the model, we run two iterations of training. The first iteration trains a HuBERT model on the 960-hour LibriSpeech dataset for 250K steps, with the output targets being clusters mapped from masked frames by a 100-cluster k-means model trained on MFCC features extracted from the dataset. The second iteration trains another HuBERT model on the dataset for 400K steps, with the output targets being clusters assigned by a 500-cluster k-means model trained on intermediate feature representations generated by the first iteration’s model. Then, we fine-tune this final pre-trained model on the 10-hour LibriLight dataset with CTC loss. Table <ref> shows WERs produced in <cit.> and <cit.> by evaluating the fine-tuned "Base" model described in the original publication <cit.> on LibriSpeech's test subsets, alongside WERs produced using the same model trained via our recipe and the same decoding strategies. The results validate that our HuBERT training recipes are capable of producing models of quality similar to those described in <cit.> and <cit.>. These along with the aforementioned modularity and usability benefits make the models and training recipes particularly promising for users to build upon. Indeed, Chen et al. <cit.> adopt TorchAudio’s HuBERT implementation and fine-tuning recipe applying slightly different training approaches, e.g. different k-means training strategies and mixed-precision training with brain floating-point (bfloat16), and achieve better performance than the original (7.6% and 7.4% relative WER improvement on test-clean and test-other subsets) while consuming far fewer GPU hours. §.§ CTC decoder *CPU CTC decoder. The experiments in Figure <ref> are conducted on LibriSpeech's test-other set on a Wav2Vec 2.0 base model trained on 100 hours of audio. Decoding uses the official KenLM 4-gram LM and takes place on a single thread Intel® Xeon® E5-2696 v3 CPU. Because different decoder libraries support different parameters and have different underlying implementations, we first do a sweep for each decoder library for its baseline hyperparameters, and then run decoding with increasing beam sizes for additional data points. We display the wall-clock-time-WER relationship with pyctcdecode[<https://github.com/kensho-technologies/pyctcdecode>] and NeMo <cit.>, where the time in seconds is for decoding the entirety of the test-other dataset. The results show that, for a given target WER, TorchAudio's decoder runs in less time than the baselines. TorchAudio also supports a wider variety of customizable parameters for better hyperparameter tuning and overall WERs. *CUDA CTC decoder.The experiments in Table <ref> are conducted on LibriSpeech's test-other set using a single V100 GPU and Intel® Xeon® E5-2698 v4 CPU. For both recipes, a batch size of 4 and a beam size of 10 were applied. The CUDA CTC Decoder uses a CUDA kernel to implement the blank collapse method in <cit.>. By setting the blank frame skip threshold to 0.95, the decoding speed can be increased by 2.4 times without sacrificing accuracy. Since the CPU decoder does not support blank collapsing, the CPU decoder's effective blank frame skip threshold is 1.0. For comparability's sake, then, we include results for the CUDA decoder configured with a blank frame skip threshold of 1.0. 
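For reference, decoders like those benchmarked here are constructed along the following lines (a hedged sketch: the emissions are dummy stand-ins for acoustic-model output, the pretrained LibriSpeech lexicon/LM files are fetched by a helper, and the CUDA decoder call reflects our reading of the 2.1 API):

```python
import torch
from torchaudio.models.decoder import ctc_decoder, cuda_ctc_decoder, download_pretrained_files

# Pretrained lexicon, token, and KenLM 4-gram files for LibriSpeech.
files = download_pretrained_files("librispeech-4-gram")
num_tokens = sum(1 for _ in open(files.tokens))

# Dummy emissions standing in for acoustic-model output: (batch, frames, num_tokens) log-probs.
emissions = torch.randn(1, 100, num_tokens).log_softmax(dim=-1)

# CPU decoder: lexicon-constrained beam search with the KenLM language model.
cpu_decoder = ctc_decoder(lexicon=files.lexicon, tokens=files.tokens, lm=files.lm, beam_size=10)
cpu_hyps = cpu_decoder(emissions)

# CUDA decoder, configured with the beam size and blank-skip threshold used above.
gpu_decoder = cuda_ctc_decoder(files.tokens, nbest=1, beam_size=10, blank_skip_threshold=0.95)
frame_lengths = torch.full((1,), emissions.size(1), dtype=torch.int32)
gpu_hyps = gpu_decoder(emissions.cuda(), frame_lengths.cuda())
```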
By way of CUDA's parallelism, the CUDA decoder allows for performing beam search on all tokens at every step. Thus, the CUDA decoder's effective max tokens per step is the vocabulary size, which is 500 in this experiment. Accordingly, we include results for the CPU decoder configured with a max tokens per step of 500 to mimic the CUDA decoder's behavior. Our experimental results show that, compared to the CPU decoder, the CUDA decoder achieves a lower WER and N-best oracle WER while increasing decoding speed by a factor of roughly 10. §.§ Conformer*Model architecture. Rather than pursuing state-of-the-art performance, our primary goal is to validate TorchAudio's implementations of Conformer, RNN-T loss, and data operations. Accordingly, we adopt an architecture similar to that used in the baseline Conformer transducer recipes in the ESPnet[<https://github.com/espnet/espnet/tree/master/egs2/librispeech/asr1#conformer-rnn-transducer>] and icefall[<https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/RESULTS.md#2022-04-19>] toolkits. To ensure comparability, we configure the model architecture to be as similar as possible to those of the baselines. As in the baselines, the encoder has a 512-d output, a subsampling rate of 4, a kernel size of 31, and 8 attention heads with a 2048-d feed-forward unit. In total, our model has 87.4M parameters, with the encoder owning 92% of them. In contrast, ESPnet's has 94.3M parameters, while icefall's has 84.0M. The differences in parameter count stem mostly from small differences in model architecture, which empirically do not significantly impact performance. For instance, whereas the baselines' encoders use positional embeddings, ours does not. *Training strategy. Building upon TorchAudio's base LibriSpeech Conformer transducer recipe, we create two training recipes, with one reproducing ESPnet's recipe and the other icefall's. Both apply online speed perturbation with factors uniformly sampled from {0.9, 1.0, 1.1}. The former applies SpecAugment <cit.> with parameters (T, m_T, F, m_F) = (40, 2, 30, 2) and omits additive noise. The latter applies SpecAugment with parameters (100, 10, 27, 2) and includes additive noise, which entails sampling a waveform from MUSAN's “noise” and “music” subsets <cit.> and adding it to a training sample with probability 0.5 and signal-to-noise ratio (SNR) uniformly sampled from (15, 30) dB.Both recipes use the Adam optimizer with weight decay factor 2e-6. Our learning rate scheduler is similar to Noam <cit.> in warmup (up to 40 epochs) and annealing (starting from epoch 120 with factor 0.96) steps, with the addition of an 80-epoch plateau at value 8e-4. *Results. With comparable model architectures and training setups, our Conformer transducer recipe performs similarly to or better than ESPnet's and icefall's.Table <ref> shows that the performance of our recipe lies between those of the two ESPnet baselines. Compared with the baseline without CTC auxiliary loss, our recipe produces a model that achieves a 4.3%/7.8% relative improvement on test-other/clean. We note, however, that including the auxiliary loss allows the ESPnet recipe to achieve a 10.8%/9.7% relative improvement on test-other/test-clean. With the same SpecAugment policy and usage of additive noise, our model performs similarly to icefall's on test-clean and outperforms it by 9.1% on test-other (Table <ref>).§.§ Streaming AV-ASR*Datasets[All data collection and processing performed at Imperial College London.]. 
In this study, we use the LRS3 dataset <cit.>, which consists of 151,819 TED Talk video clips totaling 438 hours. Following <cit.>, we also include English-speaking videos from AVSpeech (1,323 hours) <cit.> and VoxCeleb2 (1,307 hours) <cit.> as additional training data along with automatically-generated transcriptions. Our model is fed raw audio waveforms and face regions of interest (ROIs). We do not use mouth ROIs as in <cit.>, nor facial landmarks or attributes, during either training or testing. *Model architecture and training. We consider two configurations: Small with 12 Emformer blocks and Large with 28, with 34.9M and 383.3M parameters, respectively. Each AV-ASR model comprises frontend encoders, a fusion module, an Emformer encoder, and a transducer model. We use convolutional frontends <cit.> to extract features from raw audio waveforms and facial images. The features are concatenated to form 1024-d features, which are then passed through a two-layer multi-layer perceptron and an Emformer transducer model <cit.>. The entire network is trained using RNN-T loss. *Results. Non-streaming evaluation results on LRS3 are presented in Table <ref>. Our audio-visual model with an algorithmic latency <cit.> of 800 ms (160 ms + 1280 ms × 0.5) yields a WER of 1.3%, which is on par with those achieved by state-of-the-art offline models such as AV-HuBERT, RAVEn, and Auto-AVSR. We also perform streaming evaluation, adding babble acoustic noise to the raw audio waveforms at various signal-to-noise ratios. With increasing noise level, the performance advantage of our audio-visual model over our audio-only model grows (Table <ref>), indicating that incorporating visual data improves noise robustness. Furthermore, we measure real-time factors (RTFs) using a laptop with an Intel® Core™ i7-12700 CPU running at 2.70 GHz and an NVIDIA GeForce RTX 3070 Ti GPU. To the best of our knowledge, this is the first AV-ASR model that reports RTFs on the LRS3 benchmark. The Small model achieves a WER of 2.6% and an RTF of 0.87 on CPU (Table <ref>), demonstrating its potential for real-time on-device inference applications. §.§ Multi-channel speech enhancement *Datasets. To validate the efficacy of TorchAudio's MVDR beamforming module, we use the L3DAS22 3D speech enhancement task (Task1) dataset <cit.>, which contains 80 and 6 hours of audio for training and development, respectively. Each sample in the dataset comprises a far-field mixture recorded by two four-channel ambisonic microphone arrays and the corresponding target dry clean speech and transcript. *Model architecture and training. Experiments are conducted following the mask-based MVDR beamforming methodology described in <cit.>. First, a Conv-TasNet-based mask network is applied to compute the complex-valued spectrum and estimate the time-frequency masks for speech and noise. The mask network consists of a short-time Fourier transform (STFT) layer and a Conv-TasNet model with its feature encoder and decoder removed. Then, the MVDR module is applied to the masks and multi-channel spectrum to produce the beamforming weights. Finally, the beamforming weights are multiplied with the multi-channel STFT to produce a single-channel enhanced STFT from which the enhanced waveform is derived via inverse STFT. We use Ci-SDR <cit.> as the loss function since dry clean signals are generally not aligned with multi-channel inputs in real-world scenarios. Model configurations and training details can be found in <cit.>. *Results. 
We evaluate the impact of the mask-based MVDR beamforming model alongside various baselines on downstream ASR performance. First, we evaluate each model on the test set of the L3DAS22 dataset to generate the corresponding enhanced speech. Then, we evaluate the Conformer transducer model presented in Section <ref> on the enhanced speech and compute the WER between the generated transcriptions and the true transcriptions. Separately, we also evaluate a Wav2Vec-2.0-based ASR model on the enhanced speech and dry clean speech and compute the WER between the two sets of generated transcriptions, per the L3DAS22 Challenge's WER metric. The results (Table <ref>) imply that the mask-based MVDR model significantly improves ASR performance compared to other methods, validating the efficacy of TorchAudio's MVDR module. §.§ Reference-less speech assessment As discussed in Section <ref>, it can be challenging to compute signal-level speech enhancement metrics (e.g., Si-SDR) in real-world scenarios since obtaining aligned dry clean signals is difficult. Here, we conduct a case study of TorchAudio-Squim's utility in evaluating enhanced signals assuming such scenarios. Using TorchAudio-Squim, we estimate STOI, PESQ, and Si-SDR for the enhanced speech generated in Section <ref>. Table <ref> suggests that the scores predicted by TorchAudio-Squim are consistent with actual speech quality and intelligibility.By jointly leveraging TorchAudio's mask-based MVDR beamforming model, Conformer transducer model, and TorchAudio-Squim, we show that one can perform multi-channel speech enhancement, ASR, and speech quality assessment all within TorchAudio.§ CONCLUSIONTorchAudio 2.1 offers many compelling audio and speech machine learning components. Not only are its components well-designed and easy to use, but they are also effective and performant, as corroborated by our empirical results. Consequently, the library establishes a sound basis for future work in alignment with its ultimate goal of accelerating the advancement of audio technologies, and we look forward to seeing what its incredible community of users will achieve with it next. § ACKNOWLEDGEMENTSWe thank all of TorchAudio’s contributors on Github, including Grigory Sizov, Joel Frank, Kuba Rad, Kyle Finn, and Piotr Bialecki. We thank Andrey Talman, Danil Baibak, Eli Uriegas, Nikita Shulga, and Omkar Salpekar for helping with TorchAudio’s releases. We thank Abdelrahman Mohamed, Buye Xu, Daniel Povey, Didi Zhang, Donny Greenberg, Hung-yi Lee, Laurence Rouesnel, Matt D’Zmura, Mei-Yuh Hwang, Soumith Chintala, Thomas Lunner, Wei-Ning Hsu, and Xin Lei for the many valuable discussions.IEEEbib§ USAGE EXAMPLES | http://arxiv.org/abs/2310.17864v1 | {
"authors": [
"Jeff Hwang",
"Moto Hira",
"Caroline Chen",
"Xiaohui Zhang",
"Zhaoheng Ni",
"Guangzhi Sun",
"Pingchuan Ma",
"Ruizhe Huang",
"Vineel Pratap",
"Yuekai Zhang",
"Anurag Kumar",
"Chin-Yun Yu",
"Chuang Zhu",
"Chunxi Liu",
"Jacob Kahn",
"Mirco Ravanelli",
"Peng Sun",
"Shinji Watanabe",
"Yangyang Shi",
"Yumeng Tao",
"Robin Scheibler",
"Samuele Cornell",
"Sean Kim",
"Stavros Petridis"
],
"categories": [
"eess.AS",
"cs.SD"
],
"primary_category": "eess.AS",
"published": "20231027030051",
"title": "TorchAudio 2.1: Advancing speech recognition, self-supervised learning, and audio processing components for PyTorch"
} |
[email protected] Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USA Department of Physics, Northeastern University, Boston, MA 02115, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USA Department of Materials Science and Engineering, Stanford University, Stanford, CA 94305, USA Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025, USAA lightly doped single-band Hubbard model on a two leg ladder exhibits a Luther-Emery phase, while the three-band Hubbard ladder behaves as a Luttinger liquid upon hole doping. In order to understand this discrepancy,we present a systematic density-matrix renormalization group study of the three-band Hubbard model on two-leg cylinders with further-neighbor particle hoppings. The inclusion of the longer-range hopping is motivated by the studies of the single-band Hubbard model in which the further-neighbor hopping terms are suggested to be crucial for the unconventional superconductivity. When the longer-range hopping parameters are small, the ground state is a Luttinger liquid having mutually commensurate superconducting, charge and spin density wave correlations. Increasing the longer-range hopping drives a transition into a Luther-Emery phase with quasi-long ranged superconducting and charge orders but short-ranged spin-spin correlations. By down-folding the three-band Hubbard model into an effective t-t'-J-J' model, we find that in the Luther-Emery phase, both the nearest and second neighbor kinetic energies are enhanced due to an effective increase of copper-oxygen hybridization. Amplifying inter-cell oxygen orbital hopping mirrors the benefits of reducing the charge transfer energy, causing doped holes to favor oxygen orbitals and strengthening superconducting pairing.Recovery of a Luther-Emery phase in the three-band Hubbard model with longer-range hopping Hong-Chen Jiang 31 May 2023 ==========================================================================================§ INTRODUCTION The three-band Hubbard model, which is first proposed by Emery <cit.>, can depict the lattice structure of copper and oxygen orbitals in the cuprate superconductors. This model is of particular interest because 1) it takes into account the charge-transfer energy and describes the structure of cuprate materials better than the single-band Hubbard model, and 2) it provides geometrically decoupled spin and charge degrees of freedom in some parameter range. These characters make it a potentially suitable candidate that favors the pairing order. Besides, recent studies on the three-band Hubbard ladder suggest the presence of a pair density wave (PDW) ground state<cit.>, which is a rare find in the microscopic realization of PDW <cit.>.In the context of high temperature superconductivity, extensive studies have been conducted on the single-band Hubbard model <cit.>. There are some deep relations between the single-band and three-band Hubbard models: the similarity in the fundamental excitations is illustrated by the Zhang-Rice singlet picture <cit.>; under certain circumstances, the three-band Hubbard model can be down-folded to an effective single-band Hubbard model or t-J model <cit.>, where similar ground state propertieshave been revealed in recent decades <cit.>. 
The ground state of the single-band Hubbard model at half filling on a two dimensional square lattice is a Mott insulator with long range magnetic order <cit.>. The three-band Hubbard model at half filling also has been found to exhibit AFM order <cit.>.However, discrepancies between these two models are not negligible.Although it has been extensively studied in the context of high temperature superconductivity, no evidence of PDW has been found in the single-band Hubbard model. More strikingly, the single-band Hubbard ladder exhibits a Luther-Emery phase with dominant superconducting and charge orders upon light doping <cit.>, while the three-band Hubbard ladder is a Luttinger liquid for the same doping concentrations <cit.>.In order to understand these discrepancies, we studythe three-band Hubbard model with longer-range hopping on a two-leg ladder. The inclusion of the longer-range hopping is motivated by previous studies on the single-band Hubbard model, in which the further neighbor particle hoppings are essentialfor superconductivity <cit.>.We study the ground state properties of the lightly doped three-band Hubbard model on two-leg cylinders of Lieb lattices. The further neighbor hopping terms we introduce here are the hopping between copper sites in the adjacent cells with coefficients t_dd, and the hopping between oxygen sites in the adjacent cells with coefficients t_pp^'. By tuning the hopping parameters, we get a phase diagram with a Luttinger Liquid phase with intertwined PDW, charge density wave (CDW) and spin density wave (SDW) correlationswhen both hoppings are close to zero, and a Luther-Emery SC phase with d-wave symmetry when these two parameters are greater than a critical value.The rest of the manuscript is organized as follow: in Sec. <ref> we describe the model Hamiltonian and illustrate the phase diagram; in Sec. <ref> we show various ground state correlations and discuss how the hopping parameters affect the ground state properties; in Sec. <ref> we present an analysis of down-folding the three-band Hubbard model to an effective t-t^'-J-J^' model using the exact diagonalization (ED) method; and finally in Sec. <ref> we summarize our results. § MODEL AND PHASE DIAGRAM We use the density-matrix renormalization group (DMRG) method <cit.> based on iTensor library <cit.> to study the ground state phase diagram of the three-band Hubbard model with further neighbor hoppings on two-leg (L_y=2) Lieb lattice cylinders with length up to L_x=64. By keeping bond dimensions up to m=7000 we make sure the truncation error remains in the order of 10^-7 or below. The Lieb lattice we simulate is shown in the top panel of Fig. <ref>. The “squares” and “circles” represent copper and oxygen orbitals, respectively. We implement periodic (open) boundary condition along the vertical (horizontal) direction.We study the following model Hamiltonian in the hole language: H = H_tb+H_intH_tb= -T^pd - T^pp -T^dd - T^pp' +Δ_pd∑_in_i^p H_int =U_d∑_in_↑,i^d n_↓,i^d + U_p∑_in_↑,i^p n_↓,i^p,where the kinetic term is defined as T^αβ = ∑_i,j;σt_αβ(c_ i,σ^α† c_j,σ^β + h.c.), and α, β belongs to each kind of orbitals. We take into account four types of hopping terms in this model as shown in the top panel of Fig. <ref>: 1) t_pd between the adjacent oxygen and copper sites; 2) t_pp between the nearest p_x and p_y orbitals; 3) t_dd between copper sites in the adjacent unit cells; 4) t_pp' between the same type of oxygen sites in the adjacent cells. 
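As a rough orientation for how the four hopping terms enter H_tb, the following sketch builds the kinetic part of a drastically simplified model: a single 1D Cu-O chain with one d and one p orbital per unit cell, rather than the actual two-leg Lieb-lattice cylinder used in the DMRG calculations. The amplitudes t_pd, t_dd and t_pp^' below are placeholder values chosen only for illustration, and the sign convention is arbitrary.

```python
import numpy as np

# Drastically simplified 1D Cu-O chain (one d and one p orbital per unit cell),
# used only to show where t_pd, t_dd and t_pp' enter the kinetic Hamiltonian.
# This is NOT the two-leg Lieb-lattice geometry of the DMRG calculations.
t_pd, t_dd, t_ppp = 1.0, 0.2, 0.2   # placeholder amplitudes

def h_kinetic(k):
    # 2x2 Bloch Hamiltonian in the (Cu d, O p) basis at crystal momentum k
    h = np.zeros((2, 2), dtype=complex)
    h[0, 1] = -t_pd * (1.0 + np.exp(-1j * k))   # Cu-O hopping within/between cells
    h[1, 0] = np.conj(h[0, 1])
    h[0, 0] = -2.0 * t_dd * np.cos(k)           # Cu-Cu hopping between adjacent cells
    h[1, 1] = -2.0 * t_ppp * np.cos(k)          # O-O hopping between adjacent cells
    return h

ks = np.linspace(-np.pi, np.pi, 201)
bands = np.array([np.linalg.eigvalsh(h_kinetic(k)) for k in ks])
print("band widths:", np.round(bands.max(axis=0) - bands.min(axis=0), 3))
```

In this toy setting, turning on t_dd and t_pp^' merely broadens the d- and p-derived bands; the interplay with interactions on the full ladder is of course richer, as discussed below.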
In this work, we keep U_d = 8, U_p=3, t_pd=1, t_pp=0.5, Δ_pd=3 <cit.> unless otherwise specified. t_dd and t_pp^' are the tuning parameters. We follow the sign convention in Ref. <cit.>, which is equivalent to the original Emery model by a gauge transformation.We calculate the following correlation functionsto study the ground state properties of the systems. These include the spin-spin correlation functionS(r) = ⟨ S^z_0S^z_r⟩,the charge density-density fluctuation correlation function D(r) = ⟨ n_0n_r⟩ - ⟨ n_0⟩⟨ n_r⟩,the single particle Green function G(r) = ⟨ c^†_↑,0c_↑,r⟩,and the spin-singlet SC pair-pair correlation functionΦ(r) = ⟨Δ^†_0Δ_r⟩ .Here Δ^† represents the spin-singlet Cooper pair creation operator between neighboring sites Δ^†_i = 1/√(2)(c^†_i,↓c^†_i+1,↑ - c^†_i,↑c^†_i+1,↓). In the bottom panel of Fig.<ref> we show the phase diagram as a function of t_dd and t_pp^' at δ = 1/8hole doping concentration.When both hopping parameters are around zero, the system is in a Luttinger liquid phase with intertwined PDW, CDW and SDW correlations. In order to find the strongest signal of the pairing order, we have computed the SC pairing correlations on different bonds (see Appendix <ref>) and find that Φ_hh(r), i.e., the spin-singlet SC pairs on nearest copper sites along x-direction, is the dominant SC component. By increasing both hopping coefficients to the positive values, the spin-singlet pairing correlations are further enhanced and the system eventually undergoes a quantum phase transition to a d-wave SC phase. The properties of this d-wave SC phase is consistent with that of a Luther-Emery liquid state with quasi-long-range SC and CDW correlations but short-range spin-spin correlation. Different with the Luttinger liquid phase, the spin-singlet pairing correlationbetween adjacent copper sites along y-direction Φ_uu(r) becomes dominant over all the other bonds here, and exhibits a d-wave symmetry.§ GROUND STATE CORRELATIONS When t_dd and t_pp^' are both small and positive, the ground state of the system is consistent with that of the Luttinger liquid phase with power-law single particle, PDW, SDW and CDW correlations. However, different with the single-band Hubbard model, the SC correlation (Fig. <ref>(b)) in this case has a spatial oscillation with sign change, which can be described by the following formula:⟨Δ_0^†Δ_r ⟩ = A· cos(ω r+ϕ)/r^κ_pdw + B/r^κ_scIf the value of B≤ A, the overall fluctuation has a spatial oscillation around zero, which suggests the presence of the PDW order. Our results are consistent with this, where we find that A≈ 0.01, B≈ 0.001 when t_dd=t_pp^'=0. For t_dd=t_pp^'=0.2, we find that A≈ 0.006, B≈ 0.02. The charge density properties of the system can be described by the charge density profilen_α(x,y) and its rung average ρ_α(x)=∑_y=1^L_yn_α(x,y)/L_y. Consistent with previous studies <cit.>, the spatial decay of the CDW correlation at long distance is dominated by a power-law with an exponent κ_c with two ordering wavevectors Q=2πδ and 2Q. The value of the exponent κ_c can be obtained by fitting the charge density oscillations ρ(x) with a generalized Friedel oscillation formula induced by the open boundaries of the cylinder <cit.>ρ(x) = A_Q∗ cos(Q x + ϕ_1)/x^κ_c/2 +A_2Q∗ cos(2Q x + ϕ_2)/x^κ_c/2 +n_0,where A_Q and A_2Q are the amplitudes, ϕ_1 and ϕ_2 are the phase shifts and n_0 is the mean density. It has been shown thatthe 2Q charge order usually competes with the superconductivity <cit.>.which is also consistent with our results. 
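Since the distinction between the Luttinger-liquid (PDW-like) and Luther-Emery regimes rests on the relative size of the oscillating and uniform components of the pair-pair correlation introduced above, a least-squares fit to that two-component form is one natural way to extract A, B and the exponents. The sketch below applies such a fit to synthetic data; the parameter values and the use of scipy's curve_fit are our own illustrative choices, not the fitting procedure of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def phi_model(r, A, B, omega, phi0, k_pdw, k_sc):
    # oscillating (PDW-like) plus uniform (SC) power-law components
    return A * np.cos(omega * r + phi0) / r**k_pdw + B / r**k_sc

# Synthetic stand-in for a DMRG pair-pair correlation; all values are arbitrary
# and serve only to illustrate the extraction of A and B.
r = np.arange(4.0, 40.0)
rng = np.random.default_rng(0)
data = phi_model(r, 0.01, 0.001, 2*np.pi/8, 0.3, 1.2, 1.5) + 1e-5*rng.standard_normal(r.size)

p0 = [0.005, 0.005, 2*np.pi/8, 0.0, 1.0, 1.0]
popt, _ = curve_fit(phi_model, r, data, p0=p0, maxfev=20000)
A, B = popt[:2]
print(f"A = {A:.4f}, B = {B:.4f} ->",
      "oscillation dominates (PDW-like)" if abs(B) <= abs(A) else "uniform SC part dominates")
```

A fitted B exceeding A signals that the uniform component dominates, which is the situation encountered in the Luther-Emery regime discussed below.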
We will see that when the system enters the superconducting phase, the 2Q mode is suppressed.By increasing the parameters to the larger positive values, the system enters a distinct d-wave SC phase with dominant power-law superconducting correlations but exponentially decaying single-particle and spin-spin correlations. In the d-wave SC phase, the short-ranged spin-spin correlation is mutually commensurate with the chargecorrelation at the wavevector Q, although both of them are incommensurate on the finite lattice with open boundaries in the longer direction. Contrary to the Luttinger liquid phase with small further neightor electron hoppings, we find that in this d-wave SC phase the coefficient B is greater than A . Moreover, the charge density modulation has only one characteristic wave vector Q (Fig. <ref>). The 2Q mode is suppressed while the SC order is enhanced. The symmetry of the SC correlations can be determined by comparing the relative signs of the SC pair-pair correlations between different bonds. The right panel of Fig. <ref> shows that the pairing correlations between u bonds display values in opposite sign with the correlations between u and h bonds (see in Fig. <ref> for the definition of the bonds). Besides, the multiplication of κ_c and κ_sc is close to 1. All these suggest that the system is in a Luther-Emery phase, with a d-wave pairing symmetry.The results above show that the inter-cell hopping termsenhance the SC correlations while suppressing the spin-spin correlation. This is closely connected to the results of charge transfer energy which also suggests that the density distribution is associated with the intertwined orders in the charge transfer insulators <cit.>. In order to understand how these two pictures reconcile, we study how the hopping terms affect the density distributions on each orbital. It is shown in <cit.> that for the undoped three-band Hubbard model, about 70% of the holes are on the copper sites. Upon (hole) doping, however, most of the doped holes will occupy the oxygen sites <cit.>. Interestingly, if we turn on the further neighbor electron hoppings t_dd and t_pp^', we find that the effect of increasing t_pp^' is equivalent to decreasing the effective Δ_pd, where the average copper density will decrease (See Appendix <ref>). On the contrary, the influence of t_dd on density distribution is negligible.As a complementary comparison, we have computed the SC pairing correlation for different charge transfer energies Δ_pd while all the other parameters remain the same. In Fig. <ref> we show the SC pairing correlations for two representative cases in each of the two phases, where we find that for both phases, decreasing Δ_pd can enhance the SC pairing correlations.§ DOWN-FOLDING THE THREE-BAND HUBBARD MODEL TO AN EFFECTIVE T-T^'-J-J^' MODELThe low energy physics of the three-band Hubbard model (or CuO_2 plane) can be mapped to an effective t-t^'-J-J^' model in Eq. <ref>. Following the prescription in Ref. <cit.>, we present the down-folding for the three-band Hubbard model with further neighbor hopping terms. In this scheme we consider two types of small clusters: the Cu_2O_7 cluster with two unit cells aligned, andthe Cu_2O_8 cluster with the two unit cells along the diagonal direction. Specifically, the hopping parameter t (t^') is determine by the spin singlet and triplet energy splitting of a Cu_2O_7 (Cu_2O_8) cluster; J (J^') is determined by the energy difference between bonding and anti-bonding states of a Cu_2O_7 (Cu_2O_8) cluster. 
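As a minimal illustration of how effective parameters can be read off small-cluster spectra, the sketch below performs exact diagonalization of a two-site, single-band Hubbard dimer, extracting an effective hopping from the one-electron bonding-antibonding splitting and an effective exchange from the two-electron singlet-triplet splitting. This is only a single-band analogue of the Cu_2O_7/Cu_2O_8 cluster procedure described above, with arbitrary t and U; it is not the actual three-band downfolding.

```python
import numpy as np

t, U = 1.0, 8.0   # arbitrary illustrative values

# One-electron sector: bonding/antibonding splitting = 2 * t_eff
e1 = np.linalg.eigvalsh(np.array([[0.0, -t], [-t, 0.0]]))
t_eff = 0.5 * (e1[1] - e1[0])

# Two-electron, S_z = 0 sector in the basis {|u,d>, |d,u>, |ud,0>, |0,ud>}
H2 = np.array([[0.0, 0.0,  -t,  -t],
               [0.0, 0.0,   t,   t],
               [ -t,   t,   U, 0.0],
               [ -t,   t, 0.0,   U]])
e2 = np.sort(np.linalg.eigvalsh(H2))
J_eff = 0.0 - e2[0]   # the S=1 (S_z=0) state stays at E = 0; the singlet is e2[0]
print(f"t_eff = {t_eff:.3f}, J_eff = {J_eff:.3f}  (perturbative 4t^2/U = {4*t*t/U:.3f})")
```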
The results are shown in Table <ref>. When the three-band Hubbard model is in the d-wave SC phase, we find in the effective model that the hoppings to both the nearest (t) and next nearest neighbors (t^') are enhanced, as well as the spin exchange coupling J. The strengthening of the hybridization between orbitalsmakes the pairing orders more favorable. H_t-t^'-J-J^'= - t∑_⟨ i,j ⟩σ(c_i,σ^† c_j,σ + h.c.)- t^'∑_⟨⟨ i,j ⟩⟩σ(c_i,σ^† c_j,σ + h.c.)+ J∑_⟨ i,j ⟩σ(S⃗_i·S⃗_j - 14 n_in_j)+ J^'∑_⟨⟨ i,j ⟩⟩(S⃗_i·S⃗_j - 14 n_in_j) To further support this argument, we have also calculated the ground state properties of the t-t^'-J-J^' model on a N=64× 2 ladder with DMRG, using the parameters listed in Table <ref>. We choose two representative sets of parameters in the first and last rows of the table to implement the calculations. The corresponding three-band Hubbard models of these two data points are in the Luttinger liquid phase and d-wave SC phase respectively. The results are presented in Fig. <ref>. We can see in the left panel that all the correlations decay as a power-law, with dominant single particle and spin correlations. However in the right panel, the SC pairing and CDW correlations become dominant while the spin and single particle correlations are short-ranged which decay exponentially. Similar with three-band Hubbard model, the pairing symmetry of the SC correlations is also d-wave in this case. § CONCLUSIONSTo summarize, we have studied the ground state properties of the lightly doped three-band Hubbard model with longer-range hopping terms on two-leg cylinders. By tuning these hopping coefficients positively, we observed a quantum phase transition from a Luttinger liquid phase characterized by intertwined PDW, CDW, and SDW correlations to a d-wave SC phase (i.e., Luther-Emery phase). Through this transition, the pairing order intensifies, changing from a mixed SC and PDW order to a predominantly d-wave symmetry SC order. This transition is underlined by modifications in the band structure and density distributions stemming from the tuning of hopping parameters. Our computational analyses pinpointed an intriguing correlation: the increase in the t_pp^' parameter mimics the effect of a reduced effective charge transfer energy, Δ_pd.By down-folding the three-band Hubbard model to the effective t-t^'-J-J^' model, we observed an increase of both the nearest and second neighbor hopping parameters and the ratio of J/t, when entering the Luther-Emery phase. This finding sheds light on the question “why the lightly doped three-band Hubbard ladder behaves as a Luttinger liquid but not the Luther-Emery liquid, as the single-band Hubbard model does”. From our small cluster study we found that although these two models were closely related, the parameters of the original three-band Hubbard model did not have a strong enough hybridization between orbitals. However, by introducing longer-range hopping terms, the Luther-Emery phase emerges in the vicinity of the Luttinger liquid phase.In this study, we have examined the effect of further-neighbor particle hopping terms on the ground state properties of the three-band Hubbard model on two-leg cylinders. It is worth noting that the recovery of the Luther-Emery phase on the three-band Hubbard model can also be achieved by introducing long-range Coulomb interactions <cit.>. An intriguing avenue for future research would be to ascertain whether this influence persists in wider systems and at higher doping levels. 
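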
A comprehensive understanding would also necessitate the exploration of the combined effects of further-neighbor hopping and long-range Coulomb interactions, for which previous studies have demonstrated that the latter is crucial for enhancing, or in some instances even inducing, the PDW order and superconductivity <cit.>. § ACKNOWLEDGEMENT This work was supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515. Computational work was performed on the Sherlock cluster at Stanford University and on resources of the National Energy Research Scientific Computing Center, supported by the U.S. DOE, Office of Science, under Contract no. DE-AC02-05CH11231. § EFFECTS OF HOPPING PARAMETERS AND CHARGE TRANSFER ENERGY ON DENSITY DISTRIBUTION In order to demonstrate how the hopping coefficients affect the number density, we compute the average density on copper and oxygen sites for systems with length L_x=48 for two cases: (1) half filling and (2) 1/8 hole doping. Fig. <ref> shows the average local densities on the Cu, O_x and O_y sites for different values of the inter-cell hoppings. In both the half-filled and hole-doped regimes, the density on the copper sites depends mainly on the value of t_pp^' and remains nearly unchanged with t_dd. As t_pp^' increases, the occupation number on the Cu sites decreases and more holes move to the oxygen sites, which is similar to the effect of decreasing the charge transfer energy Δ_pd. Nevertheless, the doped holes (the increase of the hole density upon doping) depend on both t_dd and t_pp^'. With increasing t_pp^', the doped holes on both O_x and O_y decrease. With increasing t_dd, the doped holes on O_x and O_y show opposite trends, and the number of doped holes on the copper sites decreases. § COMPARISON OF THE PAIRING ORDERS ON DIFFERENT BONDS In Fig. <ref> we show the pairing correlations on all types of bonds illustrated in the top panel of the schematic Fig. <ref>. When the system is in the PDW phase, the PDW order on the h bonds is slightly stronger than all the others, although all of the pairing orders are quasi-long-ranged. However, in the d-wave SC phase, the SC orders on the u and h bonds are dominant and are stronger than the others by a few orders of magnitude. § GROUND STATE CORRELATIONS IN THE LL PHASE We choose one representative data point in the LL phase and present its ground state properties. For the case t_dd=t_pp^'=0, we show the pairing correlations in Fig. <ref>. The pairing order has a spatial oscillation with a vanishing average and exhibits a d-wave symmetry. Fig. <ref> shows the local density profile and the decaying exponent obtained by curve fitting with Eq. <ref>. | http://arxiv.org/abs/2310.17706v1 | {
"authors": [
"Luhang Yang",
"Thomas P. Devereaux",
"Hong-Chen Jiang"
],
"categories": [
"cond-mat.str-el",
"cond-mat.supr-con"
],
"primary_category": "cond-mat.str-el",
"published": "20231026180756",
"title": "Recovery of a Luther-Emery phase in the three-band Hubbard model with longer-range hopping"
} |
[email protected] Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627, USA Institute for Quantum Studies, Chapman University, 1 University Drive, Orange, CA 92866, USAInstitute for Quantum Studies, Chapman University, 1 University Drive, Orange, CA 92866, USA Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627, USA Determining the Markovianity and non-Markovianity of a quantum process is a critical problem in the theory of open quantum systems, as their behaviors differ significantly in terms of complexity. It is well recognized that a quantum process is Markovian if and only if the quantum master equation can be written in the standard Lindblad form with all rates nonnegative for all time. However, here we present a striking result that any finite-dimensional open quantum system dynamics can be described by a quantum master equation in the Lindblad form with all rates nonnegative for all time. In fact, it can be shown that one can arbitrarily decide the sign of the rates in any case at any time interval. Note that here we take an unconventional approach where the quantum master equation we construct will in general be state-dependent, which means that the Hamiltonian, jump operators and rates will all depend on the current state of the density matrix ρ(t). Our findings raise serious questions on the current criterion in determining Markovianity and non-Markovianity in open quantum system dynamics.Signs of the rates in the Lindblad master equations can always be arbitrarily determined and Andrew N. Jordan January 14, 2024 ======================================================================================== Introduction.—Quantum systems that interact with their environment are called open quantum systems. Since no quantum systems are completely isolated, all realistic quantum systems are open, which means that their evolution is generally non-unitary. As a result, one cannot describe their evolution by the Schrödinger equation alone, but has to rely on other means such as quantum master equations. It can be shown <cit.> that all quantum master equations can be cast into the standard Lindblad form,ρ̇=-i[H, ρ]+∑_i=1^d^2-1γ_i(L_iρ L_i^†-1/2{L_i^† L_i, ρ}),where the Hamiltonian H, jump operators L_i and rates γ_i are potentially time-dependent. An important problem in the theory of open quantum systems is determining the Markovianity of a Lindblad master equation, since Markovian systems are usually significantly easier to describe. It is well recognized <cit.> that a quantum master equation in the standard Lindblad form is Markovian if and only if γ_i(t) ≥ 0 for all time t, corresponding to the complete positive and trace-preserving (CPTP) process <cit.>.However, in this Letter, we are going to show a striking result, that any finite, d-dimensional continuous open quantum system dynamics can be described by the quantum master equation in the standard Lindblad form with nonnegative rates γ_i(t) ≥ 0for all i and all time. Moreover, we can prove exactly the same conclusion, but this time with nonpositive rate γ_i(t) ≤ 0 for all i and all time. Actually, it turns out that one can even decide freely which γ_i(t) is going to be nonpositive, and which is going to be nonnegative, in any case. This finding raises serious questions on the current criterion in determining Markovianity. In the following, we are going to prove the above statement. 
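Before the proof, a direct numerical sketch of the right-hand side of the Lindblad equation above may be helpful; it is written in plain numpy for an arbitrary finite-dimensional H, jump operators L_i and rates γ_i, with two-level operators at the end used only as placeholders. The checks that the result is traceless and Hermitian hold for any choice of rates, positive or negative.

```python
import numpy as np

def lindblad_rhs(rho, H, Ls, gammas):
    # right-hand side of the Lindblad master equation (hbar = 1)
    drho = -1j * (H @ rho - rho @ H)
    for g, L in zip(gammas, Ls):
        LdL = L.conj().T @ L
        drho += g * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return drho

# Placeholder two-level example: sigma_z Hamiltonian and a single jump operator.
sz = np.diag([1.0, -1.0]).astype(complex)
L1 = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # |0><1|
rho0 = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)

d = lindblad_rhs(rho0, 0.5 * sz, [L1], [0.3])
print("Tr(d rho/dt) ~ 0:", abs(np.trace(d)) < 1e-12, " Hermitian:", np.allclose(d, d.conj().T))
```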
Proof.—Here we will focus on the 4-d case for the purpose of better illustration, but the same reasoning can be easily carried over to arbitrarily finite dimensional systems, demonstrated later. To start with, let us first consider the simplified case that the Hamiltonian H(t)=0 and density matrix ρ_D(t) is diagonalized, such that ρ_D= diag(p_1, p_2, …, p_d) and ρ̇_D=diag(f_1, f_2, …, f_d) (in our case, d=4). We define the following jump operators, a_1 =[ 0 0 0 0; 1 0 0 0; 0 0 0 0; 0 0 0 0 ];a_2=[ 0 0 0 0; 0 0 0 0; 0 1 0 0; 0 0 0 0 ];a_3=[ 0 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 1 0 ]a_4 =[ 0 0 0 0; 0 0 0 0; 1 0 0 0; 0 0 0 0 ]; a_5=[ 0 0 0 0; 0 0 0 0; 0 0 0 0; 0 1 0 0 ]; a_6=[ 0 0 0 0; 0 0 0 0; 0 0 0 0; 1 0 0 0 ],and their conjugate transpose a^†_i's. We also denote the quantum channel Φ_a_i, associated with a_i, and quantum channel Φ_a^†_i, associated with a^†_i, as Φ_a_i[ρ(t)] =γ_i(t)(a_iρ(t) a_i^†-1/2{a_i^† a_i, ρ(t)}) Φ_a^†_i[ρ(t)] =γ^'_i(t)(a^†_iρ(t) a_i-1/2{a_i a^†_i, ρ(t)}),Then one can calculate Φ_a_i's and Φ_a^†_i's. For instance, Φ_a_1[ρ_D(t)] =[ -γ_1 p_1000;0γ_1 p_100;0000;0000 ]; Φ_a^†_1[ρ_D(t)]=[γ^'_1 p_2000;0 -γ^'_1 p_200;0000;0000 ] Φ_a_2[ρ_D(t)] =[0000;0 -γ_2 p_200;00γ_2 p_20;0000 ]; Φ_a^†_2[ρ_D(t)]=[0000;0γ^'_2 p_300;00 -γ^'_2 p_30;0000 ]. There are a few critical properties of the matrices Φ's presented above: * Each Φ is diagonalized, and generally contains exactly two non-zero entries; * Each Φ is traceless, meaning that if one of the non-zero entry equals +g then the other one must equal -g. * Φ_a_i and Φ_a^†_i are linearly dependent with each other for a given i, but they have the opposite sign if γ_i and γ^'_i have the same signs. * Φ_a_i's are generally linearly independent with each other for different i. Similarly for Φ_a^†_i's.Note that the above properties also apply to the d-dimensional case, where we will have ∑_k=1^d-1 k=d(d-1)/2 linearly independent a_i's, and d(d-1)/2 linearly independent a^†_i's, resulting in a total number of d(d-1) jump operators. On the other hand, to select two eigenvalues out of d, there are d2=d(d-1)/2 different ways. Since there are exactly d(d-1)/2 Φ_a_i's (or Φ_a^†_i's), the linearly independency of Φ_a_i's (or Ψ_a^†_i's) means that Φ_a_i's (or Ψ_a^†_i's) span the space formed by all possible “2-eigenvalue channel”, i.e. the quantum process that exactly two eigenvalues of the density matrix changes.Recall that we need to solve the Eq. (<ref>) for nonnegative (later generalized to other cases) γ_i's and γ^'_i's, assuming arbitrary physical and known p_i's and f_i's, via the jump operators a_i's and a^†_i's defined above. To do so, we need to solveρ̇_D=∑_i(Φ_a_i[ρ_D(t)]+Φ_a_i^†[ρ_D(t)]), or equivalently,f_i=F_i(γ_1, γ_2, …, γ_d(d-1)/2, γ^'_1, γ^'_2, …, γ^'_d(d-1)/2)where F_i, a function of γ's and γ^''s, is the ith diagonal element of the master equation, whose explicit expression can be easily calculated. All off-diagonal elements of both sides vanish. Note that this is an inhomogeneous undetermined linear system of equations, which means that if there exists a solution, then there are infinitely many solutions. While there are some algebraic ways (e.g. row reduction) to solve the system of equations, here we take another approach. We divide f_i's into two categories: the nonnegative ones, f_+,1, f_+,2, …,f_+,j≥ 0, and negative ones, f_-,1, f_-,2, …,f_-,k < 0, such that j+k=d (in our case, d=4). Note that if all f_i's are zero, then the evolution is unitary and all γ's vanish. To proceed, let us first focus on f_-,1. 
Since f_-,1 is negative, its corresponding eigenvalue p_-,1 of ρ(t) decreases. Such decreased amount of eigenvalue must be compensated by the same amount elsewhere by f_+'s so as to ensure Tr(ρ̇_D)=0. The key idea here is that we have the freedom to decide which f_+ is going to get the compensation by what amount, and each different choice corresponds to a different solution of γ's.Let us say f_+,1 is going to get the compensation as much as f_-,1 can afford and f_+,1 can receive. By the aforementioned analysis, we know that there always exists an Φ_a_l or Φ_a^†_l for some l such that the Φ_a_l or Φ_a^†_l can deliver the compensation with nonnegative rate γ̃_1 ≡γ_l or γ̃_1 ≡γ^'_l. For instance, in d=4 case, if f_2 is negative, and f_1 is positive and will be compensated, then by Eq. (<ref>) we should choose Φ_a^†_1 (but not Φ_a_1); if f_3 is positive and will be compensated, then we should choose Φ_a_2 (but not Φ_a^†_2). In general, there are three possible outcomes of the compensation: * f_+,1 is exactly compensated by f_-,1, which means |f_+,1|= |f_-,1|. In this case, both f_+,1 and f_-,1 are ruled out for future compensations. * f_+,1 is under-compensated, which means |f_+,1|> |f_-,1|. In this case, f_-,1 is ruled out for future compensations, whereas f_+,1 will still enjoy future compensations, and its value is updated to f_+,1+f_-,1. * f_+,1 is over-compensated, which means |f_+,1|< |f_-,1|. In this case, f_+,1 is ruled out for future compensations, whereas f_-,1 will still enjoy future compensations, and its value is updated to f_+,1+f_-,1.As can be seen, at the end of a single round of compensation, at least one of the f's will be ruled out for the future compensations. We proceed the same process for the rest pairs of f_+'s and f_-'s, e.g. f_+,1 and f_-,2, or f_-,1 and f_+,2, etc. In each round, we will solve for a nonnegative rate γ̃_i, and rule out at least one of the f's. In the last round, the f_+ and f_- will exactly cancel out each other, so two f's are guaranteed to be eliminated. Since there are in total of d number of f's, as a result, we will have up to d-1 rounds of compensation, in which we will solve for nonnegative rates γ̃_1, γ̃_2, …, γ̃_d-1. This also implies that as few as d-1 jump operators are sufficient to describe any finite, d-dimensional open quantum system dynamics. This finding echos our recent work <cit.>, in which we concluded that as few as d-1 unitary jump operators are sufficient to describe any d-dimensional open quantum system dynamics.Moreover, we can show by the same method that any quantum master equation can be written in the standard Lindblad form such that all rate γ̃_i(t) ≤ 0 for all time. Actually, we can even choose which γ̃_i's are going to be nonnegative, and which are going to be nonpositive, freely, in any case at any time interval. This is because for a designated sign of γ̃_i, we can always choose from Φ_a_j and Φ_a^†_j for some j and at least one of them will deliver the compensation since Φ_a_jand Φ_a^†_j has the opposite sign. 
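A minimal numerical sketch of the compensation bookkeeping described above, for the diagonal d=4 problem, is given below. The jump operators are the |i⟩⟨j| matrices introduced earlier, and the order in which positive and negative entries of f are paired is an arbitrary implementation choice (each order yields a different, equally valid set of nonnegative rates). The code only verifies that the resulting diagonal master equation reproduces ρ̇_D with at most d-1 nonnegative rates.

```python
import numpy as np

def ket_bra(i, j, d=4):
    A = np.zeros((d, d))
    A[i, j] = 1.0
    return A

def dissipator(L, rho):
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def compensate(p, f, tol=1e-12):
    # Move weight from a level j whose eigenvalue decreases (f_j < 0) to a level
    # i whose eigenvalue grows (f_i > 0) through the channel with jump operator
    # |i><j|, which changes the diagonal of rho_D at rate gamma * p_j.  The
    # pairing order below (first gainer with first loser) is an arbitrary choice.
    f = np.array(f, dtype=float)
    channels = []
    while True:
        gain = [i for i in range(len(f)) if f[i] > tol]
        loss = [j for j in range(len(f)) if f[j] < -tol]
        if not gain or not loss:
            break
        i, j = gain[0], loss[0]
        m = min(f[i], -f[j])
        channels.append((m / p[j], ket_bra(i, j, len(f))))
        f[i] -= m
        f[j] += m
    return channels

p = np.array([0.4, 0.3, 0.2, 0.1])           # eigenvalues of rho_D
f = np.array([0.05, -0.02, -0.04, 0.01])     # a trace-preserving choice of d(rho_D)/dt
rho_D = np.diag(p)
chans = compensate(p, f)
rhs = sum(g * dissipator(L, rho_D) for g, L in chans)
print("nonnegative rates:", [round(g, 4) for g, _ in chans])
print("at most d-1 channels:", len(chans) <= len(p) - 1,
      "| reproduces d(rho_D)/dt:", np.allclose(np.diag(rhs), f))
```

With the example values above, three channels suffice and all rates come out nonnegative; delivering the same compensations through the conjugate channels instead would flip the signs of the rates, as described above.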
After finishing all compensation processes, we will have a list of rates γ̃_1, γ̃_2, …, γ̃_d-1, which are potentially time-dependent, and corresponding jump operators ã_1, ã_2, …, ã_d-1, so that we can write down the master equation for the diagonalized density matrix,ρ̇_D=∑_i=1^d-1γ̃_i ( ã_i ρ_D ã_i^†-1/2{ã_i^†ã_i, ρ_D}).To obtain the master equation for the non-diagonal case, we use the same trick we developed in <cit.>, by establishing an explicit correspondence between ρ_D and ρ, and ρ̇_D and ρ̇, ρ_D(t)=𝕌_t^†ρ(t) 𝕌_t ρ̇_D(t)=𝕌_t^†(ρ̇(t)+i[H(t), ρ(t)]) 𝕌_t,and plugging in ρ̇_D and ρ_D into Eq. (<ref>). The 𝕌_t denotes the unitary matrix diagonalizing ρ(t), and H(t) denotes the Hamiltonian that can solve all instantaneous eigenvectors |ψ_i(t)⟩ of ρ(t), i.e. H(t) |ψ_i(t)⟩=i |∂_t ψ_i(t)⟩ for i=1,2, …, d. An explicit form of H(t), which is state-dependent, can be given by <cit.>,H(t)=i∑_i=1^d ∂_t ψ̃_i(t)ψ̃_i(t),where|ψ̃_i(t) ⟩≡ e^i ϕ_i(t) |ψ_i(t) ⟩ and ϕ_i(t)=∫-i⟨∂_tψ_i(t)|ψ_i(t)⟩ d t. The above Hamiltonian, which can be proven to be Hermitian by taking the time derivative of both sides of 1=∑_i=1^dψ̃_i(t)ψ̃_i(t), is optimal in the sense that it has the minimum Hilbert-Schmit norm <cit.> H(t) _HS=Tr(H^2(t)). One can also take H(t)=i∑_i=1^d ∂_t ψ_i(t)ψ_i(t) if such optimization is unnecessary. After plugging in ρ̇_D and ρ_D into Eq. (<ref>), we can obtain the master equation, ρ̇=-i[H(t), ρ(t)]+∑_i=1^d-1γ̃_i(A_i, tρ A_i, t^†-1/2{A_i, t^† A_i, t, ρ}),where A_i,t=𝕌_tã_i𝕌_t^†. The master equation obtained in this way, which can describe any continuous, d-dimensional open quantum system dynamics, has only linearly, d-1 many γ̃_i terms, and can have arbitrary designated sign of γ̃_i, including the case that all γ̃_i's are nonnegative or nonpositive. Note that different from usual master equations, the master equation Eq. (<ref>) is generally state-dependent, which means that the H(t), γ̃_i and A_i,t all depend on the current state of the density matrix ρ(t). Generalization to d-dimensional case.—To show that the proof applies to arbitrary d-dimensional case, we denote a_ij = |i⟩⟨j|, a^†_ij=|j⟩⟨i|, with j<i≤ d, and ρ_D=diag(p_1, p_2, …, p_d). Then we have,Φ_a_ij[ρ_D] = p_j λ_ij(|i⟩⟨i|-|j⟩⟨j|) Φ_a^†_ij[ρ_D] = -p_i λ^'_ij(|i⟩⟨i|-|j⟩⟨j|),where we have used,a_ijρ_D=p_j a_ij , ρ_Da_ij=p_i a_ij, a^†_ijρ_D=p_i a^†_ij , ρ_Da^†_ij=p_ja^†_ij, a_ija^†_ij=|i⟩⟨i| ,a^†_ija_ij=|j⟩⟨j|, |i⟩⟨i|ρ_D=p_i |i⟩⟨i| , ρ_D |i⟩⟨i|=p_i |i⟩⟨i|.From the expressions of Φ_a_ij and Φ_a^†_ij, it is immediately clear that if we want to describe a “2-eigenvalue channel”, where the ith eigenvalue changes by some amount and jth eigenvalue changes by the negative of that amount, there always exists an Φ with which we can describe the process. In this particular case, we should either choose Φ_a_ij or Φ_a^†_ij for the description, depending on whether we want our λ to be negative or positive. The rests of the arguments follow exactly the same as illustrated earlier.Example - Jaynes-Cummings model. Here we demonstrate an example in d=2 where all γ's are made nonnegative and nonpositive, respectively, for all time. We consider the Jaynes-Cummings model under the rotating wave approximation, which describes the Rabi oscillation of a two-level atom in an cavity, H_S E=ħω_c a^† a+ħω_aσ_z/2+ħΩ/2(a σ_++a^†σ_-),where σ_±=σ_x ± i σ_y. For simplicity, we take ω_c=ω_a=ω and ħ=Ω=1. 
The model, which is conventionally considered as highly non-Markovian, can be solved exactly <cit.>, and the reduced density matrix ρ_S(t) of the atom can be found byρ_S(t)=([ρ_11(0) cos ^2(t/2) ρ_12(0) cos( t/2) e^-i ω t; ρ_21(0) cos(t/2) e^i ω t1-ρ _11(0) cos^2(t/2) ]),where we have assumed that the cavity is initially in the vacuum state. Alternatively, one can describe the above dynamics by the following master equation, ρ̇=-i[H(t), ρ] +γ_1(t)(σ̃_-ρσ̃_+-1/2{σ̃_+σ̃_-, ρ})+γ_2(t)(σ̃_+ρσ̃_--1/2{σ̃_-σ̃_+, ρ}),where H(t)=∑_i=1^2|∂_t ψ_i(t)⟩⟨ψ_i(t)|, σ̃_± = 𝕌_t σ_±𝕌^†_t, and |ψ_i(t)⟩ is the instantaneous eigenvector of ρ(t). Let ρ_D(t) which is diagonalized by 𝕌_t be given by ρ_D(t)=diag(λ_1(t), λ_2(t)). If we want both γ's to be nonnegative, we can take γ_1(t)=max (-λ̇_1/λ_1, 0) and γ_2(t)=max (λ̇_1/1-λ_1,0); if we want both γ's to be nonpositive, we can take γ_1(t)=min(-λ̇_1/λ_1, 0) and γ_2(t)=min(λ̇_1/1-λ_1,0). If initially, ρ_S(t) is already diagonalized (i.e. ρ_12(0)=ρ_21(0)=0), then the solution have a simple explicit form. In this case, 𝕌_t=𝕌^†_t=1 such that σ̃_± = σ_±, and we can take H=1/2ωσ_z. If we want both γ's to be nonnegative, we can takeγ_1(t) ={[tant/2≥0, t∈[0+2n π,(2n+1) π), n∈ℕ;0otherwise ]. γ_2(t) ={[α(t) ≥ 0, t∈[(2n+1) π,(2n+2) π), n∈ℕ;0 otherwise, ].and if we want both γ's to be nonpositive, we can takeγ_1(t) ={[tant/2≤0, t∈[(2n+1) π,(2n+2) π), n∈ℕ;0otherwise ]. γ_2(t) ={[ α(t) ≤0, t∈[0+2n π,(2n+1) π), n∈ℕ;0 otherwise, ].where α(t)=ρ_11(0)sin t/ρ_11(0)cos t+ρ_11(0)-2. Intuitively, Eq. (<ref>) can be understood as a qubit being in contact with a cold bath during the time interval t ∈[0+2 n π,(2 n+1) π), n ∈ℕ, with rate γ_1(t), and with a hot bath during the other time, with rate γ_2(t). Both processes are conventionally thought to be Markovian. Note that γ can be singular by approaching infinity. The reason is that the eigenvalue λ_1(t), which could be zero at certain moment, appears in the denominator. We stress that such a singular value, while annoying, will not hamper the description of the dynamics. In fact, such singularity has been reported in the literature before <cit.>.Conclusions.—Since for any given quantum master equation, we can always rewrite it in the Lindblad form given by Eq. (<ref>) and choose whatever the sign of γ_i we want, the current notion of Markovianity and non-Markovianity breaks down, at least in the sense of the signs of γ_i's. Moreover, the current interpretation <cit.> which associates non-Markovianity with the information backflow from the environment to the system also becomes questionable. As such, a future reexamination of the those notions becomes necessary and essential. Acknowledgement.—We are grateful to Shengshi Pang for valuable discussions. This work was supported by the Army Research Office (ARO) under Grant No. W911NF-22-1-0258.ieeetr | http://arxiv.org/abs/2310.17881v1 | {
"authors": [
"Le Hu",
"Andrew N. Jordan"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231027040650",
"title": "Signs of the rates in the Lindblad master equations can always be arbitrarily determined"
} |
[email protected] International Center for Theoretical Physics (ICTP), Trieste, 34151, Italy Istituto di Struttura della MateriaCNR (ISM-CNR), Trieste 34149, Italy Condensed Matter Physics and Materials Science Department, Brookhaven National Laboratory, Upton, New York 11973, USA Istituto di Struttura della MateriaCNR (ISM-CNR), Trieste 34149, ItalySurface Physics and Material Science Division, Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700064, [email protected] Department of Physics and Astronomy, West Virginia University, Morgantown, WV 26506, United States.Istituto di Struttura della MateriaCNR (ISM-CNR), Trieste 34149, ItalyThe Zhang-Rice (ZR) state is a strongly hybridized bound state formed by the transition metal and oxygen atoms. The spin-fluctuations within the ZR state are known to play an important role in high-T_c superconductivity in cuprates. Here, we employ a combination of angle-resolved photoemission spectroscopy (ARPES), X-ray photoemission spectroscopy (XPS), and ab initio embedded dynamical mean-field theory (eDMFT) to investigate the influence of magnetic ordering on the spectral characteristics of the valence band and Mn 2p core-level in MnO (001) ultrathin films. Our results demonstrate that a complex spin-selective evolution of Mn 3d-O 2p hybridization develops due to the long-range antiferromagnetic (AFM) ordering. This hybridization significantly alters the spectral shape and weight of the ZR state. Specifically, in the AFM phase, we observed the sharpening of the ZR state and band folding with the periodicity of the AFM unit cell of MnO(001). We also demonstrated a strong connection between the spectral evolution of the ZR state and the non-local screening channels of the photoexcited core holes. Further, our detailed temperature-dependent study reveals the presence of short-range antiferromagnetic correlations that exist at much higher temperatures than T_N. Such comprehensive studies showing the evolution of the ZR state across the magnetic transitions and its implication to the core-hole screening have never been reported in any 3d binary transition metal oxides.Spin Selective Evolution of Zhang-Rice State in Binary Transition Metal Oxide Carlo Carbone January 14, 2024 =============================================================================The 3d binary transition metal oxides (TMOs) have garnered significant research interest due to their unusual insulating and magnetic properties <cit.>, along with their potential for various technological applications <cit.>. Having unfilled d orbitals, these oxides should be a metal based on the conventional single-particle band theory; however, they are well-known antiferromagnetic insulators with wide charge gaps. Later, it was found that the strong intra-atomic Coulomb interactions between d electrons are responsible for opening up this insulating charge gap and they can be described by Mott-Hubbard theory <cit.>. These TMOs have broad similarities with the high-T_c cuprate superconductors (SC): in both cases, the transition metal atoms are octahedrally coordinated by oxygen atoms, and the parent compound of cuprates and binary TMOs exhibits an antiferromagnetic insulating ground state. Besides, these TMOs host intriguing many-body electronic states such as Zhang-Rice bound state (ZRBS) <cit.>, analogous to the ZR singlet observed in high-T_c cuprate superconductors (SC) <cit.>. 
According to the originally proposed model by Zhang and Rice <cit.>, in hole-doped cuprates, the hybridization strongly binds a hole on each square of oxygen atoms to the central Cu^2+ ion in the CuO_2 plane, and forms a singlet which is known as ZR singlet. It is widely accepted that the ZR singlet in cuprate plays an important role in superconductivity <cit.>. Owing to the various similarities between the TMOs and cuprates, understanding the ZR physics in relatively simpler binary TMO systems could help in better understanding the more complex physics of high-T_c superconducting cuprates.However, the role of magnetic interactions on the electronic structure and ZRBS of these binary oxides remains controversial due to the conflicting theoretical and experimental results <cit.>. From the experimental side, it is partially due to the difficulties in obtaining high-quality bulk single crystals and the technical challenges of performing low-temperature photoemission measurements due to their insulating nature, which causes potential charging issues. Recent studies have shown that the charging problem can be overcome by growing ultrathin films of these materials on metallic substrates <cit.>. However, the high-resolution ARPES studies across the magnetic transition are still spare, with conflicting results <cit.>. For example, in the case of CoO, Shen <cit.> reported that the ZRBS remain unchanged across the magnetic transition, which contradicts the recent ARPES studies by Barman <cit.>, where strong sharpening of ZRBS is observed in the AFM phase, in line with theoretical predictions <cit.>. Similar conflicting results are also found for MnO <cit.>. In the case of NiO, most of the ARPES studies were only performed in the AFM phase <cit.>. Accessing the PM phase requires annealing the sample above T_N = 525 K at which sample decomposition occurs which limits the reliable measurements across T_N <cit.>.Theoretical modeling of the electronic structure of these compounds is further challenging due to simultaneously describing both localized and itinerant pictures of electrons. Most of the first-principles approach based on density functional theory (DFT) and various beyond-DFT methods <cit.> such as DFT+U, hybrid functionals, and GW-approximation face a great challenge to properly describe the fluctuating moments in time and fail to open up an insulating gap in the paramagnetic phase of the TMOs. The exception is dynamical mean field theory (DMFT) in combination with DFT, where one can obtain proper PM and AFM phases without broken-symmetry configuration a priori, has become a gold standard for investigating correlated materials <cit.>. Interestingly, recent DFT+DMFT studies predicted a strong connection between the evolution of ZRBS with magnetic ordering and non-local screening channels of photo-excited core holes <cit.>. In contrast, the measured core-level photoemission spectra of CoO do not exhibit any measurable changes across T_N<cit.>. All these conflicting results highlight the need for further combined experimental and theoretical studies for an improved understanding of the electronic structure of these materials.To address these issues, we chose MnO(001) thin film as a prototype system and performed a detailed electronic structure study across T_N using ARPES and XPS techniques and complemented with our embedded-DMFT (eDMFT) calculations. 
MnO is chosen as Mn^2+ has the highest spin state of 5/2 among the transition-metal monoxides and is therefore expected to exhibit significant changes in electronic structure across its Néel temperature (T_N ∼ 120 K) due to strong fluctuating moments. ARPES results show band folding and sharpening of ZRBS across T_N. Theoretical results demonstrate that the sharpening of ZRBS in the AFM state is directly linked to the strongly enhanced hybridization between the minority spin channel of Mn e_g and O 2p. We have also shown that the change in the hybridization of ZRBS across T_N is strongly connected to the non-local screening channel of Mn 2p core-hole. Details about the experimental and theoretical methods can be found in the Supplemental Material (SM). Figure <ref>(a) shows the schematic arrangements of Mn^2+ ions and their spins in the magnetic unit cell of MnO. The magnetic moments of Mn^2+ ions are aligned parallel within the (111) planes but antiparallel between adjacent (111) planes <cit.>. This type of spin orientation naturally produces in-plane AFM order on the surface of MnO(001) films with p(2×1) translation symmetry w.r.t the chemical unit cell of the MnO(001), illustrated in Fig. <ref>(b). The Bulk Brillouin zone (BZ) of MnO and its [001] surface projection are shown in Fig. <ref>(c). The symmetry points without a bar are used to represent the bulk phase, while those with a bar are used to represent the surface projection. From Fig. <ref>(c) it can be seen that the (001) surface projection of Γ-X direction is the same as the Γ̅-M̅ direction; they are essentially the same at k_z= 0. Further, the schematics of the chemical (green) and magnetic (red and black) surface Brillouin zones (SBZs) are shown in Fig. <ref>(d). Two orthogonal magnetic SBZ are drawn as our MnO(001) films exhibit twin magnetic domains due to the 4-fold rotational symmetry of the Ag(001) substrate <cit.>. These AFM domains were resolved in our previous experiments on MnO(001) <cit.> and for other binary 3d TMOs <cit.>. First, we will discuss the implication of AFM ordering to the electronic structure. In the AFM phase, band folding is expected along the Γ̅-X̅ direction due to the doubling of unit-cell dimension compared to the PM phase. However, here, the situation could be more complex due to the presence of two orthogonal magnetic domains. This is because folding is expected along the Γ̅-X̅ direction for one domain [2×1 (black)], whereas no folding is expected for another domain [1×2 (red)] along the same direction [Fig. <ref>(d)]. Thus, in the AFM phase, we expect a superposition of folded and unfolded bands along Γ̅-X̅. The ARPES spectra along this path are shown for the PM and AFM phases in Fig. <ref>(e) and <ref>(f), respectively. Figures <ref>(g) and (h) show their respective second derivatives. Insets show the zoomed view of the topmost valence band. The electronic states in the upper part of the valence band (between -1.5 to -5 eV) show less dispersion compared to the lower part (between -5 to -9 eV) as the former region is dominated by strongly correlated Mn 3d, while the later by less-correlated O 2p states <cit.>. According to our previous eDMFT computations, within the Mn 3d dominated region, the topmost part is mostly due to the e_g character, while the lower part is dominated by t_2g character <cit.>. However, significant Mn 3d-O 2p hybridization has been observed throughout the valence band <cit.>. 
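The folding discussed above can be illustrated with a one-dimensional toy model, in which a cosine band is coupled to its copy shifted by the AFM ordering wavevector once the real-space period doubles. The sketch below is not a model of MnO; the hopping and the staggered potential Δ standing in for the exchange splitting are arbitrary, and it only shows why spectral features at X̅ reappear near M̅ once the 2×1 periodicity sets in.

```python
import numpy as np

# 1D toy illustration of AFM band folding: a cosine band coupled to its copy
# shifted by the ordering wavevector pi (period doubling), with an arbitrary
# staggered potential Delta standing in for the exchange splitting.  Not MnO.
t, Delta = 1.0, 0.3
ks = np.linspace(-np.pi / 2, np.pi / 2, 201)   # reduced (magnetic) zone, a = 1

def folded(k):
    e1 = -2.0 * t * np.cos(k)
    e2 = -2.0 * t * np.cos(k + np.pi)
    return np.linalg.eigvalsh(np.array([[e1, Delta], [Delta, e2]]))

bands = np.array([folded(k) for k in ks])
print("gap opened at the new zone boundary k = pi/2:", round(bands[-1, 1] - bands[-1, 0], 3))
```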
The topmost valence band is a hybridized bound state formed by the Mn e_g and O 2p orbitals, which is called ZRBS. From our ARPES data, it can be seen that the dispersion of this state changes between the PM to AFM phases and it shows folding in the AFM phase, with a periodicity two times higher than that of the PM phase [inset of Figs. <ref>(g) and (h)]. The effect of band folding can also be seen from the constant energy cuts (at E=-2.1 eV) as shown in Figs. <ref>(i) and (j). In the PM phase, high-intense spots are observed near the X̅ whereas, in the case of the AFM phase, additional high-intense spots appear close to the M̅. It is exactly what is expected from the band folding scenario as M̅ (for the 1×1 cell) becomes X̅ (for 2×1 and 1×2 cells), thus the spectral features near X̅ should be reflected around M̅. The residual intensity observed around the X̅ in the AFM phase can be attributed to the simultaneous presence of both folded and unfolded bands as we discussed earlier. Between the PM and AFM phases, some changes in band dispersion are also seen for the second valence band (around -4 eV), however, no measurable change has been observed for the states with dominant O 2p characters. Similar behavior was observed for the antiferromagnetic MnO_2 chains <cit.>. This could be due to the fact that the magnetic moments are sitting on the Mn^2+ sites not on O^2-. Thus the Mn 3d electrons could dominantly feel the interactions with the spin compared to the O^2- electrons.Across the magnetic transition, a distinct change in the band dispersion of the ZRBS is also observed along the Γ̅-M̅ direction, illustrated in Figs. <ref>(a) and (b). Especially, in the AFM phase, the two branches of the ZRBS are observed at around the midway between Γ̅-M̅ [Fig. <ref>(b)], and their dispersions are well reproduced by our eDMFT calculations [Fig. <ref>(d)]. However, in the PM phase, an apparent discrepancy in the dispersion of ZRBS can be seen between the theory and experiment around Γ̅, where an extra band (enclosed by a dotted ellipse) is seen around -2 eV in theory [Fig. <ref>(c)], which is absent in the experiment. Further, eDMFT computations reveal that this extra band appears due to the folding of the electronic state because of the large dimension of the PM cell (AFM cell with random spins) considered in our calculations; this band disappears if we perform eDMFT computation in a PM cell (see also Fig. S1 in the SM for comparison). To exactly probe the electronic structure along the X-Γ-X path, we have performed photon energy dependent (k_z dependence) ARPES measurements in a wide photon energy range that covers two consecutive Γ points in the bulk BZ, as shown in Figs. <ref>(e) and (f). By comparing ARPES data with the theoretical band dispersion [Fig. <ref>(c)], the symmetry point Γ is identified, where all the bands apparently touch each other, similar to what is also observed for NiO and CoO <cit.>. From the symmetry of the band dispersion, we estimated the value of the crystal potential (V_0) to be ∼ 6.0 eV. In Figs. <ref>(e) and (f), the ARPES spectra at around Γ X/2 (41 eV≤ hv ≤49 eV) show strong suppression of photoelectron intensity due to the antiresonance <cit.>. Further, we noticed that the spectral intensity of the ZRBS is strongly suppressed around the Γ point but enhanced in the next Γ (denoted by Γ_1), which points to the matrix element effect as its origin. 
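The assignment of the bulk Γ points from photon-energy-dependent data relies on the standard free-electron final-state approximation for the out-of-plane momentum. The sketch below evaluates k_z at normal emission for a few illustrative photon energies using the V_0 ≈ 6 eV estimated above; the work function, binding energy and MnO lattice constant entered here are assumed values for illustration only.

```python
import numpy as np

# Free-electron final-state estimate at normal emission:
#   k_z [1/Angstrom] = 0.5123 * sqrt(E_kin + V_0),  energies in eV.
# Work function, binding energy and lattice constant are assumed values.
def kz(hv, V0=6.0, work_function=4.5, binding_energy=2.0):
    e_kin = hv - work_function - binding_energy
    return 0.5123 * np.sqrt(e_kin + V0)

a = 4.44                    # approximate MnO cubic lattice constant (Angstrom)
dG = 4.0 * np.pi / a        # for the fcc (rock-salt) lattice, consecutive bulk
                            # Gamma points along [001] are separated by 4*pi/a
for hv in (30.0, 50.0, 70.0, 90.0):   # illustrative photon energies
    k = kz(hv)
    print(f"hv = {hv:5.1f} eV -> k_z = {k:.2f} 1/A ({k / dG:.2f} Gamma-Gamma spacings)")
```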
Overall, the electronic states in the AFM phase are found sharper and well-defined compared to the PM phase, in agreement with our theoretical results [Fig. <ref>(d)]. The integrated energy distribution curves (EDCs) within the whole momentum range (Γ-X-Γ_1) also show the same behavior [Fig. <ref>(g)]. Besides, from Fig. <ref>(g), a clear redistribution of spectral weight can be observed in a wide energy range. Spectral weight renormalization is often observed in various strongly correlated magnetic materials due to the intricate change in hybridization across the transition <cit.>.Further, to understand the nature of electronic reconstructions and their connection to magnetism, we conducted extensive temperature-dependent ARPES studies over a broad temperature range. Figure <ref>(h) shows the temperature evolution of ARPES spectra at Γ̅ = 0. Figure <ref>(i) represents the EDCs integrated over a wider momentum range (Γ̅± 0.5 Å^-1) for various temperatures. It can be noticed that the dispersion and the spectral weight of bands gradually change in a wide temperature range, without a sharp jump at T_N. However, the changes are more between 100 to 170 K ([Fig. <ref>(h) and (i)]. Spectral weight change at a much higher temperature than T_N, suggests the presence of short-range AFM order, which is in line with the neutron diffraction and spin-polarized photoelectron diffraction SPPD experiments reported on bulk MnO single crystals <cit.>. We note that the ARPES spectra also show relatively less change between the AFM and PM phases compared to our eDMFT calculations. This is possibly due to the presence of short-range AFM ordering even in the PM phase and finite k_z broadening in ARPES. To get insight into the observed electronic structure reconstructions, we have calculated the partial density of states (DOS), hybridization function, and orbital projected electronic band dispersions in the PM and AFM phase using eDMFT and present them in Fig. <ref>. From Fig. <ref>(a), we notice that the first valence peak, which is the ZRBS, is a hybridized state between minority spin Mn-d and O-2p. We also noticed that the ZRBS gets sharpened in the AFM phase compared to the PM phase, consistent with our ARPES results. To better understand this, we show eDMFT computed hybridization functions in Fig. <ref>(b) for both PM and AFM phases for Mn-e_g electrons. It's evident from Fig. <ref>(b) that the peak (in red) in the hybridization function sharpening for the ZRBS in the AFM phase. Interestingly, the sharpening is observed only for the minority spin-channel as the contribution for the majority channel is found to be relatively much smaller for the first peak and grows only around -4 eV for the majority spin channel (in blue), where the PM phase also shows a peak (black). Next, we plot the orbital and k-resolved eDMFT spectral functions for the PM phase in Fig. <ref>(c). For the AFM phase, we show this for both spin minority or down (Fig. <ref>d) and the majority or up (Fig. <ref>e) components. In the PM phase, we notice that the first valence peak consists of the Mn-d and O-2p. For the AFM phase, the orbital contribution in the spectral functions for majority and minority channels are distinctly different. The Mn-e_g electrons mostly contribute to the spectral function for the minority channel (green), while for the majority channel, it is mostly due to O-2p (red), which is consistent with the PDOS. Now we turn our discussions to the core-level spectra. 
Before discussing our results, it is important to note that theoretical studies using the DMFT approach on 3d transition-metal oxides have shown that there is a strong connection between the ZRBS hybridization and the multiplet structure of the 2p core levels <cit.>. It has been shown that, in the AFM phase of NiO, the Zhang-Rice peak is sharpened compared to the PM phase, which leads to the dominance of nonlocal core-hole screening over local screening. Furthermore, the relative strength of these screening channels is determined by both the spin arrangement of the crystal and the hybridization function <cit.>. To understand how these effects act on the Mn 2p core levels of MnO, a detailed temperature-dependent study has been conducted, as shown in Fig. <ref>. From Fig. <ref>(a) it can be seen that both the Mn 2p_1/2 and Mn 2p_3/2 peaks show multiplet structures, which are better resolved for the latter and at low temperatures. The features denoted as A, B, and S in the XPS spectra predominantly originate from the cd^6L, cd^6Z and cd^5 photoemission final states, respectively, where c, L, and Z represent a hole in the Mn 2p, O 2p ligand and ZRBS, respectively <cit.>. This means that the peak `A' is associated with the local charge-transfer screening of the core hole within the core-excited MnO_6 octahedron, while the peak `B' is due to the nonlocal screening (NLS) accompanied by the ZRBS <cit.>. Upon lowering the temperature, we notice a gradual enhancement of the intensity of the B peak relative to the A peak, analogous to the spectral evolution of the ZRBS (Fig. <ref>(a,d)). This suggests that nonlocal screening dominates over local screening in the AFM phase compared to the PM phase. Thus, our results clearly demonstrate that there is a strong coupling between the hybridization strength of the ZRBS and the screening channels of the core holes, in agreement with the theoretical predictions <cit.>. A strong dependence of the Mn 2p line shape on the magnetic state has also been observed for different manganites <cit.>. Another important observation is that, within the paramagnetic phase, the Mn 2p_3/2 peak intensity drops significantly with increasing temperature from 340 to 500 K, whereas the intensity of the Mn 2p_1/2 peak is almost constant (Fig. <ref>(b)). As the intensity of these spin-orbit-split states is determined by the degeneracy factor (2J+1, where J is the total angular momentum), the relative intensity change could indicate a breaking of this degeneracy. It should be noted that, in the case of transition-metal and rare-earth elements with partially occupied d and f levels, the effective spin entering J is not solely the spin of the remaining unpaired electron generated during photoemission, as it often interacts with the spins of the unpaired valence electrons. Thus, the decrease of the Mn 2p_3/2 intensity at high temperatures could be due to the decrease of the interaction strength between the spin-up component of Mn 2p and the Mn 3d spin as the short-range magnetic correlation collapses at ∼530 K <cit.>. It is important to note that a strong dependence of the Mn 3s peak intensity and line shape was previously observed by Hermsmeier et al. and was explained by the change of the interaction strength between the 3s and 3d spins <cit.>. However, further studies such as spin-polarized XPS are needed to verify the exact origin of the Mn 2p_3/2 intensity change, which is outside the scope of our present study.
In summary, we have reported detailed electronic-structure results for MnO(001) thin films across T_N, obtained by ARPES and XPS measurements and complemented by first-principles eDMFT computations. Despite the strongly localized character of the valence bands of 3d binary transition-metal oxides, we clearly resolve band folding and a strong spectral evolution due to the AFM-II spin ordering. The ZRBS intensity is enhanced and the overall spectra sharpen in the AFM phase due to the spin-dependent change in the Mn 3d-O 2p hybridization. We explicitly show that the strength of the hybridization grows significantly in the AFM phase only in the minority-spin channel, which is subject to stronger spin fluctuations. We further show that the enhancement of this hybridization strength in the ZRBS has a significant effect on the nonlocal screening channel of the 2p core hole, in agreement with the theoretical predictions. By performing extensive temperature-dependent ARPES and XPS studies, we find that the spectral evolution persists to temperatures well above T_N, which suggests the presence of short-range AFM correlations even in the PM phase. Finally, we believe that the robust spin-dependent changes observed here in the valence-band and core-level electronic structure should also be observable in other, similar metal oxides. A.K.K. acknowledges receipt of a fellowship from the ICTP-TRIL Programme, Trieste, Italy. During the preparation of the manuscript, A.K.K. received funding from the US Department of Energy, Office of Basic Energy Sciences, contract no. DE-SC0012704. S.M. acknowledges support from the Air Force Office of Scientific Research, Department of Defense, under award number FA9550-23-1-0498 of the DEPSCoR program, and benefited from the Frontera supercomputer at the Texas Advanced Computing Center (TACC) at The University of Texas at Austin, which is supported by National Science Foundation grant number OAC-1818253.
References (49)
[shen1990photoemission] Z.-X. Shen et al., Phys. Rev. B 42, 1817 (1990).
[shen1991electronic] Z.-X. Shen et al., Phys. Rev. B 44, 3604 (1991).
[chen2017lattice] Y. Chen et al., Phys. Rev. B 95, 245301 (2017).
[mandal2019influence] S. Mandal, K. Haule, K. M. Rabe, and D. Vanderbilt, Phys. Rev. B 100, 245109 (2019).
[SM-pnas] Z. Zhang et al., Proc. Natl. Acad. Sci. U.S.A. 118, e2017239118 (2021).
[ohta2003fabrication] H. Ohta et al., Appl. Phys. Lett. 83, 1029 (2003).
[jung2012stability] J. Jung, D. L. Kim, S. H. Oh, and H. J. Kim, Sol. Energy Mater. Sol. Cells 102, 103 (2012).
[hubbard1964electron] J. Hubbard, Proc. R. Soc. London A 281, 401 (1964).
[rodl2009quasiparticle] C. Rödl, F. Fuchs, J. Furthmüller, and F. Bechstedt, Phys. Rev. B 79, 235114 (2009).
[zhang1988effective] F. Zhang and T. Rice, Phys. Rev. B 37, 3759 (1988).
[kunevs2007nio] J. Kuneš, V. Anisimov, S. Skornyakov, A. Lukoyanov, and D. Vollhardt, Phys. Rev. Lett. 99, 156404 (2007).
[bala1994zhang] J. Bała, A. M. Oleś, and J. Zaanen, Phys. Rev. Lett. 72, 2600 (1994).
[damascelli2003angle] A. Damascelli, Z. Hussain, and Z.-X. Shen, Rev. Mod. Phys. 75, 473 (2003).
[monney2016probing] C. Monney et al., Phys. Rev. B 94, 165118 (2016).
[barman] S. Barman, A. K. Kundu, and K. S. R. Menon, J. Magn. Magn. Mater. 515, 167292 (2020).
[hermsmeier1990spin] B. Hermsmeier et al., Phys. Rev. B 42, 11895 (1990).
[kundu2017effects] A. K. Kundu, S. Barman, and K. S. R. Menon, Phys. Rev. B 96, 195116 (2017).
[barman2018growth] S. Barman and K. S. R. Menon, J. Cryst. Growth 487, 28 (2018).
[kundu2016growth] A. K. Kundu and K. S. R. Menon, J. Cryst. Growth 446, 85 (2016).
[das2015revisit] J. Das and K. S. R. Menon, J. Electron Spectrosc. Relat. Phenom. 203, 71 (2015).
[terakura1984band] K. Terakura, T. Oguchi, A. Williams, and J. Kübler, Phys. Rev. B 30, 4734 (1984).
[shen1990aspects] Z.-X. Shen et al., Phys. Rev. Lett. 64, 2442 (1990).
[mandal2019systematic] S. Mandal, K. Haule, K. M. Rabe, and D. Vanderbilt, npj Comput. Mater. 5, 1 (2019).
[haule3] Z. P. Yin, K. Haule, and G. Kotliar, Nat. Mater. 10, 932 (2011).
[Haule_prb10] K. Haule, C.-H. Yee, and K. Kim, Phys. Rev. B 81, 195107 (2010).
[DMFT_review] G. Kotliar et al., Rev. Mod. Phys. 78, 865 (2006).
[Kunes:2008bh] J. Kuneš et al., Nat. Mater. 7, 198 (2008).
[FeSe_monolayer] S. Mandal, P. Zhang, S. Ismail-Beigi, and K. Haule, Phys. Rev. Lett. 119, 067004 (2017).
[Mandal:2014] S. Mandal, R. E. Cohen, and K. Haule, Phys. Rev. B 90, 060501 (2014).
[Mandal2:2014] S. Mandal, R. E. Cohen, and K. Haule, Phys. Rev. B 89, 220502 (2014).
[Mandal:2018] S. Mandal, R. E. Cohen, and K. Haule, Phys. Rev. B 98, 075155 (2018).
[hariki2013dynamical] A. Hariki, Y. Ichinozuka, and T. Uozumi, J. Phys. Soc. Jpn. 82, 043710 (2013).
[hariki2017lda] A. Hariki, T. Uozumi, and J. Kuneš, Phys. Rev. B 96, 045111 (2017).
[blech1966long] I. Blech and B. Averbach, Phys. Rev. 142, 287 (1966).
[kundu2018evolution] A. K. Kundu, S. Barman, and K. S. R. Menon, J. Magn. Magn. Mater. 466, 186 (2018).
[menon2011surface] K. S. R. Menon et al., Phys. Rev. B 84, 132402 (2011).
[das2018evolution] J. Das and K. S. R. Menon, J. Magn. Magn. Mater. 449, 415 (2018).
[lad1988electronic] R. J. Lad and V. E. Henrich, Phys. Rev. B 38, 10860 (1988).
[nekrasov2013consistent] I. Nekrasov, N. Pavlov, and M. Sadovskii, J. Exp. Theor. Phys. 116, 620 (2013).
[eder2008correlated] R. Eder, Phys. Rev. B 78, 115111 (2008).
[trimarchi2018polymorphous] G. Trimarchi, Z. Wang, and A. Zunger, Phys. Rev. B 97, 035107 (2018).
[schmitt2019indirect] M. Schmitt et al., Nat. Commun. 10, 2610 (2019).
[kundu2021role] A. K. Kundu, S. Barman, and K. S. R. Menon, ACS Appl. Mater. Interfaces 13, 20779 (2021).
[han2023interplay] S. Y. Han et al., arXiv:2307.01397 (2023).
[Topo-SM1] O. E. Dagdeviren et al., Phys. Rev. Mater. 2, 114205 (2018).
[alan] A. Renninger, S. Moss, and B. Averbach, Phys. Rev. 147, 418 (1966).
[herm] B. Hermsmeier, J. Osterwalder, D. Friedman, and C. Fadley, Phys. Rev. Lett. 62, 478 (1989).
[horiba2004nature] K. Horiba et al., Phys. Rev. Lett. 93, 236401 (2004).
[van2006competition] M. van Veenendaal, Phys. Rev. B 74, 085118 (2006).
| http://arxiv.org/abs/2310.17833v1 | {
"authors": [
"Asish K. Kundu",
"Polina M. Sheverdyaeva",
"Paolo Moras",
"Krishnakumar S. R. Menon",
"Subhasish Mandal",
"Carlo Carbone"
],
"categories": [
"cond-mat.str-el"
],
"primary_category": "cond-mat.str-el",
"published": "20231027010513",
"title": "Spin Selective Evolution of Zhang-Rice State in Binary Transition Metal Oxide"
} |
§ ABSTRACT Using data from 35 Participatory Budgeting instances in Amsterdam, we empirically compare two different Participatory Budgeting rules: the greedy cost welfare rule and the Method of Equal Shares. We quantify how proportional, equal and fair the rules are and conclude that, for a small price in total voter satisfaction, the Method of Equal Shares performs better on all notions of fairness studied. We further provide a popular and a visual explanation of the Method of Equal Shares. § INTRODUCTION Participatory Budgeting (PB) is the practice of democratically distributing a budget over proposed projects. <cit.> published a comprehensive survey of the literature studying and designing different incarnations of PB. Common to most PB processes are the collection of projects proposed by citizens, a voting phase for all citizens, and the consequent allocation of funds to a subset of all proposed projects. However, the exact method of collecting projects, organising the voting phase and mapping the votes to a budget allocation varies. In this project, we will focus on the latter: how does one map votes to a `fair' budget allocation? In particular, we will empirically study the performance of two such Participatory Budgeting rules: the greedy cost welfare rule and the Method of Equal Shares. The data was collected from a number of PB instances organised by the municipality of Amsterdam. Since 2019, the municipality organises yearly PB instances in different districts under the title “Buurtbudget” (“Neighbourhood budget”) or “Amsterdam begroot” (“Amsterdam budgets”). In the actual PB processes, a version of the greedy cost welfare rule was used to allocate the available budget. The aim of this project is to compare the budget allocation under the greedy cost welfare rule with the allocation that the Method of Equal Shares would have yielded. The paper is organised as follows. In Section <ref>, we briefly introduce the relevant notions from the field of PB. In Section <ref>, we present the results of the analysis. We draw conclusions and formulate recommendations for authorities organising a PB instance in Section <ref>. In the appendix, we provide a short explanation of the Method of Equal Shares for the general public. § PARTICIPATORY BUDGETING In this section, we provide definitions of different notions in PB for reference and notation, but we refer to <cit.> for a more extensive introduction to PB. In the rest of the text, by PB we always refer to indivisible participatory budgeting, i.e., each project is either fully funded or it receives no funding; we cannot partially fund the cost of a project. §.§ Basic Definitions Consider a fixed set of voters N = {1,…, n}. A PB instance I = (P, c, b) consists of a finite set of projects P = {p_1,…,p_m}, a cost function c: P →ℝ_>0 and a budget limit b∈ℝ_>0. An approval ballot for voter i∈ N is a map A_i: P →{0,1} indicating which projects voter i approves of (where i approves of p∈ P if A_i(p) = 1). A profile (of approval ballots) A = (A_1,…,A_n) is a tuple of (approval) ballots A_i, one for each voter i∈ N. Given a PB instance I = (P, c, b), a PB rule is a function R that maps possible profiles to subsets of P. We say that a PB rule R selects a project p∈ P under profile A if p∈ R(A).
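To fix notation for the code sketches that follow, one minimal way to encode an instance and a profile in Python is shown below. This is purely illustrative: the variable names and toy numbers are our own and are not taken from the pabulib format or from the authors' repository.

```python
from typing import Dict, List, Set

# Toy instance: four projects P = {p1, ..., p4}, cost function c and budget limit b.
projects: List[str] = ["p1", "p2", "p3", "p4"]
cost: Dict[str, float] = {"p1": 95.0, "p2": 30.0, "p3": 30.0, "p4": 30.0}
budget: float = 100.0

# Approval profile: voter i approves project p iff p is in approves[i],
# i.e. A_i(p) = 1 exactly when p is a member of the voter's set.
approves: Dict[int, Set[str]] = {
    1: {"p1", "p2"},
    2: {"p1", "p2"},
    3: {"p1", "p3"},
    4: {"p3", "p4"},
}
```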
Perhaps the simplest rule to decide on a budget allocation, given some profile of approval ballots, is to rank the projects by the number of approvers and fund projects starting from the top of the list (only skipping a project if its cost exceeds the remaining budget). In fact, this procedure yields a decision rule that approximates maximising the total welfare of all voters, if the welfare of a voter is proportional to the total cost of selected projects that she approves of <cit.>. Therefore, the rule is referred to as the greedy cost welfare rule GreedCost. The greedy cost welfare rule GreedCost is the PB rule generated by the following process. Given a set of voters , a PB instance I = (, c, b) and a profile A, assign to each project p∈ the score σ(p):= ∑_i∈ A_i(p). Construct a subset π ofas follows. Initially, π = ∅. Iteratively, consider the project p∈ with the highest score σ(p) that has not yet been considered (if multiple such projects exist, apply a given tie-breaking rule). If c(π∪{p})≤ b, add p to π. Once all projects have been considered, return π. Although the assumption that the welfare of individual voters is proportional to the total cost of selected approved projects is certainly debatable, the choice for GreedCost can (under this assumption) be justified by arguing that it aims at maximising total (and therefore average) welfare: if one project generates more welfare than another, GreedCost prefers that project. However, maximising total welfare does not take into account how `equal' of `fair' the budget allocation is, as the following example (adapted from <cit.>) illustrates. Imagine a town consisting of four districts: North, East, South and West. All districts have roughly equal populations: North has 10.000 citizens, East has 9.900, South has 9.800 and West has 9.700 citizens. A budget of €10.000 is available for citizens' proposed projects. In every neighbourhood, many projects are submitted; in fact, the total overall budget is exceeded by each individual neighbourhood alone. Consequently, a voting round is organised.It is not unnatural to assume that Northerners tend to vote mostly for projects in North, Easterners for projects in East, etc. Suppose that indeed, all Northerners vote for all and only all projects in North, all Easterners vote for those projects in East, etc. That means that all projects in North receive 10.000 votes, all projects in East receive 9.900 votes, all in South 9.800 votes and all in West 9.700 votes. The rule GreedCost then selects a number of projects in North, after which it cannot fund any projects in East, South or West. We see that in the above example, the entire budget allocation is decided by roughly a quarter of the voters, i.e., the budget allocation is in some sense `disproportional'. Multiple alternative PB rules have been proposed to achieve different notions of proportionality <cit.>. One such proposal by <cit.> is the Method of Equal Shares, which we define in the following subsection. §.§ The Method of Equal Shares Although the Method of Equal Shares is defined for any satisfaction function, we assume again – for simplicity and consistency with the above – that a voter's welfare is proportional to the total cost of all selected projects that she approves of.The intuition behind the Method of Equal Shares is the following. The total budget is equally divided among all voters (virtually) and we simulate the voters forming coalitions to `buy' a project that the whole coalition approves of, as follows. 
If there is any project that costs more than the budget share owned by all its approvers together, the project is disregarded. For all other projects, we find the project for which the cost can be `most equally distributed' over its approvers. This project is selected and the distributed cost is subtracted from each approver's virtual budget. We repeat this process until all projects are either selected or disregarded.The term `most equally distributed' needs some specification. How equal a distribution is, in this case, is expressed by a parameter α∈_≥ 0 representing the ratio of the cost paid by and the welfare gained by the largest contributor to `buying' the project. That is, if a project has cost c and has k approvers, we aim to let each approver contribute c/k to buying the project. If some approvers cannot afford this contribution, they contribute all of their remaining budget and the other approvers must contribute more than c/k, say γ, to compensate. The parameter α is then defined as γ/c, i.e., the ratio between the cost paid by the largest contributor (namely γ) and the welfare gained by the largest contributor if the project is selected (namely c). The lower α, the more `equal' our distribution is.Formally, the Method of Equal Shares is defined as follows. The Method of Equal Shares mes (for cost satisfaction) is the PB rule generated by the following process.* Given a set of voters = {1,…,n}, a PB instance I = (, c, b) and a profile A, define for each agent i∈ a virtual budget b_i. Initially, set each b_i to b/n. Construct a subset π ofas follows. Initially, π = ∅. * For each project p∈, let the set of approvers p⊆ be the set of agents i∈ such that A_i(p) = 1. If the project is not affordable by its approvers, i.e., ∑_i∈pb_i < c(p), continue to the next project. Otherwise, initialise for each agent i∈p her contribution to p as γ^p_i = c(p)/|p| (i.e., all approvers contribute equally to the cost of p). * We call agents i∈ with b_i > γ^p_i rich agents. All other agents are poor agents. Compute the sum of the poor agents' budgets s = ∑_{i∈| b_i ≤γ^p_i}b_i (this is the maximal contribution all poor agents can make together). For all poor agents i, update γ^p_i to b_i. For all rich agents i, update γ^p_i to c(p) - s/|{j∈| b_j > γ^p_j}| (this is an equal distribution over the rich agents of the part of the project cost that cannot be covered by the poor agents). * Repeat process (a) until b_i≥γ^p_i for all i∈p. The affordability of p is α^p := max_i∈pγ^p_i/c(p), i.e., the affordability is the ratio between the maximal contribution made by an agent and the cost of the project.* Select the project p∈ with minimal affordability α^p. Add p to π and update b_i to b_i - γ^p_i for each agent i∈p (i.e., the approvers of p collectively buy p and distribute the cost as equally as possible). * Repeat processes 2. and 3. until no project is affordable anymore. Return π.Note that mes, by design, respects at least some notion of proportionality: if a voter's virtual budget runs out (or gets smaller), she cannot contribute to projects anymore and therefore has no (or less) power to select projects she approves of. Therefore, a coalition of voters can never exert more decision power than their `fair' proportion of the budget. More precisely, mes satisfies the axiom EJR up to one project (EJR-1), which is (a relaxation of) a formalisation of proportionality <cit.>. §.§ Completion of PB Rules The Method of Equal Shares has a drawback for most practical applications: it is not complete. 
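Before turning to completeness, both rules defined so far admit compact implementations. The sketch below (reusing the toy encoding introduced after the basic definitions) is illustrative only: it is not the authors' code, ties are broken naively by project name or iteration order, and floating-point tolerances are ignored.

```python
def greedy_cost_welfare(projects, cost, budget, approves):
    """Greedy cost welfare rule: rank projects by approval count and fund them
    greedily, skipping any project that no longer fits the remaining budget."""
    score = {p: sum(1 for ballot in approves.values() if p in ballot) for p in projects}
    selected, spent = [], 0.0
    for p in sorted(projects, key=lambda q: (-score[q], q)):  # naive tie-break by name
        if spent + cost[p] <= budget:
            selected.append(p)
            spent += cost[p]
    return selected


def method_of_equal_shares(projects, cost, budget, approves):
    """Method of Equal Shares for cost satisfaction, without any completion step."""
    voters = list(approves)
    share = {i: budget / len(voters) for i in voters}       # equal virtual budgets
    remaining, selected = set(projects), []
    while True:
        best, best_alpha, best_pay = None, None, None
        for p in remaining:
            supporters = [i for i in voters if p in approves[i]]
            if not supporters or sum(share[i] for i in supporters) < cost[p]:
                continue                                    # p is not affordable
            # Distribute c(p) as equally as possible: poorer supporters pay their
            # whole remaining share, the richer ones split what is left.
            pay, left = {}, cost[p]
            for k, i in enumerate(sorted(supporters, key=lambda j: share[j])):
                pay[i] = min(share[i], left / (len(supporters) - k))
                left -= pay[i]
            alpha = max(pay.values()) / cost[p]             # affordability of p
            if best_alpha is None or alpha < best_alpha:
                best, best_alpha, best_pay = p, alpha, pay
        if best is None:                                    # nothing affordable is left
            return selected
        selected.append(best)
        remaining.discard(best)
        for i, amount in best_pay.items():                  # the approvers buy p
            share[i] -= amount
```

On the toy instance, greedy_cost_welfare selects only p1 (3 approvals, cost 95), after which nothing else fits and voter 4 is left empty-handed; method_of_equal_shares gives each voter a virtual budget of 25, so p1 is never affordable to its three approvers, and the rule instead selects p2 and p3, leaving no voter without an approved funded project.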
We call a PB rule complete if given any profile, the budget allocation π it returns, cannot be extended by any project p∈∖π without exceeding the budget. The literature provides multiple approaches to `completing' a PB rule, two of which we consider in this project <cit.>. The first approach combines a given PB rule R with a second rule R' that is complete. Given a PB instance I and a profile A, we first apply the rule R to I and A to obtain a budget allocation π. Then we remove all projects p∈π from I and A and subtract the total cost of π from the budget limit. We apply the rule R' to the updated instance and profile to obtain a second budget allocation π'. Now, the budget allocation π∪π' is a feasible and complete budget allocation for the initial instance I. In the following, if we complete a PB rule R with GreedCost as the secondary rule, we denote the resulting rule as R^+.The second approach aims to complete a PB rule R by iteratively running the rule R on a PB instance I, where for each run, the budget of I is increased by ϵ > 0. If in some round i∈, the resulting budget allocation π_i is complete and feasible for the initial budget of I, we return π_i and terminate. If the resulting budget allocation π_i is not feasible for the initial budget of I, we return π_i-1 and terminate. Otherwise, we increase the budget by ϵ and continue to round i+1. Note that this approach of completing a PB rule does not always yield a complete budget allocation. It is nonetheless of practical importance, since it avoids using a different PB rule and is therefore arguably more `in the spirit of' rule R than the first approach. In the following, if we complete a PB rule R by this approach, we denote the resulting rule as R^*. Naturally, R^*+ should be read as (R^*)^+.In particular, mes^+ is a single run of mes followed by a run of GreedCost and mes^*+ is an iterated run of mes with increasing budgets followed by a run of GreedCost. § RESULTS Having defined the relevant notions of PB, we turn to the analysis of the data obtained from Amsterdam's PB instances. In this section, we first consider properties of the instances and profiles alone, such as the vote count, the project count and the project costs. We then turn to a comparison of the greedy cost welfare rule, GreedCost, with two different completions of the Method of Equal Shares, mes^+ and mes^*+. We first consider some general properties, such as overlap in the outcome of the rules and the median cost of selected projects; and then consider some fairness properties, such as the average satisfaction score and the Gini coefficient of the satisfaction scores. We conclude by analysing some typical example instances more qualitatively.The data received from the municipality of Amsterdam contained 43 PB instances, 4 of which lacked crucial data (either the ballots submitted or the cost of some projects). We disregarded these instances in our analysis. The remaining instances vary widely in the number of voters and the number of projects submitted. The smallest vote count is 66 and the largest 14411. The smallest project count is 3 and the largest is 97. As not to influence the statistical analysis disproportionally by outliers, we also disregarded all instances with fewer than 100 voters or fewer than 10 projects. The remaining 35 instances were used for our analysis. 
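Returning briefly to the completion schemes of the previous subsection before discussing the data: both can be written as thin wrappers around any rule with the signature used above. The sketch is one possible reading of the described procedures, not the authors' implementation; the choice of ϵ and the iteration cap are arbitrary.

```python
def complete_with_greedy(projects, cost, budget, approves, rule):
    """R^+ completion: run `rule`, then fill the leftover budget by running the
    greedy cost welfare rule on the projects that were not selected."""
    first = rule(projects, cost, budget, approves)
    leftover = budget - sum(cost[p] for p in first)
    rest = [p for p in projects if p not in first]
    return first + greedy_cost_welfare(rest, cost, leftover, approves)


def complete_by_budget_scaling(projects, cost, budget, approves, rule,
                               eps=1.0, max_rounds=10_000):
    """R^* completion: rerun `rule` with a step-wise increased virtual budget
    until the outcome is complete for the real budget or would exceed it."""
    previous, virtual = None, budget
    for _ in range(max_rounds):
        outcome = rule(projects, cost, virtual, approves)
        if sum(cost[p] for p in outcome) > budget:
            return previous                 # overshoot: keep the last feasible outcome
        leftover = budget - sum(cost[p] for p in outcome)
        if all(cost[p] > leftover for p in projects if p not in outcome):
            return outcome                  # complete with respect to the real budget
        previous, virtual = outcome, virtual + eps
    return previous                         # may still be incomplete, as noted in the text
```

With these wrappers, mes^+ corresponds to complete_with_greedy(..., rule=method_of_equal_shares), and mes^*+ to applying complete_by_budget_scaling to method_of_equal_shares first and then complete_with_greedy to the resulting rule. On the toy instance, method_of_equal_shares selects {p2, p3} with 40 left over, and the greedy completion step adds p4.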
All the data used in this project can be accessed via <http://pabulib.org/?city=Amsterdam>.The code used to convert the data to the standard data format PaBuLib (see <cit.>), to run the different rules and to statistically analyse the results, can be found at <https://gitlab.com/pctnelissen/empirical-analysis-of-participatory-budgeting>.§.§ Properties of the Instances and Profiles In Figure <ref>, the vote and project counts of all instances are plotted against each other. The median vote count is 3218 and the median project count is 34.[Both medians are plotted as a line, partitioning the instances in categories `small' and `large' per axis. All results in this section were studied for the data set as a whole, as well as partitioned into these size-based quadrants. However, all findings for the four quadrants were analogous to the findings for the data set as a whole, and are therefore omitted from this text.] All instances are coloured by the district in which they were organised and labelled by year. Note that some districts organised multiple PB instances in a single year. In those cases, the district was partitioned into multiple neighbourhoods and one PB instance was organised per neighbourhood.Figure <ref> displays some properties of the budget limits, projects and ballots of the 35 PB instances. In Figure <ref>, we see that the budget limits range from €30.000 to €500.000 with a median of €250.000. In Figure <ref>, we see the average project cost per instance, normalised by the budget limit of the instance. The average project cost ranges from 4% of the budget limit to 44%, with a median of 8%. In Figure <ref>, we see a measure of funding scarcity for each instance, namely the ratio between the total sum cost of all projects in an instance and the budget limit in the instance. The instance with the least funding scarcity has a total requested funding of 1.2 times the budget limit, and the highest funding scarcity is 9.6. The median is 3.5. Finally, in Figure <ref>, we see the average size of the ballots submitted in an instance, measured by the total cost of its approved projects and normalised by the budget limit of the instance. In the instance with the smallest average ballot, the average ballot approved projects costing 5% of the budget limit, whereas the average ballot approved 47% in the instance with the largest average ballot. The median is 20%. Note that for many instances, voters were bound by some cardinality or cost constraints while voting, which clearly influences the latter boxplot. §.§ Properties of the Greedy Rule and the Method of Equal Shares We turn our attention to the PB rules GreedCost, mes^+ and mes^*+, and analyse their performance on our data set.§.§.§ General PropertiesFigure <ref> plots four properties of the three PB rules. The similarity between two different sets of winners W_1 and W_2 for the same PB instance is defined as cost(W_1∩ W_2)/(1/2(cost(W_1) + cost(W_2))), where cost(W) is the total cost of all projects in a set W. Note that if W_1 and W_2 are complete (in the sense that adding another project to W_i exceeds the budget limit), then 1/2(cost(W_1) + cost(W_2)) is close to the budget limit. Further note that the similarity of W_1 and W_2 is 1 if and only if they are the same set, and it is 0 if and only if the sets are disjoint. We empirically measure the similarity of a rule R to GreedCost by the average similarity over all PB instances between the winners selected by R and the winners selected by GreedCost.
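The similarity measure just defined, together with two of the aggregate measures used in the next subsection (average cost satisfaction and the Gini coefficient), can be computed as follows. The helper names are ours and division-by-zero guards are largely omitted for brevity.

```python
def total_cost(W, c):
    """Total cost of a set of projects W under the cost function c."""
    return sum(c[p] for p in W)


def similarity(W1, W2, c):
    """cost(W1 ∩ W2) divided by the mean cost of W1 and W2:
    1 means identical winner sets, 0 means disjoint winner sets."""
    return total_cost(set(W1) & set(W2), c) / (0.5 * (total_cost(W1, c) + total_cost(W2, c)))


def average_cost_satisfaction(winners, approves, c, budget):
    """Mean over all voters of cost(approved and selected projects) / budget."""
    per_voter = [total_cost([p for p in winners if p in ballot], c) / budget
                 for ballot in approves.values()]
    return sum(per_voter) / len(per_voter)


def gini(values):
    """Gini coefficient of a list of non-negative values (0 = perfect equality)."""
    xs, n = sorted(values), len(values)
    total = sum(xs)
    if total == 0:
        return 0.0
    return 2 * sum((k + 1) * x for k, x in enumerate(xs)) / (n * total) - (n + 1) / n
```

The Gini coefficient of the per-voter cost satisfactions (the list built inside average_cost_satisfaction, before averaging) is the equality measure reported for the three rules in the results below.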
Thus, in Figure <ref> we read that mes^+ allocates an average of 17% of the budget differently than GreedCost does, and for mes^*+ this is 21%.Furthermore, we see that the number of winners selected by mes^+ and mes^*+ is higher than for GreedCost; and, connected to this, the median cost of the projects selected by mes^+ or mes^*+ is lower than the median cost selected by GreedCost.Finally, 15 of the PB instances in our data set contain categorisations of the submitted projects; for example, one instance differentiates between projects aimed at youth, projects aimed at green spaces and projects aimed at social cohesion. For these instances, we can compare the distribution of funding for projects in the different categories found in the ballots, versus the distribution of funding by a PB rule R. Given a profile A and a set of projects C⊆, the proportion of funds allocated to C by the voters isq(C) := 1/n∑_i∈(C∩ A_i)/(A_i).Given a PB rule R, the proportion of funds allocated to C by R isq_R(C) := (C∩ R(A))/(R(A)).We empirically measure the category disproportionality of a rule R by taking the square root of the mean squared difference between q(C) and q_R(C) for all categories C⊆ identified in an instance. The category proportionality, being the inverse of category disproportionality, is obtained by applying x↦ e^-x. Note that in Figure <ref>, category proportionality is the only measure that does not significantly change from GreedCost to mes^+ or mes^*+.§.§.§ Fairness PropertiesPB rules can be compared by different measures of fairness. Figure <ref> displays four such measures. Given a budget limit b, a profile A and a PB rule R, the average cost satisfaction is the average of (R(A)∩ A_i)/b over all voters i∈. That is, the cost satisfaction of a voter is the total cost of the selected projects that the voter approves of, normalised by the budget limit. Figure <ref> plots the mean average cost satisfaction over all instances in our data set.Furthermore, Figure <ref> plots the Gini coefficient (a well-known measure of the equality of a distribution of resources) of the cost satisfaction of all voters, and the Gini coefficient of the effort (or share) welfare measure. The latter is a notion introduced by <cit.> to measure the `effort' put in by decision makers to satisfy a voter. As such, the effort Gini coefficient expresses to what extent the decision makers distribute their `efforts' fairly.Finally, we call a voter happy if at least one of the projects she approves of, is selected. The overall happiness of voters is the percentage of voters that are happy, i.e., that are not left empty-handed by the PB rule. §.§ Qualitative Examples To illustrate qualitatively the differences between the three rules considered above, we study three example instances. The examples were selected to illustrate the largest negative effect that switching from GreedCost to mes^*+ would have on fairness and category proportionality, the median effect it would have and the largest positive effect it would have (within the instances of the data set that contain category information). The examples were selected by taking the minimal, median and maximal average increase of equality and category proportionality, as defined in Sections <ref> and <ref>.§.§.§ Instance 522: The Largest Negative Effect The largest negative effect of switching from GreedCost to mes^*+ occurred in the instance in neighbourhood Diamantbuurt in 2021, identified by ID 522. 
The following list summarises how many projects were selected in each category by the different rules.[The exact projects selected by the rules in instance 522 can be found in Appendix <ref>.]* Selected by both rules (€63.300): * Streets, greenery & squares: 2 projects (€50.000)* Doing things together: 1 project (€7.000)* Health, well-being & opportunities: 1 project (€6.300) * Selected by GreedCost only (€29.000): * Streets, greenery & squares: 1 project (€29.000) * Selected by mes^*+ only (€28.250): * Streets, greenery & squares: 1 project (€20.000)* Doing things together: 1 project (€8.250)Note that the largest part of the budget, €63.300, is allocated identically by GreedCost and mes^*+. The rule GreedCost supplements this common allocation by one project of category `Streets, greenery & squares', while mes^*+ supplements the common allocation by two cheaper projects from the categories `Streets, greenery & squares' and `Doing things together'.Figure <ref> demonstrates the category proportionality of GreedCost and mes^*+. In green, we see the distribution of funds over the different categories directly by the voters, and in red and blue we see the distribution that the two PB rules bring about. The closer the red or blue values are to the green values, the more proportional the allocation is. In the case of PB instance 522, we see that GreedCost (surprisingly) performs slightly better than mes^*+.Figure <ref> demonstrates the distribution of the cost satisfaction over all voters: the x-axis enumerates all voters from lowest cost satisfaction to highest cost satisfaction, and the value plotted is the total cost of all projects selected by the respective rule that the voter approves of. In green, we see this distribution if only those projects are funded that are selected by both GreedCost and mes^*+. As such, the red and blue areas illustrate the unique contribution that the respective rule makes to the satisfaction distribution. Note that for voters with low satisfaction, mes^*+ is preferable to GreedCost, whereas for most other voters it is the other way around.§.§.§ Instance 613: The Median Effect The median effect of switching from GreedCost to mes^*+ occurred in the instance in neighbourhood Noord Oost in 2022, identified by ID 613. We analyse the example analogously to the above. The projects selected in the different categories are the following.[The exact projects selected by the rules in instance 613 can be found in Appendix <ref>.]* Selected by both rules (€201.678): * Equal opportunities & education: 4 projects (€53.765)* Meeting & connection: 4 projects (€49.130)* Youth activities: 2 projects (€46.065)* Sports & exercise: 2 projects (€28.938)* Safety: 2 projects (€23.780) * Selected by GreedCost only (€59.000): * Meeting & connection: 1 project (€59.000) * Selected by mes^*+ only (€57.781): * Equal opportunities & education: 1 project (€5.404)* Meeting & connection: 4 projects (€19.782)* Sports & exercise: 5 projects (€26.095)* Art & creativity: 1 project (€6.500)Note that in this case, mes^*+ replaces a single project selected by GreedCost by a list of 11 projects in diverse categories.In Figure <ref>, we see that mes^*+ is slightly more proportional than GreedCost for instance 613. 
And in Figure <ref>, we see again that voters with low satisfaction should prefer mes^*+, whereas most voters should prefer GreedCost.§.§.§ Instance 644: The Largest Positive Effect Finally, the largest positive effect of switching from GreedCost to mes^*+ occurred in the instance in neighbourhood Geuzenveld Slotermeer in 2022, identified by ID 644. The projects selected in the different categories are the following.[The exact projects selected by the rules in instance 644 can be found in Appendix <ref>.]* Selected by both rules (€118.500): * Public space: 3 projects (€101.500)* Social: 1 project (€7.000)* Green: 1 project (€5.000)* Other: 1 project (€5.000) * Selected by GreedCost only (€147.000): * Public space: 2 projects (€147.000) * Selected by mes^*+ only (€147.260): * Public space: 2 projects (€10.000)* Social: 7 projects (€92.760)* Green: 2 projects (€17.000)* Culture: 2 projects (€21.500)* Other: 1 project (€6.000)Note that in this case, mes^*+ replaces two projects of the same category selected by GreedCost (representing more than half of the budget) by a list of 14 projects in diverse categories.In Figure <ref>, we see that mes^*+ is far more proportional than GreedCost for instance 644. And in Figure <ref>, we see a similar effect as above, but stronger: voters with low satisfaction should prefer mes^*+, whereas voters with high satisfaction should prefer GreedCost.§ CONCLUSION AND DISCUSSION We analysed the greedy cost welfare and two different completions of the Method of Equal Shares, using 35 PB instances conducted in Amsterdam between 2019 and 2022. Our primary conclusion is that the difference in budget allocation by these rules is significant: mes^+ allocates 17% of the budget differently from GreedCost, and for mes^*+ this is 21%. Which rule should be preferred, depends on the aims of a particular PB instance. However, the results obtained in this project can offer guidance to authorities organising a PB instance.In terms of fairness, Figure <ref> shows that the Method of Equal Shares generates results that can be characterised as `more fair' or `more equal' than the greedy cost welfare rule, at the expense of total (i.e., average) voter satisfaction. The slightly increased average equality and fairness of the outcome under the Method of Equal Shares, is due to an increase of (cost and effort) satisfaction for a minority at the bottom of the distribution, at the expense of the satisfaction of the majority at the top of the distribution. The increase of happiness from 94% to 96% or 97% seems a rather small increase at first sight, but it roughly halves the number of citizens that do not approve of a single selected project. For the median vote count of roughly 3.000, this translates to about 70 extra voters not being left empty-handed. Arguably, the increase of three different measures of fairness outweighs the slightly decreased average cost satisfaction.Other properties of the PB rules might also be relevant for designing a PB process: category proportionality, median selected cost and explainability. The category proportionality of the Method of Equal Shares seems to be higher than that of the greedy cost welfare rule, but the difference is not statistically significant in our data set. However, from the examples studied above, it seems that the Method of Equal Shares selects a more diverse set of projects than the greedy cost welfare rule, resulting in funding a set of projects that is more representative of the voters' preferences. 
As a result of selecting more (and thus smaller) projects, the median selected cost by the Method of Equal Shares is considerably lower than by the greedy cost welfare rule. Whether benefiting cheaper projects over more expensive projects is an asset or a drawback, is highly context-dependant. Finally, the Method of Equal Shares is harder to explain and understand than the greedy cost welfare rule. As a consequence, voters might have more trust in the greedy cost welfare rule, because they better understand how the budget allocation is decided. For this reason, a popular explanation of the Method of Equal Shares is included in the appendix.All in all, we conclude that the Method of Equal Shares and the greedy cost welfare rule yield considerably different budget allocations. The Method of Equal Shares arguably allocates the budget more fairly and equally, and selects a more proportional, diverse set of cheaper projects; all at the expense of a slight decrease in average voter satisfaction.§ A POPULAR EXPLANATION OF THE METHOD OF EQUAL SHARES (ENGLISH)§.§ Visual Explanation In Participatory Budgeting (PB), we aim to democratically distribute a budget over some items. For example, six housemates have €90 to spend on a vacuum cleaner, a casserole, a mixer and a plant, but they cannot afford all items. Each housemate submits a ballot stating which items they would like. A Participatory Budgeting rule decides which items are funded.The Greedy Cost Welfare (GreedCost) selects the items with the most votes: the vacuum and the casserole.The Method of Equal Shares (MES) distributes the budget evenly over all voters, and simulates coalitions buying the items for which the cost can be distributed most evenly. It selects the vacuum, mixer and plant. Note that now no voter is left empty-handed. §.§ Textual Explanation The goal of Participatory Budgeting (PB) is to democratically decide which projects a government should fund and which projects it should reject. A PB process consists of collecting possible projects, organising an election, and announcing which projects will be funded. Between the last two steps, votes are counted and winning projects are selected. This can be done in multiple ways, including the `Method of Equal Shares' (MES). Below, we explain how MES works. Another explanation can be found at <https://equalshares.net/explanation>.The aim of MES is to distribute the budget as fairly as possible; every voter should have an equal influence on the outcome. The voting rule tries to accomplish this by virtually distributing the budget among all voters, and allowing voters to `buy' projects together. When a voter's virtual wallet is empty, the voter no longer has any influence on which additional projects are funded.Voting is done with approval ballots. This means that each voter indicates which projects they approve of and which projects they reject. The MES rule is then used to count the votes.MES can be seen as a simulation of voters forming coalitions to `buy' projects together. The simulation consists of the following steps. * The total budget is evenly distributed among all voters: each voter has a virtual wallet with an equal amount of money.* For each project, we compute whether the approvers of the project have enough money in their virtual wallets to buy the project together. If not, the project is not funded.* If multiple projects remain, we consider how fairly the costs of each project can be distributed among its approvers. 
The project for which the costs can be distributed most fairly, is bought by its approvers. The costs are subtracted from the virtual wallets, and the simulation returns to step 2.* When each project has been rejected or bought, the simulation ends. How fairly the costs of a project can be distributed, is determined as follows.Initially, the costs of a project are distributed equally among its approvers. However, some voters might have already `bought' other projects and, therefore, might not have enough money left to pay this equal share. If so, these voters pay their entire remaining budgets. The costs they cannot cover, are divided equally among the other approvers that still have enough money in their wallets.As such, the distribution of the costs becomes somewhat unequal. The inequality of the distribution is calculated by dividing the contribution by each voter, γ, by the total cost of the project, c. We thus obtain a value α = γ/c for each voter. Assuming that the total cost of a project is equal to the satisfaction the project provides, α is the cost per unit satisfaction paid by a voter. When the costs can be distributed perfectly equally, α is low for all approvers of a project. When the costs are distributed very unequally, the α of the largest contributor is much larger than that of smaller contributors. The cost distribution of the project with the lowest maximum α is therefore considered the most fair.All in all, MES simulates what voters would do if they were all given an equal share of the budget, and were allowed to (as fairly as possible) form coalitions to realise projects. In this way, each voter has an equal influence on the selection of projects, and we achieve a proportional and fair budget allocation.§ A POPULAR EXPLANATION OF THE METHOD OF EQUAL SHARES (DUTCH)§.§ Visuele uitleg Het doel van Participatieve Begroting (PB) is om op democratische wijze een begroting op te stellen. Bijvoorbeeld, zes huisgenoten hebben €90 te besteden aan een stofzuiger, een pan, een mixer en een plant, maar ze kunnen niet alle items betalen. Dus geeft elke huisgenoot aan welke items ze willen hebben. Een Participatieve Begrotingsregel beslist welke items ze kopen.De Greedy Cost Welfare (GreedCost) regel kiest de items met de meeste stemmen: de stofzuiger en de pan.De Methode van Gelijke Delen (MES) verdeelt het budget eerlijk over alle stemmers en simuleert coalities die de items kopen. De kosten worden zo gelijkmatig mogelijk verdeeld. MES selecteert de stofzuiger, de mixer en de plant. Daardoor blijft geen enkele stemmer met lege handen achter. §.§ Tekstuele uitleg dutch Het doel van Participatieve Begroting (PB) is om democratisch te beslissen welke projecten een overheid moet financieren en welke projecten zij moet afwijzen. Een PB proces bestaat uit het verzamelen van mogelijke projecten, het organiseren van een verkiezing en een bekendmaking van welke projecten gefinancierd worden. Tussen de laatste twee stappen worden de stemmen geteld en wordt er bepaald welke projecten winnen. Dat kan op meerdere manieren, waaronder de `Methode van Gelijke Delen', ofwel `Method of Equal Shares' (MES). Hieronder leggen we uit hoe MES werkt. Een andere uitleg staat op de website <https://equalshares.net/explanation>.Het doel van MES is om het budget zo eerlijk mogelijk te verdelen; elke kiezer moet even veel invloed hebben op de uitslag. Dat doet deze stemregel door het budget virtueel te verdelen over alle kiezers, en de kiezers samen projecten te laten `kopen'. 
Wanneer de virtuele portemonnee van een kiezer leeg is, heeft de kiezer geen invloed meer op welke projecten nog meer gefinancierd worden.Het stemmen gebeurt met goedkeurings-biljetten. Dat betekent dat elke kiezer op diens stembiljet aangeeft welke projecten die goedkeurt en welke projecten die afkeurt. Vervolgens wordt de MES-regel gebruikt om de stemmen te tellen.MES kan gezien worden als een simulatie van kiezers die coalities vormen om samen projecten te `kopen'. De simulatie bestaat uit de volgende stappen. * Het totale budget wordt eerlijk verdeeld over alle kiezers. De kiezers hebben dus een virtuele portemonnee met elk even veel geld.* Voor elk project wordt gekeken of alle kiezers die dit project goedkeuren, samen genoeg geld in hun virtuele portemonnees hebben om het project te kopen. Is dat niet zo, dan wordt het project niet gefinancierd.* Als er meerdere projecten overblijven, wordt er per project gekeken hoe eerlijk de kosten van het project verdeeld kunnen worden onder de kiezers die het project goedkeuren. Het project waarbij dat het meest eerlijk kan, wordt door de kiezers gekocht. De kosten worden uit de virtuele portemonnees gehaald en de simulatie gaat terug naar stap 2.* Wanneer elk project is afgekeurd of gekocht, eindigt de simulatie. Hoe eerlijk de kosten van een project verdeeld kunnen worden, wordt als volgt bepaald.Aanvankelijk worden de kosten volledig gelijk verdeeld over de kiezers die het project goedkeuren. Het zou echter kunnen dat sommige kiezers al andere projecten `gekocht' hebben en dus niet genoeg geld over hebben om dit gelijke deel te betalen. Als dat het geval is, betalen deze kiezers hun gehele overgebleven budgetten. De kosten die zij niet kunnen dekken, worden gelijk verdeeld over de andere kiezers die het project steunen en nog wel genoeg geld in hun portemonnee hebben.De verdeling van de kosten wordt zo dus enigszins ongelijk. Hoe ongelijk de verdeling is, wordt berekend door de bijdrage die een kiezer moet leveren, γ, te delen door de totale kosten van het project, c. Die waarde is dus α = γ/c. Onder de aanname dat de totale kosten van een project gelijk zijn aan de voldoening die het project oplevert, zijn α dus de betaalde kosten per eenheid voldoening. Wanneer de kosten volledig gelijk verdeeld kunnen worden, is α laag voor alle kiezers die het project ondersteunen. Wanneer de kosten zeer ongelijk verdeeld worden, is de α van de grootste bijdrager veel groter dan die van kleinere bijdragers. De kostenverdeling van het project met de laagste maximale α wordt daarom als het meest eerlijk gezien.Al met al simuleert MES wat kiezers zouden doen, als we hun inderdaad allemaal een gelijk deel van het budget zouden geven en ze vrij zouden zijn om (zo eerlijk mogelijke) coalities te vormen om projecten te realiseren. Zo geeft de regel elke kiezer even veel invloed en bereikt MES een meer proportionele en gelijke verdeling van het budget. § PROJECTS (WITH CATEGORY AND COST) SELECTED IN INSTANCE 522* Selected by both rules (€63.300): * A greener and more pleasant Robijn Square! 
(Streets, greenery & squares, €30.000)* Smaragd Street: Meeting place at the entrance of the Diamond neighborhood (Streets, greenery & squares, €20.000)* An Iftar, a neighbourhood party and a Christmas meal (Doing things together, €7.000)* First aid for cardiac arrest - an AED can save lives in the neighbourhood (Health, well-being & opportunities, €6.300) * Selected by GreedCost only (€29.000): * Container gardens against litter and more greenery (Streets, greenery & squares, €29.000) * Selected by mes^*+ only (€28.250): * Replacement of playgrounds at Smaragd Square/Street (Streets, greenery & squares, €20.000)* Musical gems of the Diamond neighbourhood (Doing things together, €8.250)§ PROJECTS (WITH CATEGORY AND COST) SELECTED IN INSTANCE 613* Selected by both rules (€201.678): * Trips and activities for children (Youth activities, €13.139)* Homework guidance (Equal opportunities & education, €9.115)* Sewing lessons to combat loneliness and debt (Meeting & connection, €9.600)* The eyes and ears of and in our neighbourhoods (Safety, €9.780)* Shade in the pasture for livestock/livability for animals (Meeting & connection, €4.600)* Girls Kick-Off (Sports & exercise, €11.638)* 30 worm hotels/facade benches on sidewalks of North-East (Meeting & connection, €25.350)* Safety and greenery... you have to do it! (Safety, €14.000)* Well-prepared for the transition to secondary education (Equal opportunities & education, €31.650)* Atelier Nieuwendam (Equal opportunities & education, €4.120)* Addressing loitering youth in the neighbourhood (Youth activities, €32.926)* Sports and exercise for girls and women (Sports & exercise, €17.300)* How do we promote equal opportunities in education? (Equal opportunities & education, €8.880)* Soup at the Waaltje (Meeting & connection, €9.580) * Selected by GreedCost only (€59.000): * Lower garden Plan van Gool (Meeting & connection, €59.000) * Selected by mes^*+ only (€57.781): * The 5 World Countries Fashion Show (Meeting & connection, €3.298)* Sports opportunities for immigrant women (Sports & exercise, €13.075)* Flower power (Meeting & connection, €3.104)* Go for a colourful Zunderdorp! 
(Meeting & connection, €6.055)* Walking buddies (Sports & exercise, €6.340)* Neighbourhood party with a food truck, DJ and an MC (Meeting & connection, €7.325)* Sports and a board in Naardermeerstraat (Sports & exercise, €2.260)* Sports and a board in de Kleine Wereld (Sports & exercise, €2.160)* The Bea Creas (Equal opportunities & education, €5.404)* Sports and a board in Plan van Gool (Sports & exercise, €2.260)* Music and culture in nature for the neighborhood (Art & creativity, €6.500)§ PROJECTS (WITH CATEGORY AND COST) SELECTED IN INSTANCE 644* Selected by both rules (€118.500): * Play park for all ages (Public space, €60.000)* Repair shop for walkers and mobility scooters (Other, €5.000)* Community dinner for the homeless (Social, €7.000)* Everything market (Public space, €11.500)* Container gardens Noordzijde (Public space, €30.000)* Benches in the Tuinen van West nature reserve (Green, €5.000) * Selected by GreedCost only (€147.000): * Water fountains/trick fountains at Lambertus Zijlplein (Public space, €75.000)* Makeover of Confucius playground (Public space, €72.000) * Selected by mes^*+ only (€147.260): * Ping pong in Noorderhof (Public space, €5.000)* Empower Women: training and sports lessons (Social, €7.700)* Keep New West Clean: speak out and make your street your second home (Culture, €15.500)* Trunk market in Geuzenveld, Slotermeer (Culture, €6.000)* Girls of New West in their power (Social, €6.700)* Painting concrete blocks at Jan de Jonghkade (Public space, €5.000)* Karaoke New West (Social, €7.500)* Sports facilities for and by young people (Social, €9.400)* A greenhouse to give away plants in the Volkstuinpark de Bretten (Green, €10.000)* Recycled tanks for rainwater harvesting (Green, €7.000)* Addressing learning deficiencies caused by COVID-19 (Social, €23.000)* Kumbet Foundation against elderly loneliness (Social, €30.000)* Less conflict and more connection through workshops for young people (Other, €6.000)* Activities for lonely women in New West (Social, €8.460) | http://arxiv.org/abs/2310.18033v1 | {
"authors": [
"Pelle Nelissen"
],
"categories": [
"cs.GT",
"econ.GN",
"q-fin.EC"
],
"primary_category": "cs.GT",
"published": "20231027101426",
"title": "An Empirical Analysis of Participatory Budgeting in Amsterdam"
} |
0000-0002-8163-4608]Shang-Min Tsai Department of Earth and Planetary Sciences, University of California, Riverside, CA, USAUniversité Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, Nice, France 0000-0002-6907-4476]João M. Mendonça National Space Institute, Technical University of Denmark, Elektrovej 328, DK-2800 Kgs. Lyngby, Denmark 0000-0003-2278-6932]Xianyu Tan Tsung-Dao Lee Institute, Shanghai Jiao Tong University, 520 Shengrong Road, Shanghai, People’s Republic of China School of Physics and Astronomy, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai, People’s Republic of China 0000-0001-9423-8121]Russell Deitrick School of Earth and Ocean Sciences, University of Victoria, Victoria, British Columbia, Canada 0000-0002-6893-522X]Mark Hammond Atmospheric, Oceanic and Planetary Physics, Department of Physics, University of Oxford, UK 0000-0002-2454-768X]Arjun B. Savel Center for Computational Astrophysics, Flatiron Institute, New York, USA Astronomy Department, University of Maryland, College Park, 4296 Stadium Dr., College Park, USA 0000-0002-8706-6963]Xi Zhang Department of Earth and Planetary Sciences, University of California Santa Cruz, Santa Cruz, CA, USA 0000-0002-5887-1197]Raymond T. Pierrehumbert Atmospheric, Oceanic and Planetary Physics, Department of Physics, University of Oxford, UK0000-0002-2949-2163]Edward W. Schwieterman Department of Earth and Planetary Sciences, University of California, Riverside, CA, USAThe atmospheric dynamics of tidally-locked hot Jupiters is dominated by the equatorial winds. Understanding the interaction between global circulation and chemistry is crucial in atmospheric studies and interpreting observations. Two-dimensional (2D) photochemical transport models shed light on how the atmospheric composition depends on circulation. In this paper, we introduce the 2D photochemical transport model, VULCAN 2D, which improves on the pseudo-2D approaches by allowing for non-uniform zonal winds. We extensively validate our VULCAN 2D with analytical solutions and benchmark comparisons. Applications to HD 189733 b and HD 209458 b reveal distinct characteristics in horizontal transport-dominated and vertical mixing-dominated regimes. Motivated by the inferred carbon-rich atmosphere by <cit.>, we find that HD 209458 b with super-solar carbon-to-oxygen ratio (C/O) exhibits pronounced C2H4 absorption on the morning limb but not on the evening limb, owing to horizontal transport from the nightside. We discuss when a pseudo-2D approach is a valid assumption and its inherent limitations. Finally, we demonstrate the effect of horizontal transport in transmission observations and its impact on the morning-evening limb asymmetry with synthetic spectra, highlighting the need to consider global transport when interpreting exoplanet atmospheres.§ INTRODUCTIONWe are entering the era of detailed exoplanet atmosphere characterization. The atmospheric characterization has come a long way since the first transit observation with the Hubble Space Telescope (HST) <cit.>. Recent JWST spectral measurements revealed unprecedented details of gas giants <cit.>. Transmission spectroscopy is central for probing atmospheric composition. To improve our ability to interpret observations through theoretical models, it is important to consider how the temperature and composition at the terminators are shaped by global circulation. 
This aspect cannot be addressed by traditional 1D models that omit variations and processes in the horizontal direction.Since transmission observations probe terminator regions that are composed of opposite sides of the planet, recent works have highlighted the need to account for spatial inhomogeneities. <cit.> and <cit.> introduced methods to separate the atmospheric components from the morning (leading) and evening (trailing) limbs. The advancement of high-resolution spectroscopy also expands our capacity to resolve the climate <cit.> and chemical variations <cit.> across the planet. Atmospheric retrieval frameworks are beginning to extend to 2D and 3D to prevent biases from fitting the data with 1D profiles <cit.>. As we gain more complete observations from transit, eclipse, and phase curves, we will soon have the ability to probe the composition distributions across the planet. It is essential to have models resolving global variations to support the progress of observations.The atmospheric composition of gas giants is generally governed by thermochemistry, photochemistry, and mixing processes. The merit of 1D models is they can incorporate all the above processes as needed. However, the 1D structure intrinsically neglects the variations in 3D, and more importantly, how the global circulation impacts the local properties <cit.>. 3D general circulation models (GCMs) can capture the intricate dynamic interactions across the planet and have become commonly applied to interpret observations <cit.>. However, simplifications of the physical and chemical processes are usually required due to computation limitations.In addition to reducing the full Navier–Stokes dynamics into primitive equations <cit.>, 3D GCMs often make further simplifications to radiative transfer and chemical processes. The chemical processes in particular are often severely simplified in most applications, using the assumption of thermochemical equilibrium, which is valid only at high temperatures and pressures. Modeling efforts have been put into implementing reduced chemical schemes in a GCM to account for transport-induced disequilibrium. While Earth climate models have provided insights into the oxygen response of Earth-like atmospheres <cit.>, to date,the incorporation of photochemistry into 3D models for diverse non-Earth-like atmospheres has not been realized yet. For most tidally-locked giant exoplanets, global circulation features an equatorial superrotating jet. For these planets, “pseudo-2D" photochemical models <cit.> that employ a rotating 1D column to mimic a uniform jet in a Lagrangian frame have emerged as useful complementary tools. Their relatively fast computation enables them to incorporate the same detailed mechanisms as 1D models, but the simulated circulation is limited to uniform winds[Strictly speaking, it is the angular velocity held constant in a rotating 1D model. The top layer of the equivalent flow moves faster than the bottom by H_atmω, where H_atm is the atmospheric depth in the model and ω is the assumed rotational frequency applied to simulate the equatorial jet.]. On the other hand, 2D models implemented with horizontal diffusion have been used to model the meridional plane for solar system gas giants <cit.> but have not yet been applied to exoplanets. In this work, we present the 2D photochemical-transport model, VULCAN 2D, which lifts this restriction on wind patterns while sharing the same advantage as pseudo-2D models. 
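As a side note on the uniform-jet picture used by pseudo-2D models (and the footnote above), the short Python sketch below converts an assumed rotation rate of the 1D column into the equivalent zonal wind at each altitude, showing that the top of the equivalent flow moves faster than the bottom by H_atm ω. The planetary radius, rotation period, and layer depths here are illustrative placeholders, not values from any specific model.

import numpy as np

# Illustrative values (not taken from any particular planet or model)
R_p = 8.0e7           # planetary radius at the bottom of the model [m]
P_rot = 2.4 * 86400   # assumed rotation period of the 1D column [s]
omega = 2.0 * np.pi / P_rot        # angular velocity of the rotating column [rad/s]

# Altitudes of the model layers above the reference radius [m]
z = np.linspace(0.0, 1.0e6, 5)     # e.g. a 1000 km deep model atmosphere

# Equivalent zonal wind of the rotating column at each altitude: every layer
# shares the same angular velocity, so u grows linearly with radius.
u = omega * (R_p + z)

print("u(bottom) = %.0f m/s, u(top) = %.0f m/s" % (u[0], u[-1]))
print("difference = %.0f m/s (equals H_atm * omega = %.0f m/s)"
      % (u[-1] - u[0], omega * (z[-1] - z[0])))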
In Section <ref>, we describe the construction of the VULCAN 2D model. In Section <ref>, we validate the numerical scheme and modeling results through comparisons to an analytical solution, a pseudo-2D approach, and a 3D GCM, respectively. In Section <ref>, we delve into the applications to canonical hot Jupiters, HD 189733 b and HD 209458 b, including a super-solar C/O scenario for HD 209458 b. We use limiting cases to demonstrate the roles of horizontal and vertical transport and draw comparisons with previous works <cit.>. We then compare VULCAN 2D to pseudo-2D models in Section <ref>, exploring when the uniform wind assumption in pseudo-2D models may no longer hold. Finally, we discuss the observational implications and the morning-evening limb asymmetry arising from horizontal transport in Section <ref>. § CONFIGURATION OF THE 2D CHEMICAL TRANSPORT MODEL §.§ The 2D Grid A standard 1D photochemical kinetics model solves the continuity equation in the form of a set of coupled partial differential equations: ∂ n(z, t)/∂ t = P - L - ∂ϕ/∂ z, where n is the number density (cm^-3) of each species and t denotes the time. P and L are the chemical production and loss rates (cm^-3 s^-1) of the corresponding species at each vertical layer <cit.>. Equation (<ref>) can be readily generalized to a 2D Cartesian space (x, z) to include horizontal transport: ∂ n(x, z, t)/∂ t = P - L - ∂ϕ_z/∂ z - ∂ϕ_x/∂ x, where ϕ_z and ϕ_x are the vertical (z) and horizontal (x) transport fluxes, respectively. In general, ϕ_z encompasses vertical advection, eddy diffusion, and molecular diffusion <cit.>, whereas ϕ_x describes horizontal transport on isobaric surfaces. Using isobaric flux is motivated by the fact that previous analysis of 3D GCMs is traditionally done in isobaric coordinates and winds on the isobaric levels typically dominate heat and chemical transport on tidally-locked planets <cit.>. We note that we use log-pressure coordinates with the vertical coordinate defined as z = -H ln(p/P_s), where H is the pressure scale height and P_s is the reference pressure. While the base log-pressure levels remain the same, the corresponding z differs between columns due to the horizontal temperature gradient. Therefore, the horizontal derivative is evaluated at the same pressure level but not the same geometric height. The configuration of our 2D grid is illustrated in Figure <ref>. The real challenge lies in numerically solving Equation (<ref>). Most of the numerical methods for stiff equations require evaluating the Jacobian matrix <cit.>. In a 1D system, the Jacobian matrix is neatly constructed through nested loops over each species within each vertical layer (see Figure 14 in <cit.>). For example, the Rosenbrock method with a band matrix solver is employed in VULCAN. However, this structure breaks down when extending to 2D. To take advantage of the established numerical solver built for 1D systems, we apply an asynchronous integrator within each timestep <cit.>. Specifically, the right-hand side of Equation (<ref>) is first discretized in x as d n_x/d t = P_x - L_x - (ϕ_z^i,j+1/2,k - ϕ_z^i,j-1/2,k)/Δ z - (ϕ_x^i,j,k+1/2 - ϕ_x^i,j,k-1/2)/Δ x, where i, j, k denote the species, vertical, and horizontal indices, respectively, and the +1/2 and -1/2 represent the interfaces enclosing that layer. Equation (<ref>) for each x describes one vertical column. Since the last term (i.e.
the horizontal transport flux) on the right-hand side of Equation (<ref>) only contributes to the diagonal elements in the Jacobian matrix, Equation (<ref>) has the same numerical structure as that in a 1D system (Equation (5) in <cit.>). This allows us to evaluate each column, including horizontal transport, using the existing 1D solver. Specifically, Equation (<ref>) is computed for each x column within each timestep. Although errors can arise from the asynchronous update of the horizontal transport flux associated with each column, we have tested integrating the columns in different orders and found the errors to be negligible. For the horizontal advection, we adopt a first-order upwind difference scheme, where the local concentration is affected by the upwind cell only (same as the vertical advection described in <cit.>). The advective flux through the left interface of the k-th cell in the x direction is
ϕ_x-1/2 = v_k-1/2 n_k-1 for v_k-1/2 > 0, and ϕ_x-1/2 = v_k-1/2 n_k for v_k-1/2 < 0,
and through the right interface
ϕ_x+1/2 = -v_k+1/2 n_k for v_k+1/2 > 0, and ϕ_x+1/2 = -v_k+1/2 n_k+1 for v_k+1/2 < 0,
where v_k-1/2 and v_k+1/2 are the zonal wind velocities at the left and right interfaces of the grid cell k, respectively. In addition to advective transport, horizontal diffusion contributes to meridional transport on Jupiter and Saturn <cit.> and is also implemented in VULCAN 2D. However, as horizontal winds can be directly obtained from GCM output, we focus only on advection for horizontal transport in this study. The advantage of our 2D spatial grid is that it accommodates any two-dimensional flow pattern, such as the mean circulation derived from a 3D GCM. In contrast, for pseudo-2D models that employ a Lagrangian rotating 1D column <cit.>, the horizontal transport is restricted to a uniform jet by design. Both our 2D model and the pseudo-2D model share a limitation known as the Courant–Friedrichs–Lewy (CFL) condition <cit.>, as already pointed out by <cit.>. Given that we explicitly solve for each vertical column, the integration step size must be constrained by the time it takes for the horizontal flow to travel to neighbouring grid cells. This constraint becomes more stringent as the number of horizontal columns increases. In practice, to reduce the overall integration time to achieve convergence, we initially run our 2D model without horizontal transport (thereby removing the stepsize restriction) to attain the 1D steady state. We then run the full 2D VULCAN from this steady state (achieved without horizontal transport) with the stepsize limited by the CFL condition, dt < min(dx/v_x), until the final steady state. § VALIDATION AND COMPARISON TO PREVIOUS WORK §.§ Validation with analytical solutions <cit.> derived an analytical solution for a 2D advective-diffusive system with parameterized chemical sources and sinks. We apply their analytical solution for a 2D zonal plane to validate our numerical model. To briefly recap, Equation (13) in <cit.> describes the steady-state abundance as a function of longitude and altitude, governed by vertical diffusion, horizontal advection, and chemical sources and sinks.
We assume vertically uniform diffusion to simplify the expression, i.e., with α = γ = 0, the analytical solution for the volume mixing ratio under day-night sinusoidal chemical production (k = 1) is χ(λ, ξ) = P_0/L_0 [ 1 + sin(λ - ϕ)/√(1+q^2) ] e^-ξ, where χ is the volume mixing ratio, λ is longitude, ϕ is the phase shift, ξ is the vertical coordinate, P_0 and L_0 are the production and loss rates, respectively, and q relates the ratio of advection to chemical timescales, following the same notation as <cit.>. To compare with the analytical solution, we implemented a mock chemical network with only A and B as reactive species. The rate constant of A -> B is given by L_0 N_0 e^-ξ/[A] and that of B -> A is given by P_0 N_0 (1 + sin(kλ))/[B], corresponding to the loss and production prescriptions in Section 5 of <cit.>. We assume constant zonal winds of 71.5 m/s and R_p = R_J, together with P_0 N_0 = 2 × 10^-7 cm^-3 s^-1 and L_0 N_0 = 2 × 10^-7 cm^-3 s^-1 to yield q = 1. The vertical diffusion is set to 10^8 cm^2 s^-1, but the horizontal distribution does not directly depend on this value. Figure <ref> compares our 2D numerical model to the analytical solution. The impact of the horizontal wind is twofold: it both transports and homogenizes the chemical gradient. While our numerical results show somewhat lower peak amplitudes than the analytical solution, which can likely be attributed to numerical diffusion, they effectively capture the overall distribution shape and correctly reproduce the "phase shift" due to eastward transport. §.§ Comparison with the pseudo-2D approach of <cit.> <cit.> apply a rotating 1D model to represent an air column moving with a uniform zonal jet in a Lagrangian frame. Their chemical kinetics scheme was based on <cit.>. For a like-for-like comparison, we adopt the same rotating 1D column as <cit.> (referred to as Lagrangian-1D VULCAN) and compare it against our 2D photochemical model (2D VULCAN) using identical chemistry. We chose the canonical hot Jupiter HD 189733 b for this comparison. In Lagrangian-1D VULCAN, the 1D column is set to travel around the equator over 2.435 days, following <cit.>, i.e. with a constant angular velocity of 1.493×10^-5 rad/s. To translate this rotation into the equivalent zonal-mean wind, we use a quasi-uniform zonal wind (faster at higher altitudes) in 2D VULCAN to match the same angular velocity. The zonal wind is 2430 m/s at 1 bar and scaled to each altitude according to the geometry. The equatorial plane is divided into 64 columns in longitude for both 2D VULCAN and Lagrangian-1D VULCAN. We used the same eddy diffusion coefficient (K_zz) for vertical mixing as <cit.>: K_zz = 10^7 × (P/1 bar)^-0.65 cm^2 s^-1. Other physical and chemical parameters were kept as similar as possible between the Lagrangian-1D and 2D VULCAN models. The comparisons between our Lagrangian-1D and 2D VULCAN for species displaying apparent longitudinal gradients are shown in Figure <ref>. 2D VULCAN and Lagrangian-1D VULCAN exhibit consistent results for species with either short (e.g. H) or long (e.g. CH4) chemical timescales. This indicates that 2D VULCAN correctly captures the processes of vertical mixing and horizontal transport. We find minor deviations in trace species (volume mixing ratios ≲ 10^-6). For instance, NH3 exhibits the most notable differences above 10^-4 bar.
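Circling back to the analytical benchmark at the start of this section, the following Python toy solves the same kind of balance on a single pressure level: a tracer with sinusoidal day-night production, linear loss, and eastward advection discretized with the first-order upwind scheme and CFL-limited timestep described above, relaxed to steady state and compared against the closed-form expression. The rates, grid, and the phase-shift relation ϕ = arctan(q) are assumptions of this simplified sketch (chosen only to give q = 1), not the actual VULCAN 2D configuration.

import numpy as np

# Analytical steady state on a single level for k = 1 sinusoidal forcing.
# phi = arctan(q) is the phase shift implied by this simplified
# advection-relaxation balance (an assumption of the toy, not a quoted result).
def chi_analytic(lam, P0_over_L0=1.0, q=1.0):
    phi = np.arctan(q)
    return P0_over_L0 * (1.0 + np.sin(lam - phi) / np.sqrt(1.0 + q**2))

# Numerical check: first-order upwind advection plus linear chemistry,
# integrated with a CFL-limited step until steady state.
n_lon = 64
lam = np.linspace(0.0, 2.0 * np.pi, n_lon, endpoint=False)
dlam = lam[1] - lam[0]

L0 = 1.0e-5                 # loss rate [1/s] (placeholder)
q = 1.0                     # ratio of chemical to advection timescale
omega = q * L0              # angular advection speed [rad/s] so that q = omega/L0
P0 = L0                     # gives P0/L0 = 1

chi = np.ones(n_lon)
dt = 0.5 * dlam / omega     # CFL-limited timestep for the upwind scheme
for _ in range(20000):
    dchi_dlam = (chi - np.roll(chi, 1)) / dlam   # eastward wind: upwind cell is k-1
    chi += dt * (P0 * (1.0 + np.sin(lam)) - L0 * chi - omega * dchi_dlam)

print("max deviation from the analytical solution:",
      np.max(np.abs(chi - chi_analytic(lam, q=q))))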
The discrepancy is likely due to the numerical integration: the first-order upwind scheme in 2D VULCAN has stronger numerical diffusion than the Rosenbrock method with second-order convergence in time used in Lagrangian-1D VULCAN. Nevertheless, the agreements are generally within a factor of two in the region below the 0.1 mbar level. Our 2D VULCAN, with the equivalent uniform wind, can successfully reproduce the results obtained by the Lagrangian-1D approach. §.§.§ Comparison with 3D transport (without photochemistry) We have validated VULCAN 2D through an analytical solution and a pseudo-2D approach, to ensure the accuracy of our physical and numerical implementation within the 2D framework. In our next step, we perform additional comparisons with a 3D GCM to see whether our 2D model effectively captures the primary transport process. <cit.> previously investigated the global transport of chemically active tracers (H2O, CH4, CO, and CO2) in a 3D simulation of WASP-43b. The chemical relaxation scheme <cit.> implemented in <cit.> replaces the thermochemical kinetics with a linear response according to the chemical timescale. We compare with the work by <cit.> because their chemical timescales are derived from the same chemical network implemented in VULCAN. Here, photochemistry in 2D VULCAN is switched off to allow an equivalent comparison. The equatorial region (± 20^∘) is divided into four quadrants in longitude: dayside (325^∘ – 45^∘), morning limb (45^∘ – 135^∘), nightside (135^∘ – 225^∘), and evening limb (225^∘ – 325^∘), following Figure 4 in <cit.>. We adopt the average temperature and zonal wind field in this equatorial region from the same GCM output as in <cit.>. Given that eddy diffusion coefficients (K_zz) estimated from mixing length theory, i.e. taking the product of the root-mean-square vertical velocity (w_rms) and the scale height (H), typically overestimate the effective vertical mixing for hot Jupiters <cit.>, we adopt two assumptions, K_zz = w_rms H and K_zz = 0.01 × w_rms H, to bracket the plausible range of K_zz. The mean molecular weight is fixed to 2.2387, following the same setting used in <cit.>. Figure <ref> compares the abundance distributions of CH4 (for solar composition) and H2O (for C/O = 2) in the four quadrants of the equatorial region computed by VULCAN 2D and <cit.>. Our 2D model with K_zz = 0.01 × w_rms H predicts vertical quench levels close to those in the 3D results. The preference for a weaker K_zz might be attributed to the high gravity of WASP-43b. The slight increase of H2O between 0.1 and 0.01 bar in the 3D GCM is likely associated with meridional transport from higher latitudes <cit.>, which is not accounted for in the 2D model. The overall good agreement between VULCAN 2D and the 3D results reinforces our 2D model's representation of the mixing process in the equatorial region of a hot Jupiter. § RESULTS: EQUATORIAL CHEMICAL DISTRIBUTIONS ON HD 189733 B AND HD 209458 B We first present an overview of the abundance distributions in the atmospheres of our fiducial simulations of HD 189733 b and HD 209458 b. We briefly compare our results to previous work by <cit.> on the same planets. To gain insight into the effects of vertical and horizontal transport, we examine the limiting cases where individual transport processes are isolated, following the approach of <cit.>. We then explore the sensitivity to the eddy diffusion coefficients converted from the GCM wind and discuss the results guided by the limiting cases. Lastly, we explore the global chemical transport when C/O ≳ 1 with HD 209458 b, motivated by recent high-spectral-resolution observations. §.§ Setup and overview We adopt 3D GCM output from SPARC/MITgcm <cit.> for HD 189733 b and HD 209458 b. Chemical equilibrium is assumed when calculating radiative transfer with the correlated-k distribution method. We did not include TiO and VO as shortwave opacity sources in our simulations, based on the lack of evidence of a thermal inversion from the reanalysis of emission spectra obtained by Spitzer and HST <cit.>. Except for excluding TiO and VO, we follow the same parameters for the GCM as described by <cit.>. The equatorial average temperatures and winds for our HD 189733 b and HD 209458 b simulations are shown in Figure <ref>. The zonal wind speeds evidently vary across longitudes at pressures above 0.1 bar. Both planets share similar thermal and dynamical structures in the equatorial region, though HD 209458 b exhibits higher temperatures and overall faster zonal wind speeds. We adopt the same global eddy diffusion coefficients as <cit.> for vertical mixing. These K_zz profiles are consistent with those estimated by the mixing length theory, as shown in Figure <ref>. We will further explore the sensitivity to the adopted eddy diffusion coefficients in Section <ref>. Figures <ref> and <ref> showcase the chemical abundance distributions simulated by VULCAN 2D in the equatorial regions of HD 189733 b and HD 209458 b. The vertical abundance distributions along longitude of several important species are summarized in Figure <ref>. In the observable regions of the atmospheres, species such as CH4, H, NH3, and HCN show considerable horizontal (longitudinal) gradients, whereas H2O and CO (not shown) are rather uniform everywhere. Species that are more susceptible to photodissociation, such as CH4 and NH3, are destroyed on the dayside but can partly recover on the nightside. Photochemical products, such as atomic H and HCN, build up on the dayside while being transported horizontally by the zonal jet at the same time. We find that the chemical transport exhibits qualitative similarities between HD 189733 b and HD 209458 b, with the main difference being that CH4 and NH3 are more favored on the cooler HD 189733 b. This behavior of chemical transport can be understood by comparing the horizontal and vertical dynamical timescales, as illustrated in Figure <ref>: horizontal mixing dominates at altitudes below the ∼0.1 mbar level, while vertical mixing dominates above it.
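The ∼0.1 mbar boundary quoted above follows from a simple timescale comparison. The Python sketch below evaluates a zonal advection timescale τ_h ≈ 2πR_p/u against a vertical mixing timescale τ_v ≈ H²/K_zz on a pressure grid and reports where they cross; all numbers are rough, illustrative hot-Jupiter values rather than the actual GCM output.

import numpy as np

# Rough, illustrative hot-Jupiter values (cgs units; not the actual model input)
R_p = 8.2e9            # planetary radius [cm]
u = 3.0e5              # characteristic zonal wind speed [cm/s]
T = 1200.0             # temperature [K]
mu = 2.3 * 1.66e-24    # mean molecular mass [g]
g = 2.1e3              # gravity [cm/s^2]
k_B = 1.38e-16         # Boltzmann constant [erg/K]

p = np.logspace(0, -8, 400)        # pressure [bar]
H = k_B * T / (mu * g)             # scale height [cm]
Kzz = 1.0e7 * p**(-0.65)           # eddy diffusion profile [cm^2/s] (rough)

tau_h = 2.0 * np.pi * R_p / u      # zonal advection timescale [s]
tau_v = H**2 / Kzz                 # vertical mixing timescale [s]

# Transition: vertical mixing becomes faster than zonal advection above this level
p_cross = p[np.argmin(np.abs(tau_v - tau_h))]
print("tau_h ~ %.1e s; vertical mixing takes over near p ~ %.1e bar" % (tau_h, p_cross))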
Within the horizontal mixing region, species with longer timescales tend to display more uniform global abundances, whereas compositional gradients in the upper atmosphere are mainly controlled by vertical mixing and photochemistry. §.§ Limiting cases with only horizontal transport and with only vertical mixingFollowing the pedagogical exercise in <cit.>, this section presents the simulated vertical abundance profiles of HD 189733 b and HD 209458 b under limiting cases to isolate individual dynamical effects. Specifically, we examine scenarios where we exclude horizontal transport and vertical mixing, respectively. Figures <ref> and <ref> first show the distributions of CH4 and H in these limiting cases, demonstrating the principles of horizontal transport and vertical mixing. As expected, vertical mixing tends to homogenize the vertical gradient of composition, while horizontal transport tends to homogenize the horizontal gradient. However, the influence of transport processes on a species depends on its chemical properties, which can be broadly categorized into two groups: For species that are efficiently recycled and replenished from the deep layers such as CH4, their distribution resembles the vertical-mixing case. For photochemical products like H that are produced efficiently in the upper atmosphere, their distribution resembles closer to the horizontal transport case.Detailed comparisons between the nominal 2D models, the vertical-mixing case, the horizontal-transport case, and chemical equilibrium at different longitudinal locations are presented in <ref> and <ref>. Examining the quenched species such as CH4 and NH3, we find that the vertical mixing and horizontal transport cases have the same quench levels. This is not surprising, as these quench levels correspond to the transition point associated with the same chemical timescales, regardless of the dynamical process. However, the abundance distributions above the quench levels differ substantially between the vertical mixing and horizontal transport scenarios. This is because vertical mixing makes chemical abundances quenched from the hot deep layers, while horizontal transport results in quenching from the hot and irradiated dayside, as also noted in <cit.>. Owing to this nature of horizontal transport, the substellar abundance profiles closely resemble those from the vertical mixing case (the close match between the solid and dashed lines) for both planets. Taking a closer look at Figures <ref> and <ref>, species with long chemical timescales like CH4 and NH3 closely follow the vertical mixing distribution at hotter longitudes, the substellar point and evening limb, up to about 10^-4 bar. On the other hand, at the colder antistellar point and morning terminator, the equilibrium abundances differ substantially from the dayside, causing the abundance distribution predicted by the vertical-mixing model to diverge from the nominal 2D model (with both vertical and horizontal transport) at these cooler locations. These results shed light on the limitation of applying 1D models to interpret observations that probe the chemical properties of different regions.One crucial feature of horizontal transport is transporting the photochemical products from the dayside to the nightside <cit.>. For both planets, atomic H produced by photolysis can penetrate into the nightside, even reaching regions around the antistellar point where no UV photons are available. 
Transport of atomic H plays a key role in reacting with CH4 and NH3 to form HCN on the nightside. The abundances of CH4 and NH3 in Figures <ref> and <ref> fall between those in the vertical mixing and horizontal transport cases, highlighting the importance of considering both mixing processes. To conclude our limiting-case analysis, we note that for hot Jupiters similar to HD 189733 b and HD 209458 b, a 1D model including vertical mixing serves as a fairly good approximation for the dayside. Given that the equatorial jet efficiently transports heat, 1D models generally capture the hotter evening limb better than the cooler morning limb in the pressure regions most relevant for transmission spectroscopy. We will discuss the limb asymmetry arising from horizontal transport further in Sections <ref> and <ref>. §.§ Comparison with <cit.> There are several differences in modeling assumptions and setups between our models of HD 189733 b and HD 209458 b and those presented in <cit.>: (i) <cit.> assume uniform zonal winds, while we adopt longitude- and pressure-dependent wind profiles from the GCM; (ii) shortwave opacity sources of TiO and VO, which are responsible for generating a thermal inversion, are included in <cit.> but not in our model for HD 209458 b; (iii) <cit.> use the chemical scheme of <cit.>, while our study adopts the chemical scheme of <cit.>. Despite these differences, the results from our nominal 2D VULCAN and from the limiting cases are qualitatively consistent with the pseudo-2D outcomes presented in <cit.>. The major difference lies in the thermal inversion of HD 209458 b in <cit.>, making the dayside temperature at low pressures ∼ 1000 K higher than that in our model. Consequently, <cit.> predict lower levels of CH4, NH3, and HCN on HD 209458 b. The hotter dayside also leads to a lower equilibrium abundance of CO2, creating the notable CO2 day-night contrast seen in <cit.>. Overall, apart from the temperature inversion included in <cit.>, we find qualitative agreement in terms of quenching and transport behaviors. We will discuss the implications of assuming uniform zonal winds (i.e. the pseudo-2D model) in Section <ref> in more detail. §.§ Sensitivity to Kzz While VULCAN has the capability to employ vertical advection instead of eddy diffusion <cit.>, it is numerically more stable to employ diffusion in the vertical direction in conjunction with molecular diffusion. One apparent caveat is that the parameterization of vertical mixing with eddy diffusion has been a long-standing uncertainty in atmospheric modeling <cit.>. To test the sensitivity to uncertainties in estimating the eddy diffusion coefficient (K_zz), we vary the eddy diffusion coefficient profile by an order of magnitude in our model of HD 189733 b, both increasing and decreasing it. This range roughly spans the uncertainty considered in the previous literature <cit.>. The resulting abundances of CH4 and HCN in our HD 189733 b model with varying vertical eddy diffusion coefficients are shown in <ref>. Stronger vertical mixing makes the transition to a vertical-mixing-dominated region occur at a higher pressure level, as expected. This allows morning-evening asymmetries to emerge at somewhat higher pressures when horizontal transport becomes less efficient compared to vertical mixing. The pressure levels where the morning-evening asymmetry becomes notable are generally between 1 and 0.05 mbar, as summarized in Table <ref>.
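The dependence of the quench level on K_zz can be illustrated with a toy calculation: the quench point sits where the chemical timescale equals the vertical mixing timescale H²/K_zz. In the Python sketch below the chemical timescale is an arbitrary power law (a placeholder, not a real reaction network), so only the trend with K_zz is meaningful.

import numpy as np

p = np.logspace(1, -6, 300)             # pressure [bar]
H = 2.0e7                               # scale height [cm], assumed constant here
tau_chem = 1.0e2 * p**(-1.5)            # toy chemical timescale, longer at low pressure [s]

def quench_pressure(Kzz_1bar, slope=-0.65):
    """Pressure where the toy chemical timescale equals H^2 / Kzz(p)."""
    tau_mix = H**2 / (Kzz_1bar * p**slope)
    return p[np.argmin(np.abs(tau_chem - tau_mix))]

for K0 in (1.0e6, 1.0e7, 1.0e8):        # weaker, nominal, and stronger mixing
    print("Kzz(1 bar) = %.0e cm^2/s -> quench near %.1e bar" % (K0, quench_pressure(K0)))

Raising K_zz moves the crossover to higher pressure, which is the trend described in the preceding paragraph.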
Increased vertical mixing also results in slightly higher quenched abundances, while weaker mixing leads to lower abundances from horizontal quenching from the dayside. Despite these effects, the global distribution only mildly depends on the choice of K_zz within the explored range. For HD 189733 b and HD 209458 b, the bulk of the atmosphere below the millibar level still remains in the horizontal-transport-dominated regime, even when considering strong eddy diffusion. §.§ Super-solar C/O of HD 209458 b §.§.§ Background Recent transit observations using high-resolution spectroscopy of HD 209458 b reported detections of H2O, CO, HCN, CH4, NH3, and C2H2 at statistically significant levels <cit.>. Notably, the presence of C2H2 and HCN is indicative of a super-solar carbon-to-oxygen ratio (C/O). While the interpretation by <cit.> is driven by cross-correlating with grids of models assuming thermochemical equilibrium, C2H2 and HCN can actually be efficiently produced by photochemistry out of equilibrium. To explore the chemical transport under carbon-rich conditions, we run the same 2D VULCAN model for HD 209458 b but with a C/O of 1.05, following the best-fit value determined by <cit.>. We keep the same solar metallicity for comparison with our solar C/O model, since metallicity is not well constrained in <cit.>. §.§.§ Enhanced hydrocarbons and horizontal-transport-induced limb asymmetry The top and middle panels of Figure <ref> depict the same mixing-ratio distributions as in Figure <ref> but for C/O = 1.05. When C/O exceeds unity, H2O loses its dominance to CH4 due to the lack of oxygen available after CO <cit.>. What is intriguing is the production of water resulting from the photolysis of CO in the dayside upper atmosphere above 0.1 mbar. The same process also operates in a solar C/O condition <cit.>, but the increase of H2O is more pronounced here, owing to its lower equilibrium abundance. Compared to solar C/O, C2H2 abundances are significantly enhanced. Moreover, C2H2 exhibits substantial zonal variations, spanning several orders of magnitude across the equator between the 1 bar and 0.1 mbar levels (bottom panel of Figure <ref>). Although CH4 is rather uniform below 10^-4 bar across the planet, several hydrocarbons show notable compositional gradients in the zonal direction, as seen in Figures <ref> and <ref>. The C2H2 abundance on the evening limb is about 100–1000 times higher than that on the morning limb, whereas C2H4 peaks between 0.1 and 10^-3 bar on the morning limb with strong vertical variations. The morning-evening limb asymmetry in C2H2 is driven by the temperature difference and is manifested because of its relatively short chemical timescale. C2H2 is favored at the hotter evening limb, where the temperature is about 200–300 K higher than at the morning limb, as evident in the profiles without zonal wind in Figure <ref>. In this region, the main destruction pathway for C2H2 proceeds through a sequence of unsaturated hydrocarbons:
C2H2 + H ->[M] C2H3
C2H3 + H2 -> C2H4 + H
C2H4 + H ->[M] C2H5
C2H5 + H -> CH3 + CH3
2 × (CH3 + H2 -> CH4 + H)
net: C2H2 + 3 H2 -> 2 CH4.
The timescale of C2H2 can be estimated from the rate-limiting steps, either the formation of CH3 or of C2H3 in the above pathway. At the 1 mbar pressure level, the lifetime of C2H2 on the nightside ranges from 10^3 to 10^5 seconds.
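The net reaction quoted at the end of this pathway can be verified mechanically by summing the elementary steps and cancelling intermediates. The short Python sketch below does that bookkeeping (the CH3 + H2 step is counted twice, as in the scheme above).

from collections import Counter

# Each step as (reactants, products); the last step appears twice.
steps = [
    (["C2H2", "H"],  ["C2H3"]),          # termolecular, third body M omitted
    (["C2H3", "H2"], ["C2H4", "H"]),
    (["C2H4", "H"],  ["C2H5"]),          # termolecular, third body M omitted
    (["C2H5", "H"],  ["CH3", "CH3"]),
    (["CH3", "H2"],  ["CH4", "H"]),
    (["CH3", "H2"],  ["CH4", "H"]),
]

net = Counter()
for reactants, products in steps:
    for r in reactants:
        net[r] -= 1
    for pr in products:
        net[pr] += 1

# Species with non-zero counts survive cancellation; negatives are net reactants.
print({sp: c for sp, c in net.items() if c != 0})
# -> {'C2H2': -1, 'H2': -3, 'CH4': 2}, i.e. C2H2 + 3 H2 -> 2 CH4

Only C2H2, H2, and CH4 survive the cancellation, reproducing the net reaction; atomic H is consumed and regenerated in equal amounts and so acts as a catalyst in this sequence.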
This C2H2 lifetime is comparable to the horizontal transport timescale (Figure <ref>), explaining the compositional gradient that C2H2 displays. Although C2H4 also has a higher equilibrium abundance on the warmer evening limb, photochemistry and horizontal transport lead to an accumulation of C2H4 on the cooler morning limb instead. The combined effects result in a peak C2H4 abundance on the morning limb that exceeds the abundance on the evening limb, in contrast with the lower morning C2H4 abundance predicted in the absence of zonal transport (bottom panel of Figure <ref>). The photolysis of CH4 on the dayside produces the methyl radical (CH3), the precursor of other hydrocarbons. CH3 flows into the nightside, where the lower temperature promotes the formation of C2H6, C2H5, and C2H4, since the recombination of CH3 into C2H6 is exothermic and kinetically favored at cooler temperatures. With the aid of horizontal transport, the photochemically produced CH3 is able to reach the nightside and initiate the production of hydrocarbon species. The hydrocarbons produced on the nightside are then carried to the morning limb by the zonal wind, leading to the C2H4 peak. Similar behavior is found in the C2H6 distribution as well, but at significantly lower abundances. For completeness, the equatorial abundance distributions as a function of longitude and pressure for several key species can be found in Figure <ref>. § WHEN DOES THE ASSUMPTION OF A UNIFORM ZONAL JET IN PSEUDO-2D MODELS BREAK DOWN? The equatorial jet is a robust feature of tidally-locked planets that receive steady day-night thermal forcing <cit.>. However, the equatorial jet transitions to a day-to-night flow when the radiative timescale or drag timescale becomes short <cit.>. In the case of cooler sub-Neptunes or nonsynchronously rotating planets, the zonal wind in the equatorial region can also develop a more complex structure, with winds changing direction across pressure levels <cit.>. Here, we examine how horizontal transport changes on a hot Jupiter with strong frictional drag dominated by a day-to-night flow. We adopt the output for T_eq = 1600 K with strong drag (τ = 10^4 s) from <cit.> as our fiducial atmosphere with a day-to-night flow. Our goal is to determine when a pseudo-2D approach with uniform flow remains valid and when it breaks down. §.§ Comparisons within the superrotating regime The equatorial jet induced by the stationary day-night heating typically has a Gaussian-like vertical profile, gradually diminishing towards zero in both the upper and deeper layers of the atmosphere, as depicted in Figure <ref>. We use HD 189733 b as an example of circulation characterized by equatorial superrotation for the comparison between the 2D and pseudo-2D approaches. For the uniform zonal wind, the pseudo-2D model needs to adopt either the peak jet speed <cit.> or the averaged wind velocity over the jet region <cit.>. For our pseudo-2D model of HD 189733 b, we adopt a mean zonal wind of 1736 m/s at 1 bar from the GCM as the uniform zonal wind. Figure <ref> shows the comparison of CH4 distributions, demonstrating differences caused by the simplified transport in the pseudo-2D approach versus the 2D model. In the deep region where thermochemical equilibrium holds, the actual wind pattern is effectively irrelevant. The CH4 distribution transitions from a horizontally homogenized regime to a vertical-mixing-dominated regime at the same pressure level, around 10^-4 bar, in both models.
In the upper atmosphere above 10^-4 bar, the pseudo-2D model has slightly slower horizontal transport (Figure <ref>) and predicts slightly more vertically mixed profiles compared to those in our nominal 2D model, where stronger winds around 1 mbar level at certain longitudes can alter the distribution. Similar trends are seen in other species as well. Despite these minor differences, we find the pseudo-2D approach to be a valid assumption when a broad superrotation jet is present. §.§ comparisons within the day-to-night flow regimeNext, we explore the scenario of a hotter hot Jupiter with strong radiative <cit.> or magnetic drag <cit.>, where the global circulation has changed from equatorial superrotation to day-to-night flow <cit.>. Figure <ref> compares the abundance distributions of CH4 and HCN in the equatorial region of the fiducial strong-drag hot Jupiter atmosphere with T_eq = 1600 K simulated by our nominal 2D model and pseudo-2D model. It is evident that the day-night flow leads to a symmetrical distribution, differing from the distribution governed by uniform zonal winds. This discrepancy is most pronounced around the morning limb, owing to the distinctive transport dynamics at play. In the nominal model, nightside-to-morning-limb advection occurs, contrasting with the morning-limb-to-nightside advection in the model with uniform zonal winds. In the nominal 2D model, the CH4 and HCN abundances remain low on both morning and evening limbs. Conversely, when assuming uniform eastward winds, CH4 and HCN exhibit higher abundances around the morning limb due to nightside transport.The disparity in abundance profiles across longitudes from the nominal model and pseudo-2D model is further illustrated in Figure <ref>. This comparison demonstrates that, apart from the dayside region near the substellar point, the assumption of uniform zonal winds in the pseudo-2D framework can yield orders of magnitude differences when the dominant circulation is featured by a day-to-night flow. The pseudo-2D approach is only suitable for a tidally locked atmosphere with moderate drag such that the circulation is still within the equatorial superrotation regime. § SYNTHETIC SPECTRA (FOR THE EVENING AND MORNING LIMBS)We present synthetic transmission spectra for the morning and evening limbs based on our 2D model results. The transmission spectra are computed using PLATON <cit.>. including opacity sources of CH4, CO, CO2, C2H2, H2O, HCN, NH3, O2, NO, OH, C2H4, C2H6, H2CO, NO2, and collision-induced absorption (CIA) of H2-H2 and H2-He. We will focus on the observational impact of horizontal transport and the differences between the evening and morning limbs. It is worth noting that the distribution of clouds and hazes can also significantly influence limb asymmetry <cit.>. For the scope of this study, we will leave the effects of clouds to future work and will solely delve into the chemical transport within our cloud-free models.Figure <ref> illustrates the synthetic transmission spectra with and without horizontal transport for HD 189733 b and HD 209458 b (including solar and super-solar C/O). For HD 189733 b, 1D models (i.e. without horizontal transport) produce methane features on the cooler morning limb that are absent on the evening limb (Figure <ref>). However, this compositional gradient is readily homogenized once the zonal transport is included. For HD 209458 b (solar C/O), methane abundance remains too low in both 1D and 2D models (below ppm level; see Figure <ref>). 
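A back-of-the-envelope estimate helps translate a limb-restricted absorber into an observable signal. The Python sketch below uses the standard rule of thumb that one scale height of extra effective radius changes the transit depth by about 2 R_p H / R_*², and halves that contribution when the feature is present on only one of the two limbs. The planetary and stellar parameters are rounded, approximate values for HD 209458 b, and the assumed number of scale heights is purely illustrative.

import numpy as np

# Approximate system parameters for HD 209458 b (rounded, illustrative)
R_star = 1.16 * 6.957e8     # stellar radius [m]
R_p = 1.36 * 7.149e7        # planetary radius [m]
T = 1400.0                  # limb temperature [K]
mu = 2.3 * 1.66e-27         # mean molecular mass [kg]
g = 9.4                     # surface gravity [m/s^2]

H = 1.380649e-23 * T / (mu * g)             # scale height [m]
depth_per_H = 2.0 * R_p * H / R_star**2     # transit-depth change per scale height

n_H = 2.0   # assumed strength of the feature, in scale heights (illustrative)
full = n_H * depth_per_H            # feature present on both limbs
one_limb = 0.5 * full               # feature present on the morning limb only

print("scale height ~ %.0f km" % (H / 1e3))
print("signal (both limbs) ~ %.0f ppm, one limb only ~ %.0f ppm"
      % (full * 1e6, one_limb * 1e6))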
Consequently, the influence of horizontal transport on the spectra of HD 209458 b with solar C/O is negligible. Instead, the dominant molecules that show up in the spectra are H2O, CO, and CO2, all of which exhibit relatively uniform equilibrium abundances throughout the planet. As a result, these gases do not contribute to limb asymmetry, whether horizontal transport is in play or not. In the case of HD 209458 b with super-solar C/O, CH4 took over H2O to make the strongest spectral features at 2.3, 3.3-3.9, 5.5-6.6, and 7–8.5 μm. As CH4 becomes the predominant carbon-bearing molecule, it also reaches a uniform distribution across the entire planet. The major limb asymmetry is the presence of C2H4 absorption on the morning limb but not on the evening limb, due to the horizontal transport of C2H4 from the nightside to the morning limb. § DISCUSSIONS AND CONCLUSIONS In this paper, we present the 2D version of the photochemical model VULCAN. We first validate VULCAN 2D with analytical solutions. VULCAN 2D successfully reproduces the special case equivalent to the pseudo-2D approach with uniform winds and also demonstrates consistent results with a 3D GCM <cit.>. We use limiting cases to demonstrate the distinct effects of the vertical and horizontal mixing processes. For typical hot Jupiters, such as HD 189733 b and HD 209458 b, we find most of the atmosphere below 1–0.1 mbar is within the horizontal transport-dominated region, where zonal advection prevails over vertical mixing. In the upper atmosphere above this region, photochemistry and vertical mixing control the composition. We explore the sensitivity to the parametrization of vertical mixing and find a mild dependence in the abundance distribution of our hot Jupiter models. We note that stronger vertical mixing can, in principle, promote morning-evening asymmetry.For HD 189733 b, the morning-evening limb asymmetry in CH4 predicted by 1D models is readily homogenized when horizontal transport is included. For HD 209458 b with solar C/O, the transmission spectra exhibit no limb asymmetry attributed to the composition due to the paucity of CH4. However, with super-solar C/O, horizontal transport results in notable limb asymmetries in hydrocarbons (C2H4 in this case). For atmospheres with circulation dominated by an equatorial jet, we show that the pseudo-2D (rotation 1D column) approach can reasonably capture the transport, but the assumption of uniform flow breaks down for day-to-night circulations under stronger drag and pseudo-2D models can yield orders of magnitude differences. The 2D modeling framework highlights the need to consider both horizontal and vertical transport when interpreting the compositions from transmission observations probing the limbs. The 2D framework developed here bridges the gap between traditional 1D photochemical kinetics models and 3D general circulation models that typically exclude chemical kinetics. Future directions include incorporating sulfur chemistry and applying the model to the meridional plane or tidally locked coordinate <cit.> to explore the role of overturning circulation. In this work, we did not explore the effects of vertical advection. The upward or downward transport can lead to significantly different distributions, as already demonstrated in the ammonia distribution on Jupiter by a 1D model <cit.>. <cit.> also showed how slow overturning winds could transport much more heat than fast zonal jets on tidally locked planets with weak temperature gradients. 
A future model development should represent both of these processes.By elucidating how the atmosphere composition is regulated by global circulation, this 2D modeling approach will pave the way for self-consistent and more comprehensive models and provide a useful tool to enhance the capacity of 3D GCMs for interpreting observations. Part of this work is supported by the European community through the ERC advanced grant EXOCONDENSE (#740963; PI: R.T. Pierrehumbert). S.-M.T. acknowledges support from NASA Exobiology Grant No. 80NSSC20K1437 and the University of California at Riverside. X.Z. acknowledges support from the NASA Exoplanet Research Grant 80NSSC22K0236 and the NASA Interdisciplinary Consortia for Astrobiology Research (ICAR) grant 80NSSC21K0597. Financial support to R.D. was provided by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant to C.Goldblatt. aa | http://arxiv.org/abs/2310.17751v1 | {
"authors": [
"Shang-Min Tsai",
"Vivien Parmentier",
"João M. Mendonça",
"Xianyu Tan",
"Russell Deitrick",
"Mark Hammond",
"Arjun B. Savel",
"Xi Zhang",
"Raymond T. Pierrehumbert",
"Edward W. Schwieterman"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20231026194519",
"title": "Global Chemical Transport on Hot Jupiters: Insights from 2D VULCAN photochemical model"
} |
Quantum simulation of the tricritical Ising model in tunable Josephson junction ladders Michele Burrello======================================================================================= This paper delineates the formulation and verification of an innovative robotic forearm and elbow design, mirroring the intricate biomechanics of human skeletal and ligament systems. Conventional robotic models often undervalue the substantial function of soft tissues, leading to a compromise between compactness, safety, stability, and range of motion. In contrast, this study proposes a holistic replication of biological joints, encompassing bones, cartilage, ligaments, and tendons, culminating in a biomimetic robot. The research underscores the compact and stable structure of the human forearm, attributable to a tri-bone framework and diverse soft tissues. The methodology involves exhaustive examinations of human anatomy, succeeded by a theoretical exploration of the contribution of soft tissues to the stability of the prototype. The evaluation results unveil remarkable parallels between the range of motion of the robotic joints and their human counterparts. The robotic elbow emulates 98.8% of the biological elbow's range of motion, with high torque capacities of 11.25 Nm (extension) and 24 Nm (flexion). Similarly, the robotic forearm achieves 58.6% of the human forearm's rotational range, generating substantial output torques of 14 Nm (pronation) and 7.8 Nm (supination). Moreover, the prototype exhibits significant load-bearing abilities, resisting a 5kg dumbbell load without substantial displacement. It demonstrates a payload capacity exceeding 4kg and rapid action capabilities, such as lifting a 2kg dumbbell at a speed of 0.74Hz and striking a ping-pong ball at an end-effector speed of 3.2m/s. This research underscores that a detailed anatomical study can address existing robotic design obstacles, optimize performance and anthropomorphic resemblance, and reaffirm traditional anatomical principles. § INTRODUCTION In recent years, significant advancements in the robotics field have focused on developing and controlling humanoid robots for integration into daily life. These robots are designed to interact with humans and perform a variety of tasks. One envisioned scenario involves physical collaboration between humans and robots, which has long captivated the scientific community. Human-centred and ergonomic design are crucial aspects of engineering, and when humans interact with robots, safety and system efficiency are the primary considerations. The pursuit of a biomimetic appearance resembling the human body is also a key direction of effort in this field. Numerous studies have focused on developing control architectures for ergonomic physical human-robot interaction<cit.>. However, the hardware design of humanoid robots has rarely been considered for optimization in collaborative actions and is often assumed as a given. This paper contributes to the development of optimal biomimetic robotic elbow and forearm designs, grounded in human anatomical structures, to enhance performance and ergonomics in human-robot collaborative tasks.The elbow and forearm are crucial components of the upper limb. In traditional robotic arm designs, the forearm typically features a geared motor directly connected to the forearm output rotation, with two rotating joints in series to mimic elbow flexion/extension and forearm rotation<cit.>. These designs offer several advantages, such as a large range of motion<cit.>. 
Ultra-powerful motors can generate considerable torque by increasing motor and limb size<cit.>. They can achieve exceptional strength and ultra-high accuracy using materials like stainless steel, aluminium, titanium, hinged joints, high-precision gearboxes, and advanced manufacturing technology. Moreover, these designs can simplify the design, manufacturing, and maintenance processes. However, balancing compactness and high output performance can be challenging since the motor needs to be installed near the joint for optimal efficiency. For example, using a small motor for forearm rotation may result in insufficient output torque, while an excessively large motor can lead to a bulky forearm, taking up space within the forearm structure and complicating the installation of muscles responsible for hand joint movements when using remote tendon control. On the other hand, achieving compactness in the forearm often requires local control of hand actuation, with all hand actuators located inside the hand, making it difficult to generate larger output torque at finger joints. Moreover, as human-robot interaction requirements increase, the rigidity and power of such robotic systems can pose safety risks during interactions. Additionally, many robots lack the natural, human-like aesthetics needed for comfortable interaction.The structure of the human forearm and elbow joint has several advantages. Firstly, its compact nature accommodates robust muscles within a small forearm diameter, enabling intricate and precise hand and wrist movements, hence contributing to manual dexterity. Secondly, its stability, reflected in mobile yet resilient joints that resist easy dislocation, permits the handling of heavy tasks without elbow and forearm damage, thereby facilitating significant load-bearing capacity. The third attribute pertains to safety and compliance. Unlike conventional rigid joints, human joints exhibit damping and elastic properties, thereby offering variable joint stiffness. A salient feature of biological joints is their capacity to dislocate under extreme external forces and their inherent self-recovery mechanisms. This process echoes the actions of an orthopaedic surgeon in treating a dislocated human joint. The human joints have the capability to heal over time, traditional robots, on the other hand, require external intervention for repairs. For enhanced operator safety, robotic joints designed to permit controlled dislocation, and subsequently be straightforward "reset", facilitate rapid repairs and swift return to operational status, all while safeguarding the user. The potential of these characteristics to enhance robotics and automation systems warrants further exploration. Incorporating this feature into a robotic arm can enhance operator safety, appropriate in a human-robot interaction environment. Consequently, many researchers have developed biomimetic designs that emulate the human structure <cit.>. Some of these designs have adopted a tendon-driven approach akin to the biological arm, leveraging the physical properties of tendons to mimic the inherent compliance and dynamics of musculoskeletal characteristics. Additionally, some have achieved a more closely resembling appearance to a biological arm. While these designs successfully replicate basic human forearm and elbow functionality, they often only partially represent human anatomy, particularly in terms of soft tissues. 
Insufficient soft tissue representation may result in structural stability issues, such as lateral forearm stability. Refined soft tissue representation can improve load-carrying ability, impedance, and compliance, and provide flexible limitations when the joint reaches its extreme position. Incorporating soft tissues allows the joint to recover to a certain extent when dislocated by extreme external forces, significantly enhancing the safety of human-robot interaction. Soft tissues also play a crucial role in introducing damping to the entire system, effectively mitigating oscillations that may occur during mechanical motion.This study investigates the human forearm and elbow to understand how the three bones (humerus, ulna and radius) achieve a wide range of motion while maintaining axial and lateral load-carrying ability, compactness, and stability. This insight is then applied to the design of a biomimetic robot. Furthermore, a biomimetic actuation method was implemented in the robot to explore whether adopting a human-like actuation scheme could enhance joint output while preserving compactness. The research explores the potential advantages of this approach from an academic perspective to determine whether employing these structures can optimize the biomimetic robot's design and address inherent issues. § RELATED WORK§.§ Existing robotic arm designs In traditional robotic arms, the elbow allows flexion/extension with a split forearm connected to the elbow for forearm rotation<cit.>. This design paradigm has maintained preeminence in the realm of robotic arms, largely owing to its straightforward, efficient architecture that streamlines design, manufacturing, and maintenance procedures, in addition to facilitating the implementation of control algorithms. These designs often employ rigid components such as bearings and shafts to achieve a wide range of motion and stabilize the joints. In contrast, the human elbow relies on a three-bone structure, wherein the radius rotates around the ulna to accomplish forearm rotation, and both the radius and ulna rotate around the humerus to achieve elbow flexion/extension. This distinctive configuration attains two rotational joints within a compact structure, adeptly balancing mobility and stability without the dependence on shafts.Drawing from the human skeletal system, several research groups have proposed robotic elbow and forearm mechanisms reflecting the three-bone structure of the human arm <cit.>. These solutions, employing conventional hinge and ball-and-socket articulations, mirror the human forearm structure, facilitating the rotation of the radius around the ulna. Despite their bio-inspired designs, these systems predominantly resort to rigid architectures to imitate articulated joints, thus achieving humanoid joint motions. While they successfully address certain limitations of conventional designs, such as compactness and mobility, with <cit.> even simulating human ligaments for enhanced safety, they often neglect to thoroughly investigate or exploit the human body's innate structural advantages. These designs generally provide a simplified representation of bodily joint functions, mostly overlooking the contributions of the human body's soft tissues. Their exclusion may lead to stability concerns or increased joint friction under substantial loading. Moreover, safety remains a substantial challenge in human-robot interaction scenarios. 
Our research delves into the intricacies of human joint structures, investigating their inherent mechanical properties, and utilizes this knowledge to propose an innovative elbow and forearm design addressing the aforementioned challenges. We initiate our exploration with an introduction to the anatomical structure of the human elbow and forearm.§.§ Anatomy study of biological elbow and forearmThe elbow joint is a vital component of the upper extremity, serving two primary functions in the human body. Firstly, it operates as a hinge joint, facilitating forearm flexion/extension around the humerus, which is essential for activities such as feeding, reaching, throwing, and personal hygiene. Secondly, it functions as a rotational joint in conjunction with the two radioulnar joints, enabling forearm supination/pronation, which can occur independently or simultaneously with elbow flexion/extension. Pronation and supination of the forearm enable the hand to generate rotating torque, allowing it to perform tasks such as screwing, given its ability to operate efficiently in narrow spaces and generate omnidirectional torque. This mechanism is achieved through the rotation of the radius around the fixed ulna. The elbow comprises two individual joints. The first is the humeroulnar joint (Fig. <ref>(a)), located between the humerus and ulna, often considered a hinge joint in robotic designs. The typical range of motion for flexion/extension at the humeroulnar joint spans from 0^∘ to 146^∘. The second joint, the humeroradial joint (Fig. <ref>(b)), can be considered a ball-and-socket joint; it is situated between the humerus and radius and allows both flexion/extension and rotation. The forearm encompasses two joints: the proximal radioulnar joint (PRUJ), positioned between the proximal ends of the ulna and radius (Fig. <ref>(b)), and the distal radioulnar joint (DRUJ), located at the distal ends of the ulna and radius (Fig. <ref>(d)). Both the PRUJ and DRUJ facilitate pronation and supination of the forearm, which occur around an axis defined by a line extending from the centre of the radial head's fovea to the distal ulna head <cit.>, represented by the red line in Fig. <ref>(b). Pronation and supination are simple motions involving the radial head pivoting on the ulna, while the distal end of the radius glides around a stationary ulna. Primary stability of the humeroulnar joint is ensured by the two collateral ligaments of the elbow: the Medial Collateral Ligament (MCL) and the Lateral Collateral Ligament (LCL). The MCL, a critical element in maintaining elbow joint stability, comprises three primary components: anterior, posterior, and transverse bundles, as depicted in Fig. <ref>(a). The anterior and posterior bundles do not originate directly from the elbow rotation axis, causing variable ligament tension during flexion and extension <cit.>. Specifically, the anterior bundle experiences tension during elbow extension, while the posterior bundle is tensioned during flexion <cit.>. The LCL complex, another pivotal stabilizer of the elbow joint, is illustrated in Fig. <ref>(b). Constituting the Lateral Ulnar Collateral Ligament (LUCL), the Radial Collateral Ligament (RCL), and the annular ligament, the LCL complex maintains consistent tension throughout the elbow's motion, given the central origin of the LUCL and RCL in relation to elbow flexion/extension <cit.>.
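The statement that bundles originating away from the rotation axis see angle-dependent tension can be made quantitative with simple planar geometry. In the Python sketch below, each bundle runs from a humeral origin offset from the elbow rotation centre to a common insertion on the ulna that rotates with flexion; the offsets and lengths are made-up illustrative numbers, not measured anatomy, and the same construction applies to the robotic MCL segments described later (origins above, at, and below the rotation centre).

import numpy as np

def bundle_length(theta_deg, origin_y_mm, r_insert_mm=30.0):
    """Distance from a humeral origin at (0, origin_y) relative to the elbow
    rotation centre to an ulnar insertion r_insert away along the forearm,
    which points straight down at full extension (theta = 0) and swings
    forward as the flexion angle theta increases."""
    th = np.radians(theta_deg)
    insertion = r_insert_mm * np.array([np.sin(th), -np.cos(th)])
    return float(np.linalg.norm(insertion - np.array([0.0, origin_y_mm])))

# Illustrative origins (mm): one proximal to the axis, one at it, one distal to it.
origins = {"anterior-like (origin above centre)": +8.0,
           "central (origin at centre)": 0.0,
           "posterior-like (origin below centre)": -8.0}

for theta in (0.0, 70.0, 146.0):   # full extension, mid-flexion, full flexion
    lengths = ["%s: %.1f mm" % (name, bundle_length(theta, dy))
               for name, dy in origins.items()]
    print("flexion %3.0f deg -> " % theta + "; ".join(lengths))

The bundle anchored above the centre is longest (and hence tautest) in full extension, the one anchored below the centre is longest in full flexion, and the one anchored at the centre keeps a constant length, reproducing the tension pattern described above for the anterior and posterior bundles and the constant-length behaviour used for the middle segment of the robotic MCL described later.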
The annular ligament encapsulates the radial head and is anchored to the ulna, with the RCL's connection to the annular ligament providing further stabilization to the radial head <cit.>. The interosseous membrane (IOM) plays a crucial role in connecting the ulna and radius throughout the length of the forearm (Fig.<ref>(a) (b))<cit.>. It is made up of three main parts: the distal membranous portion (DOB), the middle portion, and the proximal portion. The middle portion can be further divided into the central band (CB) and the accessory band (AB). The IOM performs several critical functions. First, it acts as a pivot for forearm rotation and connects the radius to the ulna. Second, it improves the stability of the DRUJ <cit.>, ensuring longitudinal stability for the forearm. Most importantly, research has indicated that the IOM can be viewed as a load transfer system that distributes the load from the radius to the ulna<cit.>. Approximately 80% of the compression force crossing the wrist is directed through the radiocarpal joint (Fig.<ref>(a)), with the remaining 20% crossing the distal side of the wrist via the soft tissues in the 'ulnocarpal space' <cit.>. As shown in Fig.<ref>(a), the compression force acting on the radius from the wrist can be distributed to the ulna via the IOM, which helps reduce the load on the radial head and stabilizes the forearm against radioulnar bowing or splaying by drawing the ulna and radius towards the interosseous space. Similarly, As shown in Fig.<ref>(b), when a distracting force is applied to the distal radius from the wrist, this force tightens the fibres of the IOM, transferring the load to the ulna and limiting the load transferred to the proximal radius to be distributed across its limited articular surface area. As a result, IOM distributes the axial force from the radius to the ulna, and effectively disperses it across multiple joints (including the DRUJ, PRUJ, and humeroulnar joint), instead of transferring it directly to the humeroradial joint. This mechanism helps prevent dislocation or excessive stress in the humeroradial joint.The DRUJ is a critical component of the forearm and wrist, and the triangular fibrocartilage complex (TFCC) plays a vital role in its stability (Fig.<ref>(c)) <cit.>. Composed of the palmar radioulnar ligament (PRUL), dorsal radioulnar ligament (DRUL), and extensor carpi ulnaris tendon (ECU), the TFCC helps maintain the proper alignment and function of the joint. §.§ Performance of the biological elbow and forearm The average percentage of total body weight and length of forearm is 1.72% and 15.85% <cit.>. Table <ref> presents the range of motion and output torques of the biological joints. Taking into account the dimensions and weight of the human arm, it becomes evident that the human arm can be regarded as an impressively powerful mechanism.This section concludes that contemporary robotic arm designs possess shortcomings, including the compromised safety of rigid robotic arms and instability in highly biomimetic variants. These issues are effectively resolved in the human arm, providing a blueprint for refining robotic arm design. Therefore, the forthcoming section will centre on replicating human arm characteristics to enhance robotic arm configuration.§ BIOMIMETIC DESIGN OF THE ELBOW-AND-FOREARM SYSTEM The preceding section delineated the intricate structure and properties of the human arm. 
This section will introduce a novel, highly biomimetic robotic arm design, informed by the comprehensive understanding of bones, ligaments, and other soft tissues detailed earlier. §.§ Design of the skeletal structure In the proposed design, the elbow and forearm comprise the humerus, ulna, and radius, as shown in Fig. <ref>(a). Each joint within the skeletal structure is characterized by a thin layer of cartilage coating the contact surface. Additionally, the ligament systems encompassing TFCC, IOM, LCL, MCL, and the annular ligament are replicated within the robotic elbow and forearm.The primary motion of the ulna is rotation around the humerus, which can be simplified to a hinge joint. The humeroulnar joint can achieve sufficient lateral and axial stability by relying on the MCL, LCL, and olecranon process. The radius can rotate relative to the humerus and rotate around the fixed ulna around the axis (shown in red in Fig. <ref>(b)) to achieve forearm rotation. Their unique geometry enables a wide range of motion in forearm rotation. The distal radial head, shown in red in Fig. <ref>(a), maintains an effective distance between the radius and ulna, preventing interference and maximizing the range of motion (Fig. <ref>(b)). This increased distance also enhances output torque during forearm rotation. The curved middle portion of the radius (Fig. <ref>(b)) and the downward curve of the ulna near the elbow joint (Fig. <ref>(c)) create space between them and the rotation axis (dashed red line), allowing the radius to rotate around the ulna without contact interference. This configuration significantly enhances the mobility of the radius. However, this increased mobility also causes instability in the radius across various directions. Consequently, in our design, we will incorporate essential soft tissues, drawing from anatomical features, to achieve stability for the forearm. §.§ Design of the soft tissues The design of soft tissues was optimised, congruent to human anatomical structures, facilitating their emulation through engineered materials. Fig. <ref> demonstrates the spatial distribution and the architectural design of these soft tissues.§.§.§ MCL To mimic the hinge function of a biological elbow in the robotic counterpart, the MCL complex is subdivided into three segments: anterior, middle, and posterior, as shown in Fig. <ref>(e). The anterior segment originates above the elbow rotation centre, the middle segment at the centre, and the posterior segment below it. This arrangement allows the middle segment to offer stability throughout the elbow rotation while the tension in the anterior and posterior segments increases significantly near full extension and flexion, limiting the maximum motion range. Video 1.4 in the supplementary material presents the MCL during elbow flexion/extension. By replicating the biological MCL complex, the robotic elbow attains joint stability and a range of motion comparable to that of a human elbow joint.§.§.§ Annular ligament The annular ligament is essential for stabilizing the PRUJ in the robotic forearm. As shown in Fig.<ref>(e) and (d), it comprises multiple fibres woven into a short circular tube, originating from the ulna, encircling the radial head, and reinserting into the ulna.§.§.§ LCL In the robotic elbow, the LCL comprises the RCL and LUCL (Fig. <ref>(c)). The RCL connects the lateral epicondyle to the annular ligament, while the LUCL links the lateral epicondyle to the ulna. 
Together with the MCL, they hinge the forearm to the humerus, contributing to the elbow joint's stability, as depicted in Fig. <ref>(d).§.§.§ TFCC The TFCC in the robotic forearm (Fig. <ref>(b), consisting of DRUL and PRUL, originates from the ulna and inserts into the radius. It stabilizes the DRUJ, while the annular ligament secures the PRUJ, enabling the radius to be hinged to the ulna.§.§.§ IOMFig.<ref>(a) illustrates the arrangement of the seven major portions of the IOM in the proposed design. The IOM can reduce friction in the humeroradial joint and between the annular ligament and radial head, decreasing resistance during forearm rotation. As shown in Fig.<ref>, without the IOM, the annular ligament and DRUL/PRUL restrict the radius's axial movement, generating friction when distracting forces are applied to the radius distal head. The radius proximal head is pressed against the humerus, resulting in significant friction in the humeroradial joint when compression forces are applied. The IOM can distribute these forces across its seven portions, reducing friction and enabling smoother forearm rotation.This section delves into the application of engineering materials, mirroring human arm constituents such as bones, ligaments, and cartilage, in the design of the robotic arm. Preliminary tests indicate that the arm can replicate the motion functions of the human counterpart and maintain joint stability. The ensuing section will decode the biological principles inherent in these designs.§ MODELING AND STABILITY ANALYSIS OF THE RADIUS-ULNA JOINTS In the previous section, the robotic forearm and elbow, inspired by the human skeletal ligament system, were introduced. In the design, the ulna is firmly hinged to the humerus due to the MCL and LCL, providing considerable stability. The radius has two degrees of freedom, enabling a wide range of motion, which makes it more susceptible to dislocation compared to the ulna. The key to stabilizing the radius as it rotates around the ulna is to hinge it on its rotation axis, connecting it to the stable ulna. As shown in Fig. <ref>(a), the forearm rotation axis (red dashed line) passes through the rotation centres of the humeroradial joint, PRUJ, and DRUJ. The stability of these joints is achieved through mechanisms formed by soft tissues and joint surfaces. Several mechanical features and principles, derived from studying the human arm, have been identified as potentially contributing to the high stability of the radius. These include the ball and socket structure of the humeroradial joint, TFCC stabilising the DRUJ, improving forearm stability through IOM, and variation in MCL strain during elbow movement. This section will theoretically analyze how the proposed design's mechanisms anchor the radius to the axis of forearm rotation and sustain stability.§.§ Ball and socket structure of the humeroradial jointThe humeroulnar joint and PRUJ work in conjunction to stabilize the proximal radius. The interplay of the annular ligament, radial head, capitulum, and RCL aids in maintaining the radial and axial position of the proximal radius. The PRUJ's rotation centre, situated on the forearm rotation axis, is constrained by the annular ligament and RCL (Fig. <ref>(a)), assisting in the prevention of lateral dislocation of the radius. The humeroradial joint, located between the radial head and capitulum, operates as a ball-and-socket joint, with its rotation centre also residing on the forearm rotation axis. As shown in Fig. 
<ref>(c) and (d), the RCL and annular ligament apply pressure to the socket (radial head), which in turn pushes it against the ball (capitulum), thus enhancing the lateral stability of the radius. The humeroradial joint can be simplified as in Fig. <ref>(a). Point A is the articulation point of the annular ligament and radius. Point O is the spherical centre of the capitulum. The LCL can be simplified to a spring with high stiffness, represented by OA. Point T is the contact endpoint of the radial head and capitulum. Since the radial head is not a complete socket, there is an initial angle between TO and the horizontal line, denoted as θ_s. The joint is more stable as θ_s increases, but the range of motion will be more limited, and vice versa.

When an external force F_e is applied to the distal end of the radius (only lateral forces are considered), as shown in Fig. <ref>(b), the humeroradial joint will start to dislocate. The joint contact point slides from T to T'. As the radial head is retained by the annular ligament, the radius and the annular ligament can be approximated as hinged at point A. The annular ligament is fixed to the ulna, and assuming that the ulna is fixed, the annular ligament can only move a small distance in the horizontal direction. During dislocation of the humeroradial joint, the position of TA will move to T'A'. The LCL will be stretched to OA'. The radius will deflect. The relationship between F_e and the elongation of the LCL, Δ l_s, will be calculated.

When F_e is applied to the radius, as shown in Fig. <ref>(b), the annular ligament will provide a support force F_a. There will be a support force F_s from the contact point T', and a tensile force from the LCL transmitted through the annular ligament, F_t. According to the force balance: F_s cosθ_s1=k Δ l_s, F_s sinθ_s1+F_e=F_a, F_s l_s1 sinθ_s1=F_e l_e. Where θ_s1 is the angle between T'O and the horizontal, l_s1 is the length of OA', k is the elasticity coefficient of the ligament, and l_e is the moment arm of the external force. Δ l_s can be calculated as: Δ l_s=l_s1-l_s0. Where l_s0 is the initial length of OA.

Combining equations <ref> and <ref>, the relation between F_e, Δ l_s and θ_s1 can be obtained as the function: F_e=f_1(Δ l_s, θ_s1). According to Fig. <ref>(b), in Δ T'OA' we have: α_1=π-γ-β, cosα_1=(l_s1^2+l_a^2-r^2)/(2 l_s1 l_a), r/sinα_1=l_a/sinθ_s1. Where α_1 is the angle between T'A' and the horizontal line (its initial value, for TA, is α_0), γ is the angle between TA and the radius axis AR, which is constant, β is the deflection angle of the radius, l_a is the length of T̅A̅ and T̅'̅A̅'̅, and r is the length of OT.

According to equations <ref> and <ref>, the relation between β and Δ l_s can be obtained as the function: β=f_2(Δ l_s). According to equation <ref>, the relation between θ_s1 and β can be obtained as the function: θ_s1=f_3(β). According to equations <ref>, <ref> and <ref>, the relation between F_e and Δ l_s can be obtained. The simulation results for F_e versus Δ l_s and for θ_s1 versus Δ l_s, with different θ_s, are shown in Fig. <ref>. The solid curves in the figure show that increasing the angle θ_s between the horizontal line and TO enhances the joint's capacity to withstand external forces F_e.

When the joint is dislocated by F_e, as shown in the solid curves in Fig. <ref>, the required force F_e first increases and then decreases after reaching its peak value as the LCL is stretched and Δ l_s increases. When Δ l_s<Δ l_p (Δ l_p denotes the value of Δ l_s when F_e reaches the peak value), F_e needs to be increased continuously to make the joint dislocation more severe.
At this stage, θ_s1>0 and, as shown by the dashed line in Fig. <ref>, the joint may recover automatically if the external force is withdrawn. When the LCL ligament stretches to Δ l_s>Δ l_p, even if F_e decreases or is removed, the joint dislocation will deteriorate, and the joint may continue to dislocate automatically until θ_s1=0. Thus, joint dislocation is deemed to occur when Δ l_s=Δ l_p; even though θ_s1>0 at this point, the joint is considered dislocated.

§.§ TFCC stabilizes the DRUJ

The TFCC structure (Fig. <ref>(b)) constrains the DRUJ's rotation centre and, in conjunction with the PRUJ, enables the radius to maintain initial stability. Notably, the rotation centre of the TFCC and the joint surface rotation centre (on the forearm rotation axis) are not aligned. To address this misalignment, the ECU and PCU tendons, which actuate the hand, prompt the DRUL or PRUL to encircle them as the radius rotates. Consequently, even though the TFCC's rotation centre does not align with the joint rotation centre, the TFCC can still sustain tension and restrict the joint rotation centre. The simplified diagram of this structure during forearm rotation is shown in Fig. <ref>. The DRUL is in contact with the ECU when the forearm rotates to θ_22=θ_ecu (θ_22 is the joint position). Point D and point P are the joint contact edge points. Point E represents the location of the ECU. O_t is the rotation centre of the TFCC. O_r is the rotation centre of the DRUJ joint contact surface.

Before the DRUL contacts the ECU (θ_22<θ_ecu), the relationship between the length changes of the DRUL, Δ_d, of the PRUL, Δ_p, and θ_22 can be calculated as: Δ_d=√((l_r^2+l_or^2-2l_r l_or cos(θ_d+θ_22)))-l_od, Δ_p=√((l_r^2+l_or^2-2l_r l_or cos(θ_p+θ_22)))-l_op. Where l_r is the length of O_rD and is constant, l_or is the length of O_t O_r, θ_d is the angle ∠ DO_rO_t, which increases to θ_d+θ_22 when the joint rotates, l_od is the length of O_tD when the forearm is in its initial position, i.e. the initial length of the DRUL, l_op is the length of O_tP, which is the initial length of the PRUL, and θ_p is the angle ∠ PO_rO_t, which increases to θ_p+θ_22 when the joint rotates. After the DRUL contacts the ECU (θ_22≥θ_ecu), Δ_d varies with θ_22 as: Δ_d=√((l_r^2+l_re^2-2l_r l_re cos(θ_22-θ_ecu)))+l_te-l_od. Where l_re is the length of O_rE and l_te is the length of O_tE (Fig. <ref>(c)).

In the design, the length of the DRUL is adjusted to ensure tension when it contacts the ECU. The relationship between the changes in the lengths of the DRUL and PRUL and the joint angle (θ_22) is illustrated in Fig. <ref>. It can be observed that when θ_22<θ_ecu, the DRUL (blue) is almost not stretched. When it comes into contact with the ECU, it rapidly stretches, effectively limiting the maximum position of θ_22. During forearm rotation, the PRUL (red) is initially relaxed and then stretched, with the total amount of relaxation and stretch not exceeding 2 mm. Thus, the TFCC structure alone is not able to fully stabilise the DRUJ, and other soft tissues, such as the IOM, are required as further measures to enhance stability.

§.§ Improving forearm stability through IOM

While the annular ligament, LCL, and TFCC structures offer initial stability to the radius, the TFCC does not maintain constant tension during forearm rotation, suggesting limited stability in the DRUJ and PRUJ. Besides these structures, the IOM significantly contributes to forearm stabilization by serving as a hinge between the radius and ulna. The membrane features distinct bundles with diverse orientations, enhancing axial and lateral stability.
Since the membrane bundles' insertion points on the ulna and radius reside on the forearm rotation axis (Fig. <ref>(b)), the membrane does not generate resistance during forearm rotation. This enables a broad range of motion without sacrificing stability.

Fig. <ref>(a) depicts the forearm with intact MCL, LCL, TFCC, IOM, and the annular ligament. When a lateral force is exerted on the distal end of the radius, the IOM aids in counteracting the external force and transfers it to the LCL and MCL. This subsection will explore the mechanism by which the IOM assists in resisting lateral external forces.

Under external lateral forces, the IOM bundles in the same inclined direction transfer force through a similar mechanism. To examine the stability offered by the IOM from various directions, two IOM bundles in distinct orientations were chosen for analysis, specifically ligament 5 and ligament 7, as displayed in Fig. <ref>(a). The derivation process can be directly applied to other IOM ligaments.

With only ligament 5 and ligament 7 retained, the forearm can be simplified to the configuration depicted in Fig. <ref>(b). The forearm rotation axis (red line) passes through the insertion points of ligament 5 and ligament 7 on the ulna, as illustrated in Fig. <ref>(a). The simplified representation in Fig. <ref>(b) displays the characteristics and parameters defining the ligaments' positions, while the remaining structures are simplified. The strain in ligaments 5 and 7 stays constant during forearm rotation, ensuring the simplified diagram accurately represents the geometric relations even as the forearm rotates. This allows the IOM to stabilize the forearm by transmitting lateral forces.

As illustrated in Fig. <ref>(b), quadrilateral ABCD can be simplified into a planar configuration with hinges A, B, C, and D free to rotate. The radius and interosseous ligaments 5 and 7 are permitted to rotate around the axis AC. Segment CD represents the TFCC structure with a constant length. Consequently, ABCD can be regarded as an unstable quadrilateral with fixed side lengths. When an external force is applied to the distal radius, the ulna undergoes rotational movement as the radius rotates. First, the angular relationship between the ulna's (BC) rotation and the radius's (AD) rotation in the plane will be calculated.

In Δ ABD, according to the cosine and sine laws, we have: l_2^2=l_1^2+l_3^2-2l_1 l_3 cosθ_d, l_2/sinθ_d=l_3/sinθ_h. Where l_1, l_2, and l_3 represent the lengths of segments AB, BD, and AD, respectively. Both l_1 and l_3 remain constant. θ_d represents ∠ BAD, which is variable. θ_h represents ∠ ABD. So, l_2 and θ_h can be obtained. In Δ BCD, according to the cosine law: l_5^2=l_2^2+l_4^2-2l_2 l_4 cosθ_c. Where l_4 and l_5 represent the lengths of segments BC and CD, both of which are constant, and θ_c represents ∠ CBD. θ_c can thus be obtained, and θ_e (∠ ABC) can be calculated as: θ_e=θ_h+θ_c. Combining equations <ref> to <ref>, the relation between θ_e and θ_d can be obtained, i.e., the consequent rotation of the ulna when the radius is rotated. It can be denoted as the function: θ_e=f_RU(θ_d).

In Fig. <ref>(b), ligament 7 is denoted by FG, while ligament 5 is represented by MN. The insertion points of ligaments 7 and 5 on the ulna are labeled as F and M, respectively, and their respective insertion points on the radius are designated as G and N. Both Δ BFC and Δ BMC remain undeformed during ulna deflection, and the angles ∠ CBF and ∠ CBM are constant.
Similarly, Δ ADG and Δ ADN do not deform as the radius deflects, maintaining constant angles ∠ DAG and ∠ DAN. The angles θ_a (corresponding to ∠ BAG for ligament 7 or ∠ BAN for ligament 5; in Fig. <ref>(b), it represents ∠ BAG) and θ_b (referring to ∠ ABF for ligament 7 or ∠ ABM for ligament 5; in Fig. <ref>(b), it represents ∠ ABF) can be calculated as follows: θ_a=θ_d+∠ DAG(N), θ_b=θ_e-∠ CBF(M). Combined with equation <ref>, the relation between θ_a and θ_b can be obtained as the function: θ_b=f_ab(θ_a).

When a lateral external force is applied to the distal end of the radius in a leftward direction (typically originating from the hand), the radius deflects clockwise around point A, as illustrated in Fig. <ref>(c). Due to the TFCC structure (CD), the ulna also experiences deflection around point B. However, the MCL becomes tensioned and resists the ulna's deflection. Given the high strength and stiffness of the MCL, it mitigates the ulna's deflection, thereby maintaining the forearm's stability.

During the clockwise deflection of the radius and ulna, the quadrilateral ABCD undergoes deformation. As illustrated in Fig. <ref>(c), the quadrilateral ABFG also experiences deformation, transforming from the dashed line to the solid line ABF'G'. Based on equation <ref>, the relationship between θ_a and θ_b is established. Consequently, ligament FG is stretched to F'G'. In Δ AF'B, according to the cosine and sine laws, we have: l_6^2=l_1^2+l_7^2-2l_1 l_7 cosθ_b, l_6/sinθ_b=l_7/sinθ_f. Where l_6 represents the length of AF', while l_7 denotes the constant length of BF', and θ_f corresponds to the angle ∠ BAF'. l_6 and θ_f can be obtained. In the right triangle Δ AE'F', we have: θ_g=θ_a-θ_f, l_9=l_6 sinθ_g, l_10=l_6 cosθ_g. Where l_9 and l_10 correspond to the lengths of E'F' and AE', respectively, and θ_g represents the angle ∠ F'AE'. In the right-angled triangle Δ E'F'G', we have: l_8=l_g-l_10, l_f=√(l_9^2+l_8^2). Where l_8 represents the length of E'G', while l_f corresponds to the length of F'G', and l_g denotes the length of AG'. Combining equations <ref> to <ref>, the relationship between l_f and θ_a can be derived, denoted as l_f = f_fg(θ_a). This expression represents the connection between the length of ligament 7 and the deflection angle of the radius.

As depicted in Fig. <ref>(d), when a lateral external force is applied to the distal end of the radius in the rightward direction, the LCL ligament is tensioned and counters the counterclockwise deflection of the radius. This deflection also causes the ulna to deflect via the PRUJ. Quadrilateral ABCD is deformed, transforming quadrilateral ABMN from the dashed line state to the solid line state, represented as ABM'N'. According to equation <ref>, the relationship between θ_a and θ_b is established, and ligament MN is stretched to M'N', increasing its strain. This strain resists deflection of the radius and ulna, contributing to the overall stability. Using a similar methodology, the relationship between l_m (length of M'N') and θ_a can be derived as the function l_m = f_mg(θ_a).

For other ligaments in the IOM, the position parameters (listed in Table <ref>) can be substituted into the above calculation. These parameters include the position of the insertion points on the radius l_g (AG or AN), ∠ DAG (or ∠ DAN), the position of the insertion points on the ulna (BF or BM), ∠ CBF (or ∠ CBM), and the initial length of the ligaments l_f or l_m (FG or MN). The relationship between strain (the length change of FG or MN) and θ_a for each ligament can be obtained, as illustrated in Fig. <ref>.
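To make the geometric chain above easier to follow, the short numerical sketch below evaluates f_RU and f_fg for a single bundle. It is only an illustration: the segment lengths, the angles ∠ DAG and ∠ CBF, and l_g used here are placeholder values rather than the parameters of Table <ref>, and the intermediate angles are extracted with the cosine law, which is equivalent to the sine-law steps in the text but avoids the arcsin ambiguity for obtuse angles.

import numpy as np

def f_RU(theta_d, l1, l3, l4, l5):
    # Consequent ulna angle theta_e (= angle ABC) for a given radius angle theta_d (= angle BAD).
    l2 = np.sqrt(l1**2 + l3**2 - 2*l1*l3*np.cos(theta_d))     # BD, cosine law in triangle ABD
    theta_h = np.arccos((l1**2 + l2**2 - l3**2) / (2*l1*l2))  # angle ABD
    theta_c = np.arccos((l2**2 + l4**2 - l5**2) / (2*l2*l4))  # angle CBD, cosine law in triangle BCD
    return theta_h + theta_c

def f_fg(theta_a, l1, l3, l4, l5, ang_DAG, ang_CBF, l7, l_g):
    # Length of one IOM bundle FG for a given radius angle theta_a (= angle BAG).
    theta_d = theta_a - ang_DAG                               # theta_a = theta_d + angle DAG
    theta_b = f_RU(theta_d, l1, l3, l4, l5) - ang_CBF         # theta_b = theta_e - angle CBF
    l6 = np.sqrt(l1**2 + l7**2 - 2*l1*l7*np.cos(theta_b))     # AF', cosine law in triangle ABF'
    theta_f = np.arccos((l1**2 + l6**2 - l7**2) / (2*l1*l6))  # angle BAF'
    theta_g = theta_a - theta_f                               # angle F'AE'
    l9, l10 = l6*np.sin(theta_g), l6*np.cos(theta_g)          # E'F' and AE'
    return np.sqrt(l9**2 + (l_g - l10)**2)                    # F'G' = sqrt(l9^2 + l8^2)

# Placeholder geometry (lengths in mm, angles converted from degrees), not the values of Table <ref>.
l1, l3, l4, l5 = 30.0, 250.0, 260.0, 40.0
ang_DAG, ang_CBF, l7, l_g = np.deg2rad(3.0), np.deg2rad(4.0), 120.0, 100.0
theta_a0 = np.deg2rad(88.0)                                   # nominal radius orientation
for defl_deg in (0.0, 1.0, 2.0, 3.0):                         # sweep of the radius deflection
    length = f_fg(theta_a0 + np.deg2rad(defl_deg), l1, l3, l4, l5, ang_DAG, ang_CBF, l7, l_g)
    print(f"deflection {defl_deg:3.1f} deg -> bundle length {length:6.1f} mm")

Depending on a bundle's orientation, its length increases for one deflection direction and decreases for the other, which is the behaviour reflected in Fig. <ref>; the quantitative curves require the actual insertion parameters of Table <ref>.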
It is evident that when the forearm undergoes lateral deflection, the strain on the IOM ligaments increases, providing resistance to the lateral deflection of the forearm.

The experimental setup, depicted in Fig. <ref>, was employed to validate the simulation results. The positioning of the IOM bundle insertion points on the radius and ulna corresponded with the simulation data. The humerus was kept stationary, while the radius and ulna were hinged to the humerus at points A and B, thus enabling rotation within the delineated plane under the influence of lateral forces. This experimental arrangement mirrored the schematic outlined in Fig. <ref>(b), with the TFCC hinge-connected to the radius and ulna at points D and C. The lateral force can be recorded by the force sensor (DYHW-108, measuring range: 10 kg, accuracy: 0.3%). The experimental procedures are captured in Videos 1.1 and 1.2 in the supplementary videos.

In Figure <ref>, triangles denote experimental results, with bracketed force values indicating the magnitude of the test force causing forearm deflection. Measurements of IOM length variance and forearm deflection angle were taken using ImageJ software, and these results generally align with the simulation results.

§.§ Variation in MCL strain during elbow movement

As the elbow is flexed or extended, approaching its range of motion limits, the strain in the MCL increases. This strain generates a pulling force that presses the ulna against the trochlea of the humerus (indicated by the red arrow in Fig. <ref>(d)). Through the annular ligament, IOM, and TFCC structures, the ulna exerts a pulling force on the radius (depicted by the blue arrow in Fig. <ref>(d)), enhancing the stability of the ball-and-socket joint by drawing the radius towards the capitulum of the humerus.

Figure <ref> displays the length changes of the three components of the MCL as the elbow joint angle fluctuates. The anterior part and posterior part are denoted by lines OA and OP, respectively. Composed of high-strength fibres, the ligaments exhibit spring-like characteristics. At the initial position of the elbow joint, with θ_21=90°, all three components maintain their original lengths (Fig. <ref>(b)). The origin of the anterior part lies above the elbow rotation centre, at an eccentricity distance of l_oa (OO_a). As the elbow extends (i.e., θ_21<90°), the anterior part lengthens from l_a0 to l_a1, transitioning from Fig. <ref>(b) to (a). The middle component's origin is situated at the rotation centre, maintaining a constant length. The posterior part's origin is positioned below the rotation centre, with an eccentricity distance of l_op (OO_p). As the elbow flexes (i.e., θ_21>90°), the posterior part stretches from l_p0 to l_p1, as shown in the transition from Fig. <ref>(b) to (c).

According to the cosine law: l_a1^2=l_oa^2+r^2-2 l_oa r cosθ_a1, l_p1^2=l_op^2+r^2-2 l_op r cosθ_p1. Where θ_a1 denotes ∠ O_aOA' (Fig. <ref>(a)), θ_a1=θ_a0+π/2-θ_21, and θ_a0 represents ∠ O_aOA (Fig. <ref>(b)). r refers to the length of OP. θ_p1 is described as ∠ O_pOP' (Fig. <ref>(c)), with θ_p1=θ_p0-π/2+θ_21 and θ_p0 denoting ∠ O_pOP (Fig. <ref>(b)). The strains in the anterior part, ε_a, and the posterior part, ε_p, can be calculated as: ε_a=(l_a1-l_a0)/l_a0, ε_p=(l_p1-l_p0)/l_p0. The variations in strain within the anterior and posterior portions as the elbow joint angle changes are shown in Fig. <ref>.
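As a quick numerical cross-check of the MCL relations above, the sketch below evaluates the anterior- and posterior-bundle strains over the elbow range. The eccentricity distances l_oa and l_op, the attachment radius r, and the initial angles θ_a0 and θ_p0 are illustrative placeholders rather than measured prototype values, and each bundle's strain is referenced to its length at the neutral position θ_21 = 90°.

import numpy as np

def mcl_strains(theta21_deg, l_oa, l_op, r, theta_a0, theta_p0):
    # Strains of the anterior and posterior MCL bundles at elbow angle theta_21 (degrees).
    t21 = np.deg2rad(theta21_deg)
    theta_a1 = theta_a0 + np.pi/2 - t21                          # angle O_a O A'
    theta_p1 = theta_p0 - np.pi/2 + t21                          # angle O_p O P'
    l_a1 = np.sqrt(l_oa**2 + r**2 - 2*l_oa*r*np.cos(theta_a1))   # cosine law, anterior bundle
    l_p1 = np.sqrt(l_op**2 + r**2 - 2*l_op*r*np.cos(theta_p1))   # cosine law, posterior bundle
    l_a0 = np.sqrt(l_oa**2 + r**2 - 2*l_oa*r*np.cos(theta_a0))   # reference lengths at theta_21 = 90 deg
    l_p0 = np.sqrt(l_op**2 + r**2 - 2*l_op*r*np.cos(theta_p0))
    return (l_a1 - l_a0) / l_a0, (l_p1 - l_p0) / l_p0

# Placeholder parameters: eccentricities and attachment radius in mm, initial angles in radians.
l_oa, l_op, r = 6.0, 6.0, 25.0
theta_a0, theta_p0 = np.deg2rad(60.0), np.deg2rad(60.0)
for th in (0, 30, 60, 90, 120, 145):
    eps_a, eps_p = mcl_strains(th, l_oa, l_op, r, theta_a0, theta_p0)
    print(f"theta_21 = {th:3d} deg: anterior strain {eps_a:+.3f}, posterior strain {eps_p:+.3f}")

The printout shows the expected asymmetry: the anterior bundle is stretched as the elbow approaches full extension and the posterior bundle as it approaches full flexion, while both remain near their reference lengths around θ_21 = 90°, which is the behaviour plotted in Fig. <ref>.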
As the strain in either the anterior or posterior part increases, a force is exerted on the ulna, resulting in the compression of the radius against the capitulum, further stabilizing the ball and socket joint between the radius and the humerus.This section conducts a theoretical analysis of the mechanical intelligence discerned from the human arm and applies these principles to the proposed robotic arm design. The efficacy of these ingenious designs in enhancing arm performance remains to be determined. Consequently, the subsequent section will engage in constructing a physical prototype to validate the potential advantages of these designs.§ SKELETON-LIGAMENT PROTOTYPE AND MUSCULAR-SKELETON ACTUATION SYSTEM In this section, a physical prototype is constructed based on the novel biomimetic robotic arm design delineated in the previous section. The primary focus is on replicating the soft tissues of the human forearm. Additionally, this subsection presents the actuation system and provides a computation of the output performance of the proposed robotic arm. §.§ Ligaments and adjustment mechanisms The ligament system is crucial for joint stabilization and restricting the range of motion. In the development of robotic forearms and elbows, ligaments exhibit a variety of shapes, sizes, and functions. For example, the annular ligament encircles the proximal head of the radius with a specific width. To increase strength and accommodate diverse shapes and sizes, ligaments are fabricated by intertwining multiple fibres, emulating the musculoskeletal system. Fig. <ref>(a) demonstrates an example of a braided structure created by interweaving seven fibres into a 2D configuration. To ensure the effective restrictive function of the ligaments, their lengths must be appropriately adjusted. For the MCL and LCL, five length adjustment mechanisms are employed, as illustrated in Fig. <ref>(a) and labeled as 1-5 in blue. The ligament bundles pass through the humerus' internal tubing and connect to the connectors in the adjustment mechanism, as depicted in Fig. <ref>(b). Rotating the micro wheel to manipulate the internal nut allows the adjustment screws to move axially within the slots, tightening or loosening the attached ligaments. This process enables the MCL and LCL to be adjusted to the optimal length for efficient functionality.For the TFCC, mechanisms labeled as 6-7 (illustrated in blue) are deployed on the radius, as depicted in Fig.<ref>(a), with mechanism 7 situated at the rear of the radius. The underlying principle is congruent with that presented in Fig.<ref>(b). Modulating the length of the TFCC facilitates a 'soft-feel-end', a condition where resistance escalates markedly as motion nears its limiting angle, contrasting with an abrupt halt due to rigid structures, during the rotational extremities of the forearm. Specifically, the Distal RadioUlnar Ligament (DRUL) is modulated to maintain tension upon interfacing with the Extensor Carpi Ulnaris (ECU).The position and length of each of the seven parts of the IOM can be adjusted using the embedded adjustment mechanism in the skeleton. Each adjustment mechanism's location is marked in orange box as 1-7 in Fig.<ref>(a). For instance, the No.2 adjustment mechanism is situated in the radius, as shown in Fig.<ref>(c). Adjusting the position of the nuts allows the length of the central band ligament (CB) to be modified in the axial direction up to 20mm. 
Adjusting the position and length of each portion is crucial to ensure that their inserted points are on the forearm rotation axis. §.§ Skeleton-ligament prototype and intelligent mechanisms The human musculoskeletal system serves as an ideal model for designing a robotic arm. To facilitate the design process, a 3D scanned model of human skeletons was optimized and utilized. The skeletons are printed with aluminium using SLM 3D printing technology, due to the low density and high strength of the aluminium. The articular cartilage, a thin and dense connective tissue covering joint surfaces, plays a crucial role in guaranteeing smooth joint contact and minimizing friction and wear during joint movements. To mimic the properties of articular cartilage, Formlabs durable resin is applied due to its durability, smoothness, and flexibility. As shown in Fig. <ref>(a), the articular cartilage is mechanically installed and glued onto the skeletons between each joint. To ensure adequate strength, the cartilage's average thickness is set at 1.5 mm while preserving the skeleton's original surface characteristics. Following the installation of the ligaments and the adjustment of their lengths to optimal positions, a prototype of the robotic elbow and forearm is developed, which emulates the human skeletal ligament system, as illustrated in Fig.<ref>(a).The intelligent mechanisms discussed earlier have been incorporated into the prototype. As shown in Fig.<ref>(b), the TFCC structure starts to bend upon DRUL making contact with the ECU, leading to a sharp increase in strain, which restricts the forearm's rotational range while maintaining tension. Fig.<ref>(c) showcases the LCL and annular ligaments of the human elbow joint, along with their replicated counterparts on the elbow prototype. Their synergistic action securely connects the radius head to the humerus and ulna, while the ball-and-socket structure between the radial head and capitulum considerably improves the joint's dislocation resistance. Fig.<ref>(d) shows the MCL ligament of the elbow prototype, separated into three segments. As the elbow joint rotates bidirectionally, the strain on the MCL intensifies, enhancing the stability of the humeroulnar joint and subsequently pressing the radial head into the capitulum to further stabilize the humeroradial joint. Fig.<ref>(e) demonstrates the IOM replication on the prototype, which comprises seven sections. The IOM helps to stabilize the radius when axial or lateral forces are applied to the distal forearm and stabilizes the forearm against radioulnar bowing or splaying by drawing the ulna and radius toward the interosseous space. The external force is distributed between DRUJ and PRUJ. A robotic hand is attached to the robotic forearm using 5 ligaments as shown in Fig.<ref>(e). §.§ Muscular-skeleton actuation system To replicate the human elbow and forearm, the robot prototype is equipped with the biceps, brachioradialis, triceps, supinator, pronator teres,and hand and wrist muscles, as shown in Fig. <ref>. For subsequent experiments, a simplified robotic hand is affixed to the robotic forearm using five ligaments, mirroring the skeletal system of the human hand. Each finger and thumb of the robotic hand is actuated by a pair of antagonistic muscles (flexor and extensor). Consistent with the human hand, all hand muscles are attached to the forearm.§.§.§ Elbow flexion/extension In this biomimetic arm prototype, both the brachialis and biceps contribute to elbow flexion. 
Drawing from our previous work <cit.>, a magnet-integrated soft actuator (MISA) with non-linear stiffness serves as the brachialis, originating from the humerus and connecting to the ulna. An external spring soft actuator with constant stiffness functions as the biceps, originating from the humerus and connecting to the radius. Another MISA operates as the triceps, originating from the humerus and connecting to the ulna, aiding in elbow extension. The actuation system configuration is shown in Fig. <ref>(a). As discussed in <cit.>, by utilizing two MISAs in an antagonistic configuration, the joint can achieve variable stiffness, effectively emulating the state of human joints as muscles tense and relax.

In daily life, elbow flexion often requires the ability to output substantial joint torque for performing everyday tasks. As illustrated in Fig. <ref>(c), when the flexor maintains the maximum output force F_t1 and the extensor only maintains tension (no force output), the joint torque τ_21f (flexion) varies with the joint angle θ_21=π/2-θ_m (θ_21 denotes the joint position; θ_m is illustrated in Fig. <ref>(c)) due to the moment arm's variation. There are three stages, as shown in Fig. <ref>(c), during which τ_21f can be calculated as: τ_21f=F_t1(MR+Nr+PL). At stage 1 (θ_m>π/2-γ), M=1, N=0, P=0; at stage 2 (θ_m=π/2-γ), M=0, N=cosγ, P=sinγ; at stage 3 (θ_m<π/2-γ), M=0, N=sinθ_21, P=cosθ_m. Where γ=arcsin(R/√(L^2+l^2))-arctan(l/L).

Given the application of synthetic muscles (brachialis and biceps), the output force of the muscles is taken as F_t1 = 250 N. Figure <ref> illustrates the simulation results for the joint torque of elbow flexion as the joint angle varies while the actuator output force remains at its maximum. The red curve shows the joint torque driven by the brachialis alone, the green curve represents the biceps alone, and the blue curve represents both actuators working simultaneously. The results indicate that the output joint torque decreases as the elbow joint approaches full extension and full flexion, which is consistent with human joint behaviour. The results show that the peak torque for elbow flexion exceeds 24 Nm. Elbow extension is actuated by the triceps (medial head, with an output force of 250 N), and the joint torque τ_21e is constant (11.25 Nm) as the joint angle θ_21 changes.

§.§.§ Forearm pronation

In the proposed design, forearm pronation is actuated by the pronator teres. As shown in Fig. <ref>(a), the motor for the pronator teres is located on the side of the humerus, with a pulley fitted inside the humerus to minimize friction. The rotation axis of the pulley coincides with the axis of the humeroulnar joint. The red tendon passes through the pulley, extends across the ulna and radius, and is ultimately fixed to the lateral side of the radius. Notably, elbow flexion or extension does not affect the length of the pronator teres tendon.

When the pronator teres drives forearm pronation, the cross-sectional view of the structure in the plane perpendicular to the forearm rotation axis (Fig. <ref>(a)) can be simplified to Fig. <ref>(c). The red line represents the projection of the tendon, and the angle between the tendon and the sectional plane is denoted as θ_p (Fig. <ref>(a)). The tendon contacts the radius at point T and exerts a force that rotates the radius around the forearm rotation centre, marked in red as O_f in Fig. <ref>(c). The cross-sectional view of the radius can be approximated as a circle with centre O_r and radius r, which passes through O_f.
As the radius rotates, θ_M=θ_M0-θ_22 decreases, where θ_M represents ∠ MO_f O_r, θ_M0 is the initial value of θ_M, and θ_22 is the radius rotation angle. The output torque τ_22p during forearm pronation can be calculated as follows: τ_22p=F_t2(l_1+l_2). Where l_1=r cos(θ_M0-θ_22), l_2=r, and F_t2 is the tendon force. The relationship between the rated torque τ_22p and the forearm rotation angle θ_22 is shown in red in Fig. <ref>(a) when the maximum output force of the motor is kept constant (F_t2 = 734 N). It is noticeable that τ_22p attains its highest value, 14 Nm, when θ_22 approaches 100°.

§.§.§ Forearm supination

Forearm supination is driven by both the supinator (marked in blue in Fig. <ref>(b)) and the biceps (marked in green in Fig. <ref>(a) and (b)). The motor of the supinator is installed inside the ulna to reduce the size of the actuator. The tendon of the supinator passes over the outer side of the radius and is fixed to the inner side of the radius, while the tendon of the biceps wraps around the radial head and attaches to the proximal end of the radius. It is worth noting that the insertion points of the supinator and the pronator teres on the radius are located on the same intercept plane perpendicular to the forearm rotation axis. This ensures that the supinator and pronator teres can be balanced during forearm rotation, providing a stable and smooth movement.

When the supinator drives forearm supination, the section view of the structure can be simplified as in Fig. <ref>(c). The blue line is the projection of the tendon on the sectional plane. The tendon contacts the radius at point S, pulling the radius to rotate around the forearm rotation centre O_f. θ_N=θ_N0+θ_22 increases as the radius rotates, where θ_N0 is the initial value of θ_N. The torque when the supinator drives forearm supination is: τ_22s1=F_t3(l_3+l_4). Where l_3=r cos(θ_N0+θ_22), l_4=r, and F_t3 is the tendon force from the motor. Presuming a force value of F_t3 = 122 N, the correlation between τ_22s1 and θ_22 is depicted in Fig. <ref>(a), denoted by the blue marking.

The insertion point of the biceps muscle is located on the radial tuberosity. To illustrate the relationship between the biceps and the radius, a section view of the radial tuberosity on a plane perpendicular to the forearm rotation axis is shown in Fig. <ref>(b). This view depicts the radial tuberosity as a circle with a radius r_t. The torque produced by the biceps to drive forearm supination can be calculated as follows: τ_22s2=F_t4 r_t sin(θ_21). Where F_t4 is the tendon force of the biceps and θ_21 is the angle between the biceps and the radius, as shown in Fig. <ref>(d). The plot in Fig. <ref>(b) shows the relationship between τ_22s2 and θ_21 (F_t4 = 250 N). As θ_21 approaches 0°, the value of τ_22s2 converges to 0, indicating that forearm supination can only be driven by the supinator at this point. When θ_21=90°, τ_22s2 reaches its maximum value. For θ_21=90°, the total supination torque, τ_22s=τ_22s1+τ_22s2, is plotted against θ_22 in Fig. <ref>(a), marked in green. τ_22s shows a small fluctuation during forearm rotation, with a maximum value close to 7.8 Nm.

§ VALIDATION

This section investigates the Soft-Feel-End mechanism provided by the robotic arm's soft tissues and assesses the individual contributions of these soft tissues to joint stability. Ultimately, a demonstration of the motion performance of the proposed robotic arm is presented.
§.§ Validation of Soft-Feel-End mechanism in regulating forearm positioning In the foregoing analysis, it was determined that as the forearm rotation approaches its limited position, the resistance increases rapidly, functioning as a position-limiting mechanism. In this experiment, the resistance torque during the forearm rotation of the proposed skeletal model will be measured by passively driving the forearm rotation at various elbow joint angles.The experimental setup is shown in Fig. <ref>, where the humerus is secured to the base plate, and the ulna and radius remain unconstrained. The forearm prototype is positioned vertically. The initial forearm rotational position is set to θ_22=0, corresponding to full supination. The elbow joint position is set to θ_21=0, indicating full extension. The torque sensor (Brand: Dayang sensor, Model: DYJN-104, Capacity: 0-10 Nm, Rated Output: 2.0 mV/V) is attached to the radius using screws, and its rotation axis aligns with the forearm rotation axis. The gyroscope (Brand: Wit-motion, Model: WT901BLECL, Chip: MPU9250) records the rotation of the radius, while the torque sensor records the external torque required for rotating the radius.The experiment steps are:Step 1: Set the elbow joint to the initial position and maintain it.Step 2: Rotate the torque sensor manually so that the radius rotates from fully supinated to fully pronated. Record the radius position and the external torque applied.Step 3: Change θ_22 and repeat the experiment.The experimental results are presented in Fig. <ref>. It can be observed that, for any θ_21, as θ_22 approaches the limited position, the resistance torque increases, and the soft restriction is achieved. When θ_21 exceeds 100, indicating elbow joint flexion, the resistance torque displays a noticeable increase within the range of -30<θ_22<60. This might be attributed to the tensioning of the posterior part of MCL when θ_21>90, causing the radius to be pressed into the humeroradial joint and increasing frictional resistance, as discussed in <ref>.§.§ Validating the contribution of IOM, TFCC, annular ligament to forearm stability In order to evaluate the individual contribution of the different parts of the IOM ligaments to the stability of the forearm, an experiment was conducted to measure the deflection angle of the forearm when subjected to lateral external forces with partial IOM ligaments disabled. The experimental setup is shown in Fig. <ref>. The forearm rotation angle is set to θ_22=0, i.e., fully supinated, and remains in that position. The rotation of the radius was recorded by the gyroscope. The experiment steps are:Step 1: Keep all bundles of the IOM intact.Step 2: Apply a lateral force F_radius at the distal end of the radius, ensuring that the point of application and the maximum force is the same for each test. Record the external force and the deflection of the radius during the test.Step 3: Apply a lateral force F_ulna at the distal end of the ulna, keeping the distance between the application point of F_ulna and F_radius from the elbow rotation axis the same. The point of application and the maximum of F_radius are kept the same for each experiment. Record the external force and the deflection of the radius during the test.Step 4: Disable specific bundles of the IOM ligaments and repeat the experiment.The experimental results presented in Fig. 
<ref> demonstrate that intact IOM ligaments contribute to enhanced stability in the forearm, as evidenced by the smallest deflection angle measured when all ligaments are intact. However, when certain ligament groups are disabled, such as POC, DOAC, and DOB, the ability of the forearm to resist lateral forces is weakened, resulting in larger deflection angles when the lateral force is applied to the left. Disabling the AB and CB ligaments leads to an increase in the counterclockwise deflection angle. Upon partial absence of the IOM, the angular deflection of the forearm experienced a notable increase when subjected to a leftward lateral force (F_ulna = 12 N) as compared to the intact IOM scenario: 15.48% (CB), 24.32% (AB1), 30.1% (AB2), 63.69% (AB1, CB, AB2), and 72.19% (IOM). Similarly, when the forearm was exposed to a rightward lateral force (F_radius = 12 N), the angular deflection experienced a marked increase: 24% (DOB), 42.55% (DOAC), 45.19% (POC), 92.55% (POC, DOAC, DOB), and 95.65% (IOM). These results suggest that each IOM ligament group plays a significant role in stabilizing the forearm under lateral external forces.

Importantly, the experimental validation of the IOM ligament's function was performed using a forearm prototype printed with polylactic acid (PLA). Given the discrepancy in material strength, it is anticipated that the deflection angle observed in the experimental results would considerably exceed that of an equivalent forearm skeleton prototype fabricated from aluminium.

The experimental validation of the contribution of the TFCC and annular ligament (when the IOM is intact) to the lateral stability of the forearm was also carried out using the test rig shown in Fig. <ref>. The results are presented in Fig. <ref>. Without an annular ligament, applying a test force to the distal forearm and initiating a clockwise rotation causes the ulna and the radius to separate, leading to a complete disintegration of the forearm. Conversely, applying the opposite test force and executing a counterclockwise rotation results in a 153.1% increase in the deflection angle of the radius, compared to instances where the annular ligament is intact. The findings suggest that when both the TFCC and annular ligament are intact, the deflection of the radius is minimal, and the forearm is stable. When the annular ligament is disabled, the deflection is the largest, indicating significant instability of the forearm.

§.§ Validation of performance

The dimensions of the forearm prototype approximate those of a human forearm, with a length of 26 cm and a circumference of 20 cm. Initially, the range of motion of the robotic forearm and elbow is assessed. Fig. <ref> demonstrates the range of motion for elbow flexion/extension and forearm rotation. As listed in Table <ref>, the elbow flexion/extension exhibits a range of motion from 0 to 145 degrees, while the forearm rotation spans from -60 to 51 degrees. The motion tests are presented in Videos 2.1-2.4 in the supplementary materials. The compactness advantages of the robotic arm, demonstrated through the manipulation of objects within limited space, are highlighted in Videos 4.1-4.4.

In order to demonstrate the load capabilities of the biomimetic robotic arm, a non-destructive experiment was conducted. The test involved lifting various weights using the fully assembled arm prototype. As depicted in Fig. <ref>, the robotic arm successfully lifted three different weights, specifically 2 kg, 3 kg, and 5 kg dumbbells.
This result showcases the efficient synergy among the MCL, LCL, IOM, annular ligament, and TFCC in maintaining the structural integrity of the elbow and forearm. Notably, no dislocations were observed during the lifting process, and the radius remained stably positioned.Further testing to ascertain the performance of the biomimetic robotic arm involved a flexion exercise using a 2 kg dumbbell. The elbow was flexed from an extended position to 120^∘ (Fig. <ref>(b)). The Maxon motor was equipped with a safety mechanism in the officially provided companion program designed to curtail any sudden accelerations or excessive speeds. Both the position (θ_3) and angular velocity (ω) of the forearm were documented within the bounds of the maximum permissible speed and acceleration (Fig. <ref>(b)). The results indicate that the arm achieved full flexion and lifted the 2 kg weight within 0.67 s, achieving a maximum frequency of reciprocal joint movements—defined as the number of complete flexions and extensions accomplished in 1 second—of over 0.74 Hz, excluding intervals of full flexion. When the angle (θ_3) was set at 50and the angular velocity (ω) at 3 rad/s, according to calculation, the joint torque peaked over 12 Nm, inclusive of gravity resistance. Concurrently, the peak power was recorded at 36 W (calculated from speed and torque).High-speed performance for a robotic arm's end-effector is an ongoing challenge within the field. A notable example of a high-speed manipulator is Barrett Technology's WAM Arm<cit.>, with a reported weight of 27 kg and a maximum end-effector speed of 3 m/s. In order to assess high-speed output performance, a table tennis-playing scenario was utilized for the robotic arm under discussion. This scenario involved simultaneous flexing of the elbow and shoulder joints for striking the ping pong ball, followed by a return to the initial position, as illustrated in Fig. <ref>. The trial was performed under minimal load on the compliant actuators, with the ping pong paddle weighing 238 g, thus allowing the actuators to operate at their peak speed of 110 mm/s. According to calculation, the end-effector attained an instantaneous speed of 3.2 m/s, with a duration of 188 ms from the onset of arm flexion to the moment of impact with the ping pong ball.§ DISCUSSION The proposed design, which replicates human biological structures, including bones, ligaments, tendons, and soft actuators with biological muscle performance characteristics, compared to the traditional robotic arm, offers several noteworthy advantages:Appearance: The prototype is designed to closely mimic the human forearm. Future iterations intend to incorporate artificial skin, further enhancing its resemblance to the human arm both in appearance and structure. Such a humanoid design can foster more intuitive human-robot interactions, reducing the intimidation factor. This is particularly advantageous in settings like healthcare or service sectors where close human-robot collaboration is imperative. A humanoid robotic arm is less likely to be perceived as an unfamiliar entity, promoting wider social acceptance, especially in communal areas. 
Emulating the human arm not only draws from the biomechanics and movement strategies of humans, informing robot design and control, but also ensures the robot is aptly equipped for tasks designed with human ergonomics in consideration—ranging from door operation to tool usage.Compactness: The biomimetic forearm structure, in which the radius rotates around the ulna, provides a compact design. With a forearm circumference not exceeding 20 cm, it accommodates over 12 linear actuators for the hand and wrist, each capable of outputting 50 N of force, ensuring dexterity and substantial hand joint output torque.Safety during Human-Robot Interaction: The system, hinged and fixed by soft tissues including MCL, LCL, and annular ligament, resembles a biological body's tension-compression system, exhibiting passive damping and flexibility when subjected to external forces. This feature greatly improves safety, as limited external forces can be absorbed by the soft tissues. In cases of excessive external force, the joint can dislocate and recover independently. For irreversible dislocations caused by extreme external forces, manual repairs can be performed without replacing any parts, similar to an orthopaedic doctor repairing a dislocated human joint.Output Torque: The design achieved a large output torque when compared with the biological joints. Pronation output torque is twice that of biological joint, and supination achieves 85.7% of its output torque. The entire elbow and forearm have a payload capacity of 4 kg (The testing was confined to a load of 3kg to preclude any damage to the prototype, foregoing trials under a 4kg load, a limit established through conservative estimation), higher than most comparable robotic arms (the total weight of the robotic arm is 4 kg, including the shoulder joint), which is listed in Table. <ref>.Compared to existing highly biomimetic robotic arms, the proposed design optimize the Load capacity. While conventional robotic arms using hinge joints easily achieve load ability, biomimetic designs with biological joints, such as ECCE<cit.> and Roboy robot<cit.>, can become unstable when the forearm experiences lateral loads. The inclusion of soft tissues in this design achieves lateral stability akin to hinge joints, resulting in an enhanced load-carrying capacity.The list of videos for testing the proposed robotic elbow and forearm and demonstrating the capabilities of the robotic arm is provided in Table <ref>. The supplementary video is accessible via the following link: <https://youtu.be/kpVZIUf0f5w>.§ CONCLUSION In conclusion, this study has developed and validated a novel robotic Elbow-and-Forearm system inspired by the biomechanics of the human skeletal and ligament systems. The research began with a comprehensive investigation of human joint anatomy, highlighting the importance of soft tissues in achieving a balance between compactness, stability, and range of motion. Based on this understanding, a prototype design was proposed, incorporating key soft tissues such as medial collateral ligament, lateral collateral ligament, triangular fibrocartilage complex, annular ligament, and interosseous membrane.A theoretical analysis of the role of soft tissues in joint stability was conducted, followed by the fabrication of a physical prototype. Through a series of experiments, the proposed skeletal model's resistance to lateral forces and the contribution of soft tissues to stability were assessed. 
The range of motion and load-carrying capacity of the robotic forearm and elbow were also evaluated, demonstrating the effectiveness of the prototype in replicating human joint capabilities. Experimental results showed that the range of motion achieved by the robotic forearm and elbow was comparable to human capabilities, and the prototype's ability to lift different dumbbell weights showcased its load-carrying capacity without dislocation or significant displacement. This research not only contributes to a better understanding of human arm biomechanics but also advances the development of more sophisticated robotic prosthetics and exoskeletons. The findings have the potential to pave the way for further innovation in the field.
"authors": [
"Haosen Yang",
"Guowu Wei",
"Lei Ren"
],
"categories": [
"cs.RO",
"cs.SY",
"eess.SY"
],
"primary_category": "cs.RO",
"published": "20231027174222",
"title": "Enhancing the Performance of a Biomimetic Robotic Elbow-and-Forearm System Through Bionics-Inspired Optimization"
} |
§ INTRODUCTION

In the intricate landscape of modern healthcare, medical image classification emerges as a pivotal task, driving crucial decisions in diagnosis, treatment planning, and patient management. This process involves the systematic categorization of various types of medical imagery—including X-rays, CT scans, MRIs, and ultrasound—into distinct classes that assist healthcare professionals in identifying anomalies, understanding physiological phenomena, and detecting diseases at early stages. The reliability and precision of image classification are paramount, given that these determinations form the bedrock upon which medical practitioners build their diagnostic and therapeutic strategies, directly impacting patient outcomes. With an increasing influx of complex imaging data and a growing need for rapid, accurate interpretation, the medical sector faces significant pressure to evolve beyond traditional analysis methods, necessitating innovative solutions that enhance the efficiency and accuracy of image classification.

The advent of large foundation models in artificial intelligence has ushered in a transformative era of computational capabilities. These models, characterized by their extensive scale, diverse training datasets, and impressive adaptability, have demonstrated profound impacts across various domains. Within the realm of medical image classification, there is burgeoning curiosity around the potential applicability and benefits of these formidable tools. Traditional approaches, reliant on Convolutional Neural Network (CNN)-based architectures such as VGG <cit.>, Inception <cit.>, ResNet <cit.>, and DenseNet <cit.>, have achieved noteworthy success in image categorization tasks <cit.>. However, these methods often require vast amounts of labeled data and substantial computational resources, besides lacking the intuitive adaptability inherent in human cognition. Beyond training a neural network end to end, transfer learning and self-supervised learning techniques are also employed in the field of medical image classification to improve efficiency and performance <cit.>. But they, too, are limited in predictive capability and in few- or zero-shot learning ability <cit.>.
Recently, large foundation models, with their sophisticated understanding of nuanced patterns, offer a promising alternative, hypothesized to enhance the precision and context-awareness in classifying medical images, provided they can be effectively adapted to understand and interpret complex visual medical data.This study ventures into the novel application of in-context learning strategies with GPT-4V, a derivative of the generative pre-trained transformer models, specifically oriented towards visual tasks. In-context learning allows the model to utilize prompts—minimal yet specific pieces of information or instructions—to guide its responses in performing a particular task, relying on its vast pre-trained knowledge base rather than traditional task-specific training. By harnessing this approach, we aim to tailor GPT-4V’s capabilities to interpret and classify medical images, an endeavor scarcely explored in existing literature. Our methodology involves the meticulous design of context-rich prompts that facilitate the model's understanding of medical imaging classifications. The preliminary results are striking, showing that our adapted GPT-4V model, when equipped with well-crafted prompts, can achieve classification accuracy comparable to established baseline models. This finding not only underscores the versatility of large foundation models in medical applications but also heralds a potentially more resource-efficient and adaptable future for medical image analysis.§ RELATED WORK In-context Learning: In-context learning (ICL) <cit.> is a paradigm that has recently gained prominence, particularly in the realm of LLMs <cit.>. This approach provides an efficient way for pre-trained models to understand and execute a task using demonstrations, e.g., input-output pairs, without resorting to extensive fine-tuning or retraining on task-specific data. The effectiveness of ICL is rooted in a phenomenon known as "few-shot learning." In traditional machine learning, models often require substantial amounts of labeled data specific to each task <cit.>. However, in few-shot scenarios <cit.>, a model uses only a minimal number of examples to understand a task. This process is akin to the way humans often learn—by relating new information to existing knowledge. In-context learning takes this a step further, often operating in "zero-shot" or "one-shot" contexts, where the model is either not provided with any task-specific examples or just a single one, respectively.In medical image classification, the application of ICL is still nascent. The potential of large language models like GPT-3 or GPT-4 to generalize learning from textual contexts to more complex multi-modal tasks, including image classification, offers promising avenues for exploration. The key lies in the careful design of prompts that succinctly yet comprehensively convey the task rules and criteria, enabling the model to apply its pre-existing knowledge effectively across domains.Medical Image Classification: Historically, before the rise of deep learning methodologies, the domain of medical image analysis was dominated by manual processes and intricate feature extraction techniques. These handcrafted features, exemplified by histograms of oriented gradients (HOG) <cit.> and local binary patterns (LBP) <cit.>, offered a structured but somewhat constrained approach to image interpretation. 
However, the deep learning revolution that began around 2006 introduced a transformative era, significantly altering the landscape of medical image classification. Modern models, trained on vast datasets, are adept at recognizing patterns and anomalies within medical imagery, such as X-rays, MRIs, and CT scans. The primary emphasis has shifted towards enhancing accuracy, improving interpretability, and sharpening the ability to detect subtle indicators of medical conditions. This facilitates healthcare professionals in making more informed decisions. For instance, in the classification of retinal fundus images <cit.>, the use of strategies like data augmentation and transfer learning, especially with renowned pre-trained models like VGG19, has notably improved the detection accuracy of fundus diseases. Another pivotal task has been the detection of COVID-19 from chest X-rays. Recent advancements, which leverage the feature fusion of DenseNet and VGG, have demonstrated superior performance in detecting COVID-19 patients <cit.>. § OUR METHOD Our method's intuition lies in the meticulous crafting of prompts that serve as input for the advanced GPT-4V model, guiding it to make accurate medical image classifications. By employing in-context learning, we harness the model's expansive pre-existing knowledge base, prompting it with both text and images for robust task performance.The experiment contrasts two central methodologies. The baseline involves traditional medical image analysis using ResNet models. Our approaches start with the naive zero-shot prompt on GPT4V, then enhance the naive prompt with more sophisticated approaches, meticulously structured to "complete the story" that the image tells.In our experiments, we primarily utilized three categories of prompts to engage GPT-4V in the classification tasks:* A straightforward method, wherein GPT-4V is directly instructed to classify the presented images, representing the naive approach.* An in-context learning strategy, which involves providing GPT-4V with multiple labeled examples to facilitate guided learning.* A more nuanced in-context learning process that incorporates reasoning, requiring us to elucidate the relationships between the images and their corresponding labels for GPT-4V. The specific prompts employed for these methodologies are detailed subsequently:Naive approach. Zero shot inference on GPT4V:Upload an image on GPT4V.Prompt: “Instruction: classify the image into two classes class1, class2. Please first output one line for the label of the image. In the subsequent line, please provide a comprehensive explanation of your classification." In-context learning 1 (ICL1).Upload three images separately on GPT4V, two for in-context learning and one for predicting.Prompt: “Instruction: classify the images into two classes class1, class2Example: the label of the above images:Image 1: class1Image 2: class2Please first output one line for the label of image 3. In the subsequent line, please provide a comprehensive explanation of your classification."Drawback: the attention of GPT4V may concentrate on certain image and lead to biased results. 
Thus, in the next ICL prompt we will try to put all images in one figure. In-context learning 2 (ICL2). Combining three images in one figure and uploading it on GPT4V, two for in-context learning and one for predicting. Prompt: “Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class2 Please first output one line for the label of image 3. In the subsequent line, please provide a comprehensive explanation of your classification." In-context learning 3 (ICL3). Combining three images in one figure, two for in-context learning and one for predicting. Upload 3 combined figures (batch size = 3) on GPT4V. Prompt: “Instruction: classify the images into two classes for each group class1, class2, generate 4 results. Example: the label of the above images: Image 1: class1 Image 2: class2 Please first output one line for the label of image 3. In the subsequent line, please provide a comprehensive explanation of your classification." In-context learning 4 (ICL4). Combining nine images in one figure and uploading it on GPT4V, six of them for in-context learning and the remaining three for predicting. Prompt: “Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class1 Image 3: class1 Image 4: class2 Image 5: class2 Image 6: class2 Please first output one line for the label of image 7, image 8 and image 9. In the subsequent line, please provide a comprehensive explanation of your classification." In-context learning with reasoning 1 (ICL-R1). Upload three images separately on GPT4V, two for in-context learning and one for predicting. Prompt: “Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class2 Explanation: In image 1 we can observe ..., but in image 2 we don't have such observation. Thus we classified image 1 as class1 and image 2 as class2. Please provide the classification of Image 3 in one line, taking into account the observed patterns in Image 3. Following that, offer a detailed explanation step-by-step." In-context learning with reasoning 2 (ICL-R2). Combining nine images in one figure and uploading it on GPT4V, six of them for in-context learning and the remaining three for predicting. Prompt: “Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class1 Image 3: class1 Image 4: class2 Image 5: class2 Image 6: class2 Explanation: In images 1-3 we can observe ..., but in images 4-6 we don't have such observation. Please first output one line for the label of image 7, image 8 and image 9. In the subsequent line, please provide a comprehensive explanation of your classification." The crux of our method lies in this nuanced interaction with the model. Our results elucidate that the strategic construction of prompts—capitalizing on the model's inherent language and reasoning capabilities—enables GPT-4V to perform on par with established medical image classification benchmarks. This finding not only underscores the versatility of large language models but also heralds a potentially transformative application in the medical imaging domain.
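To make the prompt construction concrete, the sketch below assembles an ICL-style prompt of the kind listed above from a set of labeled example images. It is a minimal illustration only: the helper name build_icl_prompt is our own choice, and the call that would actually submit the combined figure and prompt text to GPT-4V is deliberately left out rather than tied to any particular API.

def build_icl_prompt(example_labels, query_ids, class_names=("class1", "class2")):
    """Assemble an in-context-learning prompt in the style of ICL2/ICL4 above.

    example_labels: list of (image_index, class_name) pairs shown as examples.
    query_ids:      indices of the images whose labels GPT-4V should predict.
    """
    lines = [f"Instruction: classify the images into two classes "
             f"{class_names[0]}, {class_names[1]}"]
    lines.append("Example: the label of the above images:")
    for idx, label in example_labels:
        lines.append(f"Image {idx}: {label}")
    queries = ", ".join(f"image {i}" for i in query_ids)
    lines.append(f"Please first output one line for the label of {queries}. "
                 "In the subsequent line, please provide a comprehensive "
                 "explanation of your classification.")
    return "\n".join(lines)

# ICL4-style prompt: six labeled examples, three images to classify.
prompt = build_icl_prompt(
    example_labels=[(1, "class1"), (2, "class1"), (3, "class1"),
                    (4, "class2"), (5, "class2"), (6, "class2")],
    query_ids=[7, 8, 9],
)
# `prompt` would then be submitted together with the combined figure to GPT-4V.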
§ EXPERIMENT In the experiment, we test our proposed prompts on the open-sourced Kaggle COVID-19 lung X-ray dataset. This dataset contains 181 training examples, 111 of which are COVID cases and the rest normal cases. There are a total of 46 examples in the test set, of which 26 are COVID cases and the rest are normal cases. Baseline Settings. We construct baselines using Convolutional Neural Network based backbones to demonstrate the effectiveness of our method in the few-shot learning setting. Following previous works <cit.>, we use ResNet-18 (RN-18) <cit.> and VGG-16 <cit.> pre-trained on ImageNet-1k <cit.> as our image classifiers. The output dimension of the final fully-connected layer is set to 2 to fit the binary classification task. During training, we optimize the model using SGD for 20 epochs with a batch size of 2, and decrease the learning rate by a factor of 5 after epochs 10 and 15, respectively. For better convergence, the initial learning rate is set to 0.1 for ResNet-18 and 0.001 for VGG-16. We also apply simple augmentation techniques of random rotation and center cropping to improve model robustness. For the few-shot setting, we randomly select 6 images (3 COVID, 3 normal) for training. We repeat the experiments 5 times with different random seeds and report the average result. Result analysis. Consolidating all images into a single figure has demonstrated enhanced performance compared to uploading them individually. This improvement could potentially be attributed to the focused attention mechanism of GPT-4V, which, when presented with separate images, might concentrate disproportionately on specific images, consequently leading to biased outcomes. GPT-4V exhibits superior performance compared to the few-shot baseline when provided with an equivalent number of training instances. However, it does not yet match the efficacy of the comprehensive baseline model that benefits from training on the complete set of examples. This indicates that while GPT-4V's adaptability is promising, certain optimizations might be necessary to fully realize its learning potential. Contrary to expectations, supplementing the GPT-4V prompts with reasons underlying the classifications does not yield an improvement in results. This may be due to a misalignment between the provided reasoning and the model's processing capabilities, suggesting that the reasons integrated into the prompts were either not appropriately formulated for GPT-4V's comprehension or that the model currently lacks the capacity to incorporate such reasoning effectively into its decision-making process. Further investigations are needed to uncover the intricacies of this observation. § CONCLUSION In conclusion, we take the first step into the application of GPT-4V for medical image classification. By employing in-context learning, this study circumvented traditional limitations associated with deep learning models, particularly the necessity for extensive, task-specific training and vast labeled datasets. The tailored prompts guided GPT-4V to effectively interpret and analyze medical images, achieving a level of accuracy on par with conventional methods. This finding underscores the versatile potential of large foundation models in medical diagnostics and opens the door to further innovations that could reshape the landscape of healthcare, making it more intuitive, accessible, and reliable. Beyond just technical implications, the success of this approach advocates for a future where AI's role extends from being a mere tool to an adaptable ally, capable of navigating the nuanced and critical terrains of patient care. | http://arxiv.org/abs/2310.18498v1 | {
"authors": [
"Ruibo Chen",
"Tianyi Xiong",
"Yihan Wu",
"Guodong Liu",
"Zhengmian Hu",
"Lichang Chen",
"Yanshuo Chen",
"Chenxi Liu",
"Heng Huang"
],
"categories": [
"eess.IV",
"cs.CV",
"cs.LG"
],
"primary_category": "eess.IV",
"published": "20231027212836",
"title": "GPT-4 Vision on Medical Image Classification -- A Case Study on COVID-19 Dataset"
} |
Gravity Exploration Institute, School of Physics and Astronomy, Cardiff University, Cardiff, UK, CF24 3AA, UK Albert-Einstein-Institut, Max-Planck-Institut for Gravitationsphysik, D-30167 Hannover, Germany Leibniz Universitat Hannover, D-30167 Hannover, Germany Department of Physics, University of Milano - Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy National Institute of Nuclear Physics INFN, Milano - Bicocca, Piazza della Scienza 3, 20126 Milano, Italy Dipartimento di Fisica, Universitá di Roma `La Sapienza', P.le Aldo Moro 2, I-00185 Roma, Italy INAF-Osservatorio Astronomico di Roma, via di Frascati 33, I-00078 Monteporzio Catone, Italy INFN, Sezione di Roma I, P.le Aldo Moro 2, I-00185 Roma, Italy Department of Physics, University of Milano - Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy National Institute of Nuclear Physics INFN, Milano - Bicocca, Piazza della Scienza 3, 20126 Milano, Italy INAF-Osservatorio Astronomico di Roma, via di Frascati 33, I-00078 Monteporzio Catone, Italy INFN, Sezione di Roma I, P.le Aldo Moro 2, I-00185 Roma, Italy INAF-Osservatorio Astronomico di Roma, via di Frascati 33, I-00078 Monteporzio Catone, Italy INFN, Sezione di Roma I, P.le Aldo Moro 2, I-00185 Roma, Italy We investigate the detectability of single-event coalescing black hole binaries with total mass of 100-600 at cosmological distances (5 ≲ z ≲ 20) with the next generation of terrestrial gravitational wave observatories, specifically Einstein Telescope and Cosmic Explorer. Our ability to observe these binaries is limited by the low-frequency performance of the detectors. Higher-order multipoles of the gravitational wave signal are observable in these systems, and detection of such multipoles serves to both extend the mass range over which black hole binaries are observable and improve the recovery of their individual masses and redshift. For high-redshift systems of ∼ 200 we will be able to confidently infer that the redshift is at least z=12, and for systems of ∼ 400 we can infer a minimum redshift of at least z=8. We discuss the impact that these observations will have in narrowing uncertainties on the existence of the pair-instability mass-gap, and their implications on the formation of the first stellar black holes that could be seeds for the growth of supermassive black holes powering high-z quasars. § INTRODUCTION In the three observing runs by the LVK Collaboration, many tens of GW transient signals consistent with the merger of BBH have been detected <cit.>. The majority of them, observed at z≲ 1, are found to have one or both component masses between 20 and 50 and total mass ≲ 80.[All masses in this paper are referred to as measured in the source-frame.] A variety of channels have been proposed for their origin: formation as field binaries primarily in low-metallicity galaxies at high redshifts, formation in dense stellar systems, in AGN discs, or after generations of repeated mergers <cit.>. 
The next-generation of ground-based GW observatories, specifically the Einstein Telescope<cit.> and Cosmic Explorer <cit.>,will open the prospect of detecting the GW signatures of merging BBH over a wider mass range and deeper redshifts, extending the realm of observations to BBHout to z∼ 30, when the first stars began to shine, and into the intermediate-mass range O(100-1000) <cit.>.Beyondredshift z∼ 30-40, merging primordial black holes of O(10), formed by quantum processes in the early Universe <cit.>, may also be detected and studied <cit.>.In this paper we study systems with individual masses extending from 60 to 480, and totalmasses of 180 to 600, covering a mass range that is relevant for several reasons, as we explore below. Measuring such systems is most interesting at cosmological distances, which is only possible due to the enhanced sensitivity of ET and CE at frequencies below 10 Hz.The formation of such heavy stellar-mass BH requires the presence ofstar forming regions of zero or extremely low metallicity,where fragmentation and cooling of the parent gas cloud, and mass loss from stellar winds are strongly suppressed <cit.>. These are conditions that occur in the high-redshift Universe, and are expected to result in a top-heavy mass function where stars heavier than 150-300 are more common than in the conventionally adopted,Kroupa-like,stellar IMF (seefor a recent review on the first stars). At the highest redshifts, these heavy BHs may represent systems not yet affected by accretion of surrounding gas <cit.>, and hence their masses reflect their birth conditions.Detecting GW from these heavy stellar-mass binarieswill let us constrain their merger rate which is intimately related to the rate of formation of massive stellar binary systems in pristine star forming galaxies <cit.>.Some of the BBH masses we investigate reside within the so called pair-instability mass-gap (often referred to as upper-mass gap or Pair Instability Supernova (PISN) gap. This gap is between about 65 and 135where no BH is expected to form in evolution models of isolated stars. This mass-gap is attributed to a pair-instability, arising in metal poor, massive stars betweenabout 130 and 250, which leads to a supernova explosion due to uncontrolled^12 C(α,γ)^16 O nuclear burning, leaving no remnant <cit.>.During the third observing run of LVK a short duration signal, GW190521, was detected and estimated to be consistent with the merger of two BH with component masses of about 85 and 66 and total mass of 142 <cit.>. This is the heaviest BH observed in GWs to date, with an intermediate-mass remnant and a primary component residing within the pair-instability mass-gap.[Alternative analyses, <cit.> find that GW190521 could instead be ∼170 black hole with a companion of ∼20, suggesting that the primary is already an intermediate-mass BH, with a mass beyond the mass gap <cit.>.] Detecting the GW signal from high-redshift heavy stellar BBH mergers, where one or both components are in the upper-mass gap or straddling it, would be highly informative. Various mechanisms could lead to the formation and coalescence of suchbinaries, and among them, evolution in isolated binaries, dynamical encounters in star clusters or a chain of Nth generation mergers.[There are several proposed channels for the origin of the components of GW190521-like systems observed at low redshift. 
For instance: mergers from the relics of the first stars (known as Population III stars) <cit.>, isolated binary evolution with no-gap <cit.>, stellar collisions in star clusters <cit.>, andhierarchical mergers <cit.>.] But at redshifts as high as ∼ 10 the contributions from the two dynamical channels appear to be negligible <cit.>. Consequently, observing these systems at high z would allow us to better probe the physics of the isolated binary channel, the potential existence of an upper-mass gap and its imprint on the mass function of the earliest stellar BHs.Estimates of the location and width of the upper-mass gap are at best approximate. Current uncertainties on the reaction rate, on rotation, and on the presence of rich hydrogen envelopes may shift the instability interval for the explosion to higher masses, and narrow further the gap<cit.> or even fill it entirely <cit.>.Testing the existence of this upper-mass gap and inferring it properties from GW observations depends critically upon the accuracy with which the masses of the individual BHare measured from the merger signal. Here and for this purpose we carry out a parameter estimation on BBH with component masses which touch the edges of the upper-mass gap, recognizing that all of the above arguments become compelling if the redshift of the observed systems is z≳ 10.Determining the lowest redshift one can claim the source to be beyond, and inferring posteriors for the distribution of the component masses is of paramount importance <cit.>.A key challenge, then, is to accurately infer both the masses and redshift of the binary.There is a well-known degeneracy between measurements of the distance to and inclination of a binary from GW observations <cit.>.At high redshifts, this degeneracy further impacts our ability to infer the masses of the binary. In GW observations, masses and redshift are degenerate, and only the redshifted masses, m_1, 2 (1 + z), can be inferred from the signal.Given a cosmological model, the measured distance can be used to obtain the redshift and hence the source masses.However, if the distance is poorly constrained, this leads to significant uncertainties on the mass. For example, it is not unusual to have an uncertainty of ∼ 50% in the distance measurement of BBH signals <cit.>.At a z=10 this translates to a redshift uncertainty of ± 4 and consequently an uncertainty in the masses of 40% due to redshift effects alone.The ability to accurately infer redshifts and masses is improved by a detector network, which can provide more accurate localization and distance measurements <cit.>, as well as the observation of HoM in the GW signal which help break the distance-inclination degeneracy <cit.>.The paper is organized as follows.In Section <ref>, we discuss the observability of high-mass, high-redshift binaries with a focus on the HoM.In Section <ref>, we provide detailed parameter estimation results for a number of astrophysically interesting simulated BBH merger signals and in Section <ref> we summarize our results.We include two appendices. Appendix <ref> provides additional figures showing detector sensitivity for binaries of varying mass ratio and Appendix <ref> gives parameter estimation accuracy for low SNR systems. § THE IMPORTANCE OF HIGHER ORDER MULTIPOLESHigh-mass, high-z BBH coalesencesare intrinsically low-frequency GW sources. 
This is illustrated in Fig.<ref>, where we show the frequency evolution of the GW strain amplitude for a BBH of (120-60) placed at redshift z=14, with an inclination ι of 60 between the orbital angular momentum and theofline of sight.The gravitational waveform for this signal only extends to 15 Hz and is therefore outside the sensitive frequency range of current GW observatories.The leading GW emission from the source, emitted in the (2, 2) multipole at twice the orbital frequency, extends to only 7 Hz in the detector, making discovery challenging.[We recall that for all BBH observed to date, the (2, 2) multipole, which is emitted at twice the orbital frequency, has been the dominant multipole detected in the GW signal.Additional multipoles of the GW signal have been observed for a handful of events <cit.> but, as their amplitudes are lower, they are generally not identified for the majority of sources.] Although the (3, 3) and (4, 4) multipoles are intrinsically lower amplitude, they extend to higher frequencies (∼1.5 and 2 times the frequency of the (2, 2)) and can therefore contribute significantly to the observed SNR. This improves the prospects of detecting such a system. Furthermore, the identification of these higher-order (higher-frequency) multipoles in the signal can significantly improve the ability to infer the parameters of the system, as they enable us to break measurement degeneracies that exist with observation of only a single multipole. There are several well-known degeneracies in the emitted gravitational-waveform, leading to some parameters being very well measured while others being not.For our purposes, we are most concerned with a degeneracy between the observed distance toand inclination of a binary, as discussed in <cit.>.When onlythe (2, 2) multipole is observed, the amplitude gives a good measurement of cosι/d_L where ι is the binary inclination and d_L is the luminosity distance. However, in many cases, the binary inclination is only restricted to the range ι∈ [0, 60], leading to a factor of two uncertainty in distance due to this degeneracy alone.When the binary is observed at a high redshift, the measurement of the masses also becomes degenerate withdistance and inclination, and a factor of two uncertainty in distance can lead to a similar uncertainty on the masses. The observation of a second GW multipole can serve to break this degeneracy <cit.> as the relative amplitude of the different multipoles depends upon the orientation of the binary[The ratio of the amplitude of the (3, 3) multipole to the (2, 2) scales as sinι while the (4, 4) multipole scales as sin^2 ι relative to the (2, 2).].In Fig. <ref>, we show the variation of the SNR with binary mass ratio q = m_1/m_2 (assuming an inclination ι = 60^∘) and inclination (assuming q=2) in each of the multipoles for a binary of total mass of 180 at z = 14 observed by ET. 
The SNR of the (2, 2) multipole is greatest for face-on signals (ι = 0^∘) with equal mass components (q = 1).For a face-on signal, the (2, 2) multipole is circularly polarized and, as the inclination increases, the amplitude of both polarizations decreases to a minimum for edge-on systems, ι = 90^∘, whose emission is linearly polarized.For the other multipoles considered, the SNR vanishes at face-on and peaks at ∼ 50 for the (3, 3) multipole and ∼60 for the (4, 4) multipole.The binary would be observable in ET at any orientation.For inclinations ι≳ 10 or 30 the (3, 3) and (4, 4) multipoles would be identifiable, respectively.Since this waveform lasts only a few cycles in the detector band, the contributions from the different multipoles are not orthogonal. Consequently, the total SNR varies with the merger phase of the binary.The SNR of each different multipole, and the full signal, also varies with mass ratio.The (2, 2) multipole is largest for equal mass systems and decreases by a factor of two by mass ratio q=5, while the (3, 3) vanishes for equal mass and peaks around q=3.For this signal, the SNR in the (4, 4) multipole does not vary significantly with mass ratio.The (2, 2) and (4, 4) multipoles would be identifiable at any mass ratio, and the (3, 3) for binaries with mass ratio above ∼1.2.Identification of more than one multipole enables an improved measurement of mass ratio, as well as binary orientation.In Fig. <ref>, we show the same dependence of SNR with inclination and mass ratio for the CE detector.Since CE has sensitivity to the signal above 5 Hz, rather than 3 Hz for ET, the overall SNR is lower and the signal would be marginally observable. Furthermore, a broad range both in inclination and mass ratio, where the (3,3) multiple gives the dominant contribution to the SNR, becomes accessible. This provides a clear example of a signal where the HoM enable detection as well improved parameter recovery.Given the above discussion, we are interested in identifying the regions of the mass space where HoM can contribute to either the observability or parameter measurement accuracy of high-mass, high-redshift binaries. In Fig. <ref> we show the sensitivity of the proposed ET and CE observatories to BBH mergers with mass ratio of 2 as a function of redshift. We show the maximum redshift at which a binary can be observed, at an SNR of 8, and also the redshifts at which 10%, 50% and 90% of binaries, averaged over sky location and orientation, will be observed.The detector sensitivity is shown for both the (2, 2) multipole, in orange, and the full waveform, in blue. At low masses, the (2, 2) multipole dominates the observable signal and therefore the distance to which the full waveform can be observed is essentially equal to that of the (2, 2) multipole.However, at high masses, the (3, 3) and (4, 4) multipoles contribute more significantly and incorporating them increases the sensitivity of the detectors to these systems. When a system has been observed, the identification of a second multipole, at SNR above 3, can greatly improve parameter recovery by breaking degeneracies between distance and inclination and improving mass ratio measurement. The range of masses and redshifts for which the binary would be observed with SNR above 8, and with SNR above 3 in at least two multipoles, is shown in black in Fig. <ref>. 
For example, in ET a 4,000 system is visible at z≈ 1 with the (2, 2) multipole but up to z ≈ 2 with the full waveform.Remarkably, for the majority of binaries with M ≳ 100 observed by ET, and M ≳ 30 observed by CE, a second multipole will be observable.At lower masses, it is the (2, 2) and (3, 3) multipoles which contain most power, while at high masses it is the (3, 3) and (4, 4) multipoles that are observed, with the (2, 2) multipole power occurring at frequencies below the instrumental sensitivity.The picture is similar at different mass ratios, and figures showing the sensitivity to binaries with q = 1, 2, 4 and 10 are provided in Appendix <ref>, for ET in Fig. <ref> and CE in Fig. <ref>.The most significant difference occurs for equal mass binaries, where the (3, 3) multipole vanishes and we therefore require both (2, 2) and (4, 4) multipoles to be observable.This limits the range for which two multipoles can be seen and increases the minimum mass at which we expect to observe two multipoles to ∼ 200 in ET and ∼50 for CE.Nonetheless, for the majority of high-mass, high-redshift binaries, we expect to observe multipole multipoles, and therefore obtain good estimates of both the masses and redshift of the system.In the next section, we investigate those expectations in detail through parameter recovery of a series of systems. § PARAMETER RECOVERY FOR HIGH-MASS, HIGH-REDSHIFT BINARIESObservation in a single GW observatory leads to large uncertainties in the sky location of the binary <cit.>, and this is again degenerate with the inferred distance and redshift.A network of detectors with comparable sensitivity can provide accurate localization <cit.> and therefore improved redshift and mass accuracy. Binaries with black hole spins misaligned with the orbital angular momentum will precess.In principle, the observation of precession can further improve parameter estimates. However, given that so few cycles of the waveform are visible in the detectors, the prospects for observing precession are slim <cit.>. Therefore, in what follows we neglect precession effects.To illustrate the expected performance of a next-generation GW network in observing and measuring these binaries, we perform a number of simulations and obtain parameter estimates with the LALInference <cit.> package and a uniform in comoving volume distance prior. We simulate four different binary mass combinations, denoting (in the source frame) with m_1 (m_2) the primary (secondary) mass and with M the total mass. We consider (120, 60) and (90, 90) binaries, chosen so that component BH lie in, or close to the upper mass-gap, and (240, 120) and (480, 120) binaries chosen to probe observability of high-redshift IMBH in binaries.In all cases, we simulate quasi-circular non-spinning BBH, but allow for non-zero, aligned spins when performing parameter estimation[The restriction to non-spinning BBH is solely to simplify presentation — all results presented here could be easily extended to aligned-spin BH.]. 
This is important as the degeneracy between the binary mass ratio and BH spins <cit.> greatly impacts the accuracy with which mass ratio can be measured.The simulated signals are added to data from a three-detector network of observatories with sensitivity matching ET <cit.> and CE <cit.>.Specifically, we use a single, triangular ET detector located in Europe and two 40 km CE observatories, one in the US and one in India.The simulations are performed at the optimal sky location for the network.Given the greater low-frequency sensitivity of ET, this leads to the binaries being essentially overhead ET.The signals are generated at varying inclination angle, to enable us to investigate the importance of HoM.We choose the redshift of the sources to ensure a fixed SNR for all signals.In the main text, we use an SNR of 30, while in Appendix <ref> we investigate quieter signals with an SNR of 15. §.§ Observing mass-gap objectsA mass gap in the BH mass distribution is expected due to the presence of the PISN. <cit.> investigated the location of this pair-instability region as a function of the temperature-dependent uncertainty in the ^12 C(α,γ)^16O reaction rate. Determining the value of the ^12 C(α,γ)^16O reaction rate is extremely important for tracing the evolution of massive stars. Thus, restricting this rate through GW observations would be of considerable astrophysical interest. According to <cit.>, the width of the mass-gap remains roughly constant as a function of the (unknown) reaction rate, but the mass-range where no black hole can form varies.At the lowest rate relative to the median, the mass-gap extends from ∼ 90 to ∼ 175. At the highest rate, the location of the mass-gap is between ∼ 60 and ∼ 120.Interestingly, there exists a region of BH masses between 90 and 120 where we should not expect any black hole to form, for any rate. We refer to this region as the forbidden strip. Consequently, we choose to investigate systems which host at least one member with mass touching this narrow strip. Then, if their masses were to be determined with sufficient accuracy, their detection could constrain the ^12 C(α,γ)^16O reaction rate to be at the extreme of the allowed range <cit.>.In particular, we focus on (120,60) and (90,90) binaries which have components at the lower and upper range of the forbidden strip.As seen in Fig. <ref>, BBH with masses (120,60) will be detectable at a maximum redshift of z∼ 25, for an optimally located and oriented system, with 50% of mergers at z ∼17 and the vast majority of events at z ∼ 10 being detectable. The sensitivity to (90, 90) systems is comparable. If (90,90) systems were to be observed, this will allow us to constrain the strength of the uncertain ^12 C(α,γ)^16O reaction rate. In particular, black holes with such masses would imply the rate to be at the lower end of the explored range. A binary with mass (120,60) would be challenging to form through stellar evolution.Specifically, allowing for variation in the reaction rate, the mass of the primary would require a very high reaction rate for ^12 C(α,γ)^16O, while the mass of the secondary would be compatible with a value below the median. Therefore this would be a system where only one of the two black holes could originate from stellar evolution and the other would require a different formation channel.In Fig. 
<ref> and <ref>, we show the recovered parameter accuracies for both mass and redshift for (90, 90) and (120, 60) binaries observed with SNR = 30.The first thing to note is that these high-mass, high-redshift systems could be identified with good accuracy by the next-generation GW network, as would be expected due to the relatively large SNR.For all events, there is, at most, a factor of two uncertainty in the mass of the systems and a 50% uncertainty in the redshift, with both numbers quoted at the 90% confidence interval.However, we also notice a substantial variation in the accuracy of parameter measurement between the systems.The parameters of systems close to face-on (ι = 0 or 30) are recovered with significantly larger uncertainties than those which are more highly inclined (ι = 60 or 90).When the binary is close to face-on, the uncertainty inthe component mass posterior is much greater and, for ι = 0, the 90% mass region for the (120,60)binary includes equal masses.As the inclination of the binary is increased, parameter accuracy improves significantly: already at ι = 30 the posterior for the (120,60)binary is inconsistent with equal masses, although large uncertainty in the mass ratio remains.For binaries inclined at 60 or 90 the parameter accuracy is excellent.In both cases, the mass ratio is very well constrained and uncertainty in total mass and redshift is ± (10-20)%. We can explain why the results of our parameter accuracy display the features they do, using our understanding of the importance of HoM discussed in Section <ref>.Let us begin by considering the (120, 60) system at z=14, inclined at ι=60, that we have plotted in Figs. <ref>, <ref> and <ref>.The signal is observed with high SNR in ET and the (3, 3) multipole is also clearly observed.In CE, the total SNR of the system is sufficient for it to be observed, with the (3, 3) multipole providing the dominant contribution.Since the event is observed in three detectors, it is relatively well localized,[Since these are very low-frequency systems, the localization is poorer than for events in GWTC-3 <cit.> as the localization depends upon the frequency bandwidth of the signal <cit.>.] with a 90% localization area of 300^2. The observation of HoM in the waveform enables the accurate inference of both the binary inclination and mass ratio.Since the sky location, mass ratio and inclination are well measured, this enables accurate inference of the distance to the binary and consequently the redshift of the source.In the top panel of Fig. <ref> we show the recovered values for the redshift, binary inclination and mass ratio.All three are recovered with an accuracy better than 10%.Next, we consider a comparable system observed face-on (ι = 0) at the same SNR, and at a redshift of 17.In that case, the power in both the (3, 3) and (4, 4) multipoles vanishes.Consequently, the binary is no longer observable in CE as it has an SNR of 1.[The waveform shown in Fig. <ref> corresponds to a binary at z=14 and we are now considering the same mass binary at z=17.The larger redshift reduces the SNR primarily through redshifting the signal which lowers the frequency by 20%. This leads to an SNR which is lower than that shown in the figure.] 
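For reference, the step from the directly measured quantities to source-frame masses can be sketched numerically: the luminosity distance fixes the redshift under an assumed cosmology, and the redshifted total mass is then divided by (1 + z). The sketch below uses the Planck 2018 cosmology available in astropy; the 150 Gpc distance and 2700 solar-mass detector-frame total mass are illustrative round numbers for a system like this one, not values drawn from our posteriors.

from astropy import units as u
from astropy.cosmology import Planck18, z_at_value

# Illustrative inputs: a luminosity distance typical of z ~ 14 and a
# detector-frame (redshifted) total mass of order 180 Msun x (1 + z).
d_L = 150 * u.Gpc
m_total_det = 2700.0          # detector-frame total mass [Msun]

z = float(z_at_value(Planck18.luminosity_distance, d_L))
m_total_src = m_total_det / (1.0 + z)   # source-frame total mass [Msun]

print(f"z = {z:.1f}, source-frame total mass = {m_total_src:.0f} Msun")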
At this SNR, the CE observatories are unable to provide localization of the source, with a 90% localization area of 10,000^2.Furthermore, the vanishing HoM means that only the (2, 2) multipole is observed in ET.Therefore, it is not possible to break the degeneracy between binary orientation and distance, nor to place a tight constraint upon the mass ratio.The bottom panel of Fig. <ref> shows a the recovered redshift, mass ratio and inclination for this system.The mass ratio is not accurately recovered and, indeed, the binary is inferred most likely be (close-to) equal mass, although the distribution does extend to 1/q=0.5.In addition, the binary orientation is not accurately recovered, with a broad distribution of ι≲ 25 — more inclined systems are excluded as they would have observable power in the (4, 4) multipole.The mass ratio–inclination distribution does show a secondary peak close to the simulated value (1/q ≈ 0.5 and ι < 10), however, the preference is for an equal mass system. Despite both mass ratio and inclination being offset from the true values, the inferred redshift matches well with the simulated value.However, due to the uncertainties in other parameters, the redshift uncertainty is now close to 25%.The comparisonof parameter accuracy for these two systems highlights the importance of both a network of detectors and also observability of the HoM in accurate inference of binary properties.It is worth noting that our intuition from current GW observations that the majority of sources are close to face-on (or face-off) no longer holds in the next-generation network <cit.>. In the nearby Universe, where sources are approximately uniform in volume, a signal observed with a given SNR is most likely to originate from a distant binary which is close to face-on (or face off) as the number of sources increases as d_L^2.For a high-redshift source, whose redshift is past the peak of the redshift distribution— likely around a redshift of z ≈ 2 at whichstar formation peaks — this is no longer the case.Now, the most likely origin is from a binary which is at lower redshift, where the intrinsic rate is higher, and is either poorly oriented or from a region of the sky where the detectors have lower sensitivity.Thus, the results from sources inclined at 60 and 90 are more typical of the observed population.Let us return to the implications for probing the location of thePISN mass-gap.For both mass pairs, binaries inclined at 60 or 90 are those which provide the best mass measurements.For the (90, 90) system, we have m_1 and m_2 measured in the interval ∈ [70, 100].So, this system is consistent with both components lying below the mass-gap provided the ^12 C(α,γ)^16O rate is low.For the (120, 60) system, we have m_1∈ [100, 140] and m_2∈ [40, 80].The masses are consistent with one above and one below the gap, provided the reaction rate is high.If both signals were observed, this would be inconsistent with our current understanding of the PISN mass gap.To investigate the observability at even higher redshifts, we have simulated a second set of signals, with the same masses and inclinations but with a lower SNR fixed at 15.For these systems, the redshifts range from z ≳ 20 for face-on systems to z ≈ 15 for edge on systems.Broadly, the results are consistent with those in Fig.<ref> and <ref>, but with larger uncertainties due to the lower signal amplitude.In particular, for all but the face-on systems, we are able to clearly identify that the (120, 60) binary is of unequal mass, due to the 
observed power in HoM.For the inclined systems, the uncertainty in total mass and redshift is around a factor of two (from 150 to 300 and z=12 to 25).Thus, while it is possible to identify these systems as unambiguously high-mass and high-redshift sources, the uncertainties in masses and redshifts make it difficult to perform precision astrophysics.For the (90, 90) system, it is only at ι = 60 or 90 that the parameters are well recovered.These results are shown in Fig. <ref> and <ref> in Appendix <ref>.For the face-on systems, we see an interesting feature whereby the binary can be mistaken for a different system with very different properties.Fig. <ref> shows the inferred redshift and redshifted mass –M(1 + z) – distributions for a (120, 60) system at z=21.The primary peak is at z = 21 and M (1 + z) ≈ 4,000 corresponding to the simulated value.However, there is a secondary peak around z=5, with a redshifted mass around 6,000 corresponding to a binary with mass of 1,000.For this system, it is the (3, 3) multipole which is consistent with the simulated signal.This provides another example of the challenges which arise when identifying high-mass, high-redshift binaries.The signal would be observed only in ET, and have only one observable multipole.Not only does this lead to poor parameter recovery, but also the inability to distinguish between a 180 binary at z=21 and a 1,000 binary at z=5.Given the GW data alone, it would not be possible to distinguish between the two scenarios.The relative significance of the two will depend upon astrophysical knowledge of the mass and redshift distributions of BBH.Here, we have used priors which are uniform in comoving volume and component masses.Other choices might lead to different conclusions about the mass and redshift of the binary. In summary, for these two representative sources, and for high SNR we would confidently identify the systems as high-mass BBH at high redshift. These could be potential seeds for the growth of SMBH. The first system of (90,90) would be marginally consistent with being a binary formed at the lower edge of the mass gap, in correspondence of the lowest valueof the ^12 C(α,γ)^16O reaction rate. Inconsistent otherwise.The secondsystem of(120,60)would be consistent with one BH (the lightest) originating from the core-collapse of a massive star (provided the ^12 C(α,γ)^16O rate is low) and the second, in the midst of the pair-instability gap, would have a different origin. For a large fraction of the computed rates, the discovery of the latter systemwould be inconsistent with the explosion scenario implied by the pair instability that would predict no BHs, and therefore a different channel has to be called for, for both components. §.§ Observing intermediate mass black holes in binaries Next, let us consider higher mass systems, containing an IMBH.For concreteness, we consider binaries with masses (240,120) and (480,120) which would be observable up to z∼ 10.While the 120 BH would be at the border of the pair-instability gap, both BH are well above the gap. Their observation would either require a sufficiently high upper-mass end of a Kroupa-like IMF (extending to at least 300) or a top-heavy IMF <cit.>. 
Alternatively the primary BH (particularly the most massive one of 480) could have had time to increase its original mass due to accretion of gas from its surroundings.Thus for these systems, we are interested in determining whether the mass of the primary and the redshift can be accurately inferred in order to identify these early IMBH. Assuming the mass gap to be that predicted by the median of the^12 C(α,γ)^16O reaction rate, between ∼ 50 and 130, and a standard Kroupa-like IMF, in the interval between 0.1 and 150, all our systems are expected to have dynamical origin in dense star clusters(see, eg. ).[We warn that black holes with these masses could be of primordial origin or outcome of post-formation accretion, as mentioned in the Introduction.] Among these, Nuclear Star Clusters (NSCs) could be the sites for 3rd- or 4th-generation BBH with observed individual masses up to ∼ 600, mergingon timescalessmaller than 500 Myrcompatible with the redshift of observation.Formation in Globular Clusters (GCs) would be marginally compatible with our lightest systems.According to <cit.>, above z∼ 10 there are two available BBH formation channels, the isolated and the dynamical formation in young star clusters. However, the maximum masses for these channels are ∼ 50 and ∼ 100 respectively.This implies that detecting BBH with individual masses larger than ∼ 100 at z>10 could point to a top-heavy IMF, as predicted for the first stellar generation.Figures <ref> and <ref> show the accuracy with which we can measure the masses and redshifts of the events. The broad features are similar to what we have already observed for the lower mass systems, namely that the parameter recovery is significantly worse for face-on systems, due to the vanishing HoM.Even though the mass ratio of the systems is 2 or 4, both are inferred to be consistent with equal mass (or nearly equal mass) binaries when viewed face-on.Furthermore, the uncertainty in redshift and total mass is about a factor of two.For the inclined systems, the recovery of masses and redshifts improves significantly, particularly for ι≥ 60.In that case, component masses and redshifts are recovered with a ∼ 20% accuracy.In particular, the mass of the 120 BH will be constrained to be between 90 and ∼ 150 for all except for the face-on system.This is consistent with a black hole above the mass-gap and, due to uncertainties in the mass measurement, will not significantly restrict the ^12 C(α,γ)^16O reaction-rate.In Appendix <ref>, we also show results for events simulated at higher redshifts and at a lower SNR of 15.The results are comparable to those discussed above, with the masses and redshifts for inclined systems better measured, and masses constrained to be unequal.For face-on systems, the parameter recovery is significantly worse and we again see multiple peaks in the mass-redshift distributions corresponding to different multipoles matching with the signal.Remarkably, the next-generation GW observatories have the capability to detect and accurately identify mergers involving 240 BH at a redshift of 10, and confidently infer a minimum redshift of 7, and mergers involving a 480 BH at a redshift of 6, and infer a redshift of at least 4.These systems will be interesting to observe because we do not know if BH of those masses exist, and we can hope to shed light on their formation routes, either by accretion from lower-mass BH or by direct collapse of very massive stars.§ DISCUSSION The next-generation of GW detectors provide a unique way to probe the 
existence of heavy stellar black holes in the high-redshift Universe.Future GW observations of BH with masses above ∼ 50 at redshift z∼ 10-15will enable us to probe the properties of the first stars formingin the Universeand their initial mass function.If BH in the mass range explored here exist, they can contribute, as seeds [the so called “light seeds” explored in the literature <cit.>.], to the rapid growth ofthe population of quasars observed close to the recombination epoch,at z≈ 7.5, and housing accreting BH of O(10^8- 10^9) <cit.>. Whether and how the bridge between stellar and supermassive black holes was established when the first galaxies were forming is currently unknown <cit.>. The revolutionary data coming from JWST, with the recent discovery of more than 40 new faint accreting supermassive BH of O (10^5-10^7) at 4 < z ≤ 10.6 <cit.>, is an outstanding confirmation of the rich BH landscape at cosmic dawn predicted by theoretical models <cit.>. In the future, with the Laser Interferometer Space Antenna (LISA) in operation <cit.>, we will detect low-frequency GW from merging massive BH of O(10^4-10^6) out to z∼ 10-15.By combining and confronting statisticallyall observations of both merging and accreting BH,we will be able to shed light into the origin and evolution of the BH populations, from the stellar to the supermassive through the intermediate-mass ones,across the cosmic epochs <cit.>.In this paper, we focused on the observability of high-redshift stellar BBH with high masses, and, equally importantly, on the accuracy with which their masses and redshifts can be inferred.We have shown that both the observation of systems and the accurate measurement of their parameters depend critically on the inclusion of HoM in the GW waveform.At the highest masses and redshifts, HoM, which extend the signal to higher frequencies than the (2, 2) multipole, can significantly increase the sensitive range of the detectors.Across a broad range of masses and redshifts, we expect to see multiple GW multipoles in signals observed by CE and ET.Observation of more than one multipole, typically a HoM in addition to the (2, 2) multipole, enables the breaking of the degeneracy between binary inclination and distance, as well as a more accurate determination of the mass ratio.Additionally, a network of observatories is required for source's localizations.When a signal is seen in only a single detector, the sky location is poorly measured and, since the detector response varies significantly over the sky, this leads to large uncertainties in the distance.For very high-redshift sources, accurate distance/redshift measurement is vital for the measurement of the BH individualmasses, as the observed signal depends upon M(1+z).By performing full parameter estimation on a set of representative systems, we demonstrated that it will be possible to measure masses and redshifts with an accuracy of 10-20%, for signals at redshifts up to at least 15.Those systems which can be observed and accurately measured are typically seen in both CE and ET detectors, so they are well localized and also tend to be viewed away from face-on (or face-off) so that more than one GW multipole is observed.We examined systems with masses (120, 60) and (90, 90), which lie in, or around the pair-instability mass-gap.For the best-measured of the examples we investigated, at a redshift of 10, we could measure redshift and component masses with 10% uncertainly.This would, unambiguously, place these sources in the high-redshift Universe and 
serve to constrain the, currently unknown, location of the pair-instability mass-gap.We also investigated mergers of (240, 120) and (480, 120) binaries, which enable us to probe the observability of early IMBH. It will be possible to observethese IMBH at redshifts up to z=10 and constrain the redshift to be at least z=7.The results in this paper complement those in <cit.> which investigate lower-mass BH mergers in the next-generation GW network, and <cit.> whointroduced the concept ofan “inference horizon” which maps the redshift a source can confidently be placed beyond.In all cases, it is shown that next-generation GW network provides a unique capability to probe high-redshift black hole formation.The most critical feature of detector sensitivity for observing these systems is the low-frequency sensitivity of the detectors. In our study, we used a low-frequency limit of 3 Hz for ET and 5 Hz for CE.Even the relatively small change from 3 Hz to 5 Hz can have a profound impact on sensitivity to high-mass, high-redshift sources as shown in Fig. <ref> and <ref>.Achieving the instrumental design sensitivity at low frequencies has been challenging in the current LIGO and Virgo observatories.As the detailed technical designs of the next-generation observatories are finalised, the desire to probe the remnants of high-mass stars in the early Universe should be considered as a motivation to optimize sensitivity at low frequencies. § ACKNOWLEDGEMENTS We thank Riccardo Buscicchio for his careful reading and valuable comments. SF acknowledges the support of STFC grant ST/V005618/1 and a Leverhulme Trust International Fellowship. RS acknowledges support from the Amaldi Research Centre funded by the MIUR programme “Dipartimento di Eccellenza” (CUP:B81I18001170001). MC, RS, AT, RV acknowledge the INFN TEONGRAV specific initiative.MC acknowledges support by the 2017-NAZ- 0418/PER grant, and by the Italian Ministry for Universities and Research (MUR) program “Dipartimenti di Eccellenza 2023-2027”, within the framework of the activities of the “Centro Bicocca di Cosmologia Quantitativa (BiCoQ)”.AS acknowledges the financial support provided under the European Union’s H2020 ERC Consolidator Grant “Binary Massive Black Hole Astrophysics” (B Massive, Grant Agreement: 818691). MC and RS thank the Institut d'Astrophysique de Paris for kind hospitality. § DETECTOR SENSITIVITY In this Appendix, we show the sensitivity of ET and CE to binary mergers as a function of mass and redshift.Fig. <ref> shows the sensitivity to mergers with mass ratio of two.In Fig. <ref> we show the ET sensitivity for binaries with mass ratios 1, 2, 4 and 10.In Fig. <ref>, we show the same for CE. The maximum reach of the detectors is for equal mass binaries.However, at equal mass the (3, 3) multipole vanishes so there is a larger range for which only one multipole is visible.As we increase the mass ratio, the maximum sensitive redshift decreases, as the amplitude of the emitted GW also decreases.However, the relative significance of the HoM increases so that an increasing fraction of sources will be observed with at least two multipoles. 
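The trends in these figures can be anticipated from a simple frequency argument: every multipole is redshifted by a factor 1/(1+z), and a multipole contributes only if it still peaks above the detector's low-frequency cut-off. A rough numerical sketch of this argument is given below; it is normalised to the ∼7 Hz value quoted in the main text for the (2, 2) multipole of the (120, 60) binary at z=14, and all other numbers are order-of-magnitude placeholders rather than results of our analysis.

# Order-of-magnitude illustration of why higher multipoles extend the reach of
# the detectors: frequencies scale as 1/M_detector = 1/[M_source (1 + z)], and
# the (3, 3) and (4, 4) multipoles sit at roughly 1.5x and 2x the (2, 2) frequency.
def f22_merger_hz(m_total_det):
    # Normalised so that a detector-frame total mass of 2700 Msun
    # (the (120, 60) Msun binary at z = 14 discussed above) gives ~7 Hz.
    return 7.0 * 2700.0 / m_total_det

def observed_peak_frequencies(m_total_src, z):
    m_det = m_total_src * (1.0 + z)      # redshifted total mass
    f22 = f22_merger_hz(m_det)
    return {"(2,2)": f22, "(3,3)": 1.5 * f22, "(4,4)": 2.0 * f22}

# For the (120, 60) Msun binary at z = 14, only the higher multipoles exceed the
# ~5 Hz low-frequency limit assumed here for CE, while all three lie above the
# ~3 Hz limit assumed for ET.
print(observed_peak_frequencies(180.0, 14.0))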
§ LOW SNR SIGNALS In this Appendix we present parameter estimation results for systems that would have a network SNR of 15 in the ET-CE network described in Section <ref>, to complement the results for SNR 30 presented in that section of the paper.Since these events are at a lower SNR, they are also at a higher redshift, with the (120, 60) and (90, 90) binaries at redshifts between 23 and 15 (depending upon inclination).At these redshifts, the signal is shifted to such low frequencies that it lies essentially outside of the sensitive band of CE — the SNR of these events in CE is less than 2 in all cases.Consequently, all sources are poorly localized on the sky, with a typical 90% localization of thousands of square degrees.In Fig. <ref> and <ref>, we show the accuracy with which the masses and redshift are recovered for the (120, 60) and (90, 90) binaries.The qualitative results are similar to those for the SNR 30 signals presented in Section <ref>, with broader posteriors as expected due to the lower SNR.Specifically, the masses and redshifts are poorly measured for face-on systems, and measurement accuracy improves for inclined systems (particularly ι = 60, 90) where there is observable power in the HoM.In the best-case scenarios, masses and redshifts are measured with ∼25% accuracy.For all systems other than ι=0 the (120, 60) system is clearly identified as having unequal masses.However, the mass distributions are broad enough that limited information about the location of the pair-insatability mass-gap can be extracted.For the ι=0 systems, and ι=30 for the (90, 90) binary, there is a bimodality in the recovered redshift.In addition, the inferred mass distribution is broader than that shown in Fig.<ref> and extends to ∼ 1000.For these events, there is zero (or limited) power in the HoM so only a single GW multipole is observable.The secondary peak at high masses and z≈ 5 corresponds to a binary configuration where the (3, 3) multipole has the correct amplitude and frequency content to match the simulated signal.This is discussed in more detail in Section <ref>, around Fig.<ref>.In Fig. <ref> and <ref>, we show the accuracy with which the masses and redshift are recovered for the (240, 120) and (480, 120) binaries.As for the lower-mass systems, the qualitative results are similar to those for the SNR 30 signals presented in Section <ref>, with broader posteriors as expected due to the lower SNR.Nonetheless, other than the face-on (ι=0) systems, the binaries are clearly identified as unequal mass systems containing an IMBH with minimum mass 200/400 for the two system.Redshifts are generally underestimated, likely due to the poor sky localization, and lower bounds on the redshift are no better than for the higher SNR systems shown in Fig. <ref>. Again, the face-on systems show significant bimodality with a second peak at much lower redshifts and higher masses.As before, this corresponds to a system where the HoM, rather that the (2, 2) multipole, are associated with the observed waveform. | http://arxiv.org/abs/2310.18158v1 | {
"authors": [
"Stephen Fairhurst",
"Cameron Mills",
"Monica Colpi",
"Raffaella Schneider",
"Alberto Sesana",
"Alessandro Trinca",
"Rosa Valiante"
],
"categories": [
"astro-ph.HE",
"gr-qc"
],
"primary_category": "astro-ph.HE",
"published": "20231027140553",
"title": "Identifying heavy stellar black holes at cosmological distances with next generation gravitational-wave observatories"
} |
Kinematic signatures of planet-disk interactions in VSI-turbulent protoplanetary disksM. Barraza-Alfaro et al. Max Planck Institute for Astronomy, Königstuhl 17, D-69117 Heidelberg, [email protected] Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USAPlanets are thought to form inside weakly ionized regions of protoplanetary disks, in which turbulence creates ideal conditions for solid growth. However, the nature of this turbulence is still uncertain. In fast cooling parts of this zone the vertical shear instability (VSI) can operate, inducing a low level of gas turbulence and large-scale gas motions. Resolving kinematic signatures of active VSI could reveal the origin of turbulence in planet-forming disk regions. However, an exploration of kinematic signatures of the interplay between VSI and forming planets is needed for a correct interpretation of radio interferometric observations. A robust detection of VSI would open the door for a deeper understanding of the impact of gas turbulence on planet formation. The objective of this study is to explore the effect of the VSI on the disk substructures triggered by an embedded fairly massive planet. We will focus on the impact of this interplay on CO kinematic observations with the ALMA interferometer. We conducted global 3D hydrodynamical simulations of VSI-unstable disks with and without embedded massive planets, exploring Saturn- and Jupiter-mass cases. We studied the effect of planets on the VSI gas dynamics, comparing with viscous disks.Post-processing the simulations with a radiative transfer code, we examined the kinematic signatures expected in CO molecular line emission, varying disk inclination. Further, we simulate deep ALMA high-resolution observations of our synthetic images, to test the observability of VSI and planetary signatures. The embedded planet produces a damping of the VSI along a radial region, most effective at the disk midplane. For the Saturn case, the VSI modes are distorted by the planet's spirals producing mixed kinematic signatures. For the Jupiter case, the planet's influence dominates the overall disk gas kinematics. The presence of massive planets embedded in the disk can weaken the VSI large-scale gas flows, limiting its observability in CO kinematic observations with ALMA.Kinematic signatures of planet-disk interactionsin VSI-turbulent protoplanetary disks Marcelo Barraza-Alfaro 1,2 Mario Flock1 Thomas Henning1Received X; accepted Y ============================================================================================ § INTRODUCTION The detection of thousands of exoplanets combined with recent observations of signatures of embedded planets in young gas-rich protoplanetary disks demonstrates that planet formation is an ubiquitous process in nature, and possibly fast and efficient. Nevertheless, the fundamental processes that pave the way from micron-sized dust particles to a planetary embryo are still far from understood. In particular, precise knowledge of the impact of turbulence on the planet formation process and disk evolution is still one of the missing links to connect the early stages of circumstellar disks to mature planetary systems <cit.>. Gas turbulence can drive angular momentum transport, and substantially affect the disk dust size and dynamical evolution, the formation of substructures, the disk thermo-chemical evolution, planetary accretion of gas and pebbles, and planet migration <cit.>. 
Therefore, understanding the origin of turbulence is crucial for further progress in planet formation theories and the interpretation of disk observations. As an example, turbulence is an essential property of the Solar Nebula for tracing back the formation history of our Solar System <cit.>.

Various disk instabilities have been studied to explain turbulence in circumstellar disks; they invoke different local physical conditions and may therefore operate within the disk in concert. The dominant source of turbulence at each location would depend, for example, on the disk environment, the disk and stellar host properties, and the conditions of the specific disk region <cit.>. Among the candidates are the magneto-rotational instability <cit.>, the vertical shear instability <cit.>, the convective overstability <cit.>, and the zombie vortex instability <cit.>; for a summary of these instabilities see, for example, <cit.> and <cit.>. Unraveling the physical processes behind disk turbulence therefore requires careful analysis, directly comparing resolved observations of protoplanetary disks against state-of-the-art numerical simulations.

ALMA dust and gas molecular line observations have been fundamental for constraining turbulence levels in protoplanetary disks, providing evidence for weak turbulence in the probed disk regions <cit.>. In these regions, tens of au from the central star, purely hydrodynamical instabilities are consistent with the expected low disk ionization <cit.> and the observed turbulence upper limits. Among the candidates, the VSI is the most likely to operate in these outer regions, due to the predicted fast cooling rates <cit.>. Further, spatially and spectrally resolved CO kinematic observations have the potential to confirm such a scenario by detecting the coherent corrugated flows driven by VSI <cit.>.

Recent ALMA molecular line observations have demonstrated the feasibility of fully resolving the gas kinematic structure of planet-forming disks, revealing the signatures of perturbations of the disk (sub-)Keplerian gas flow <cit.>. These deviations from Keplerian rotation could be a dynamical manifestation of disk instabilities <cit.>, but embedded planets are often favored <cit.>. In addition to the substructures present in dust observations <cit.>, this points towards an unseen population of fairly massive planets embedded in the observed protoplanetary disks <cit.>, with the exception of the directly observed PDS 70 b and c <cit.>.

If massive planets are embedded in the disk (masses comparable to and above the thermal mass, see and ), they can strongly modify the disk density and dynamical structure <cit.>. Interactions between massive planets and the gaseous protoplanetary disk produce a depleted gap <cit.>, spiral wakes around the planet's location <cit.>, large-scale spiral arms via Lindblad resonances and buoyancy resonances <cit.>, anti-cyclonic Rossby-wave vortices <cit.>, meridional flows <cit.>, and a circumplanetary disk <cit.>. This myriad of structures has a substantial impact on the disk gas velocities <cit.>, and therefore a direct imprint on the observed disk kinematic structure <cit.>. Moreover, planets could potentially affect the development of the VSI in the disk <cit.>, and thus the observability of VSI signatures.

In this work, we study the kinematic structure resulting from the interplay between the vertical shear instability and the structures triggered by a fairly massive planet. The paper is structured as follows.
We describe the numerical methods for the hydrodynamical simulations in Section <ref>, and present their results in Section <ref>. The radiative transfer post-processing, simulated observations, and techniques to extract observables are detailed in Section <ref>, whose results are presented in Section <ref>. In Section <ref>, we discuss a potential approach to confirm VSI signatures observationally, and the limitations of our work. Finally, we summarize the main findings of our study in Section <ref>.

§ HYDRODYNAMICAL SIMULATIONS: METHODS We performed global 3D hydrodynamical simulations using the Godunov grid-based code PLUTO[<http://plutocode.ph.unito.it/>] <cit.>. We used the publicly available version 4.4 of PLUTO, solving the Navier-Stokes equations of classical fluid dynamics without magnetic fields (HD module):

∂ρ/∂t + ∇⃗·(ρv⃗) = 0,

∂(ρv⃗)/∂t + ∇⃗·(ρv⃗v⃗^T) = -∇⃗P - ρ∇⃗Φ + ∇⃗·Π,

where ρ is the gas mass density, v⃗ is the gas velocity vector, P is the pressure, Φ is the gravitational potential, and Π represents the viscous stress tensor. The viscous term is included in the momentum equation only for our α-viscous simulations, while our VSI-unstable disk simulations are inviscid. In our set of simulations, the fluid is affected by the gravitational potentials of a star (Φ_⋆ = -G M_⋆/r) and of embedded planets (see Eq. <ref>). The disk self-gravity is not considered in our simulations.

The hydrodynamical equations were solved using a second-order accurate scheme with linear spatial reconstruction, with the least diffusive limiter implemented in PLUTO (the monotonized central difference limiter). For the time stepping we chose the second-order Runge-Kutta scheme, and for the Riemann solver we used the Harten-Lax-Van Leer solver. The Courant number is set to 0.3.

We ran a total of five locally isothermal simulations: three VSI-unstable disk simulations to compare the cases without a planet, with a Saturn-mass planet, and with a Jupiter-mass planet, and two α-viscous planet-disk interaction simulations to compare against the VSI-unstable cases. For all simulations, the initial conditions of the disk follow the equilibrium solutions of a disk with vertical shear from <cit.> (see also Section 2.1 in ). The VSI-unstable disk simulations are run inviscid; therefore, the code solves the Euler equations. For the simulations including viscosity, however, we include the viscous stresses in the hydrodynamical equations, implemented as a parabolic diffusion term in the momentum equation. The viscosity depends on a shear viscosity coefficient (ν), which we set to follow the α viscosity prescription of <cit.>:

ν = α c_s H,

with α constant throughout the disks, set to α = 5× 10^-4. The quantities c_s and H denote the local sound speed and the disk pressure scale height. For the numerical integration of the diffusion term we chose the Super-Time-Stepping (STS) technique as implemented in PLUTO, which can accelerate the calculations compared to an explicit treatment.

For the simulation grid, the computational domain extends from 0.4 to 2.5 code units of length in the radial direction (r), and in the azimuthal direction (ϕ) the grid covers the full 2π rad. In the meridional direction (colatitude, θ), the grid is set to cover ∼10 disk pressure scale heights at R=1.0, that is, 5H for each disk hemisphere. The grid follows a spherical geometry, logarithmically spaced in the radial direction and evenly spaced in colatitude and azimuth.
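As a concrete illustration of this grid setup, the short Python sketch below constructs a spherical grid with the stated extents (0.4 to 2.5 code units in radius, the full 2π in azimuth, and ±5 pressure scale heights in colatitude for H/R = 0.1) and reports the number of meridional cells per scale height. The cell counts are chosen to match the values quoted in the following paragraph; the function name and the small-angle treatment of the colatitude extent are our own simplifications, not the PLUTO grid definition itself.

```python
import numpy as np

def spherical_disk_grid(n_r=512, n_theta=192, n_phi=1024,
                        r_in=0.4, r_out=2.5, h_over_r=0.1, n_scale_heights=5):
    """Illustrative grid: log-spaced in radius, uniform in colatitude and
    azimuth, covering +/- n_scale_heights * (H/R) around the midplane."""
    r_edges = np.geomspace(r_in, r_out, n_r + 1)            # logarithmic spacing
    half_opening = n_scale_heights * h_over_r                # ~0.5 rad (small-angle approx.)
    theta_edges = np.linspace(np.pi / 2 - half_opening,
                              np.pi / 2 + half_opening, n_theta + 1)
    phi_edges = np.linspace(0.0, 2.0 * np.pi, n_phi + 1)
    return r_edges, theta_edges, phi_edges

r_e, th_e, ph_e = spherical_disk_grid()
cells_per_h = (len(th_e) - 1) / 10.0                         # 10 H across the theta domain
print(f"meridional resolution: {cells_per_h:.1f} cells per scale height")  # ~19
```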
For the simulations presented, the resolution of the grid is (r,θ,ϕ)=(512,192,1024), which gives a resolution of ≈ 19 cells per scale height in the meridional direction at R=1.0.

We re-scaled the code unit of length of the numerical simulations to 100 au, which, together with the reference aspect ratio of H/R=0.1 at the code unit of length, sets a model suited for the disk outer regions. Additionally, the stellar mass is set equal to 1 M_⊙. We use these reference values to re-scale our simulation results in all figures presented in Section <ref>, and also for the disk model used in the radiative transfer post-processing (Section <ref>).

In the set of simulations presented in this work, we adopted boundary conditions that consist of enforced zero inflow in θ and r, and an extrapolated density and softened v_ϕ in the meridional direction (see more details in and ). In order to minimize wave reflections close to the inner and outer radial boundaries, we include buffer zones in which the gas density and radial velocity are damped to the initial profiles on a timescale of 10% of the local orbital period. We apply a parabolic damping as introduced by <cit.>. The radial extents covered by the inner and outer buffer zones are equal to 25% of the grid inner radius and 20% of the outer edge radius, respectively.

§.§ Planets We ran simulations of disks with embedded massive planets for two cases: a Saturn-mass case (m_p/M_⊙=0.3× 10^-3) and a Jupiter-mass case (m_p/M_⊙=1.0× 10^-3). For the disk aspect ratio assumed in our simulations (H/R=0.1 at the planet's location), these planet masses correspond to 0.3 and 1.0 thermal masses, respectively, where the thermal mass is the value at which the planet's Hill sphere matches the sonic point for linear planetary spiral wakes <cit.>, defined as:

M_th = c_s^3/(Ω_P G) = M_⋆ (H_P/R_P)^3,

where Ω_P is the planet's orbital frequency, M_⋆ is the mass of the central star, H_P is the pressure scale height at the planet's position, and R_P is the distance of the planet from the star. The 0.3 M_th planet mass case overlaps with the maximum planet mass of 100 Earth masses studied in <cit.>, while for planet masses equal to and above 1 M_th non-linear effects are expected to significantly affect the disk structure <cit.>. For the inclusion of planets, we use a standard approach of slowly inserting the planet as the gravitational potential of a point mass, with a smoothing around the location of the planet:

Φ_P = -GM_P/d, for d ≥ d_rsm,
Φ_P = -GM_P/d [(d/d_rsm)^4 - 2(d/d_rsm)^3 + 2(d/d_rsm)], for d < d_rsm,

where d is the distance between a fluid element and the planet's position. The potential smoothing length d_rsm is used to prevent numerical artifacts at the planet's location. The value of d_rsm is set to three cell diagonals evaluated at the planet location. This corresponds to around 56% of Saturn's and 37% of Jupiter's Hill spheres (r_Hill = r_p (m_p/3M_⋆)^1/3). In order to avoid numerical artifacts during the inclusion of the planet, its mass is smoothly increased from zero to its final value over 40 and 100 planetary orbits for the Saturn-mass and Jupiter-mass cases, respectively. To compare our simulations of VSI-unstable disks directly with a case without VSI, we ran the same set of simulations including viscosity following the α prescription of <cit.> (see Eq. <ref>), for a viscosity value comparable to the effective viscosity driven by the VSI <cit.>.
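For reference, the planet-related quantities defined in this subsection — the thermal mass, the Hill radii of the two planets at 100 au, and the smoothed point-mass potential — can be evaluated with the short sketch below. The function names are ours, the numbers simply restate the values quoted above, and the snippet is meant only as an illustration of the formulas, not as the simulation source code.

```python
import numpy as np

def thermal_mass(m_star=1.0, h_over_r=0.1):
    """Thermal mass in solar masses: M_th = M_star * (H_P/R_P)^3."""
    return m_star * h_over_r**3

def hill_radius(r_p, m_p, m_star=1.0):
    """Hill radius r_Hill = r_p * (m_p / (3 M_star))^(1/3)."""
    return r_p * (m_p / (3.0 * m_star))**(1.0 / 3.0)

def planet_potential(d, gm_p, d_rsm):
    """Point-mass potential with the quartic smoothing applied inside d_rsm."""
    d = np.asarray(d, dtype=float)
    x = d / d_rsm
    smoothed = -gm_p / d * (x**4 - 2.0 * x**3 + 2.0 * x)
    return np.where(d >= d_rsm, -gm_p / d, smoothed)

m_th = thermal_mass()                       # = 1e-3 M_sun for H/R = 0.1
for name, m_p in [("Saturn", 0.3e-3), ("Jupiter", 1.0e-3)]:
    print(f"{name}: {m_p / m_th:.1f} M_th, "
          f"r_Hill = {hill_radius(100.0, m_p):.1f} au")   # planets at 100 au
```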
Including viscosity with α=5× 10^-4 is enough to damp the VSI in the disk and recover similar structures from previous studies of planet-disk interactions in isothermal disks (e.g.,and ). We inspected and post-processed the output after 300 orbits (at R=1.0) of evolution for the VSI simulation without a planet, while for the Jupiter and Saturn mass planets simulations we chose the outputs after 145 and 285 orbits after the inclusion of the planet, respectively. At these selected times, the planets have already carved a gap and the structures are in a quasi-steady state.A summary of the parameters used in the set of simulations is presented in Table <ref>.§ HYDRODYNAMICAL SIMULATIONS: RESULTS In this section, we present the results of our set of global 3D hydrodynamical simulations. First, in Section <ref> we inspect the simulation outputs of our three VSI-unstable disk simulations, showing the influence of the planets in the VSI-induced velocity and density structures. The comparison of our set of simulations is performed for the disk midplane layer and above 3 pressure scale heights (Z∼ 3H). Particularly important is to inspect different disk heights, since observations of optically thick CO isotopologues, such as ^12CO, trace upper layers of the disk. Optically thinner transitions of less abundant isotopologues, such as C^18O can trace deeper disk layers (see Figure <ref>). In Section <ref>, we compare the structures obtained in our VSI-unstable and viscous planet-disk interaction simulations, both in the midplane and surface layers. §.§ Simulations of VSI-unstable disks A comparison of the face-on view of the simulations at the disk midplane is shown in Figure <ref>, while Figure <ref> shows the behavior at three pressure scale heights above the midplane layer.In these figures, we present simulations without an embedded planet (first row), with an embedded Saturn-mass planet (second row), and with an embedded Jupiter-mass planet (third row). The columns indicate gas density relative to the initial density field (ρ/ρ_0; first column), radial velocity (v_r; second column), meridional velocity (v_θ; third column), and azimuthal velocity deviations from Keplerian rotation (v_ϕ-v_ Kep; fourth column). As mentioned above, the simulations have been re-scaled to physical units assuming a central Solar-mass star, and disk radii ranging from 40 to 250 au (1 code unit is 100 au). For reference, the midplane sound speed at 100 au for our assumed setup is ≈ 296 m s^-1, ≈ 10% of the local Keplerian speed. For visualization purposes, the colorbar limits are adapted to better cover the perturbation's magnitudes in each panel. In the gas velocity plots, negative values in the radial direction indicate gas moving towards the central star, in the meridional direction positive values indicate gas flowing downwards (e.g., positive values at Z∼ 3H represent gas moving towards the midplane). In the azimuthal direction positive means the gas is rotating at velocities larger than the Keplerian rotational velocity (i.e., super-Keplerian). In addition, the location of the planets is indicated by a circle with a radius equal to the planet's Hill radius, and its orbits are indicated by dotted black lines. The dashed black lines in the first panel mark the buffer zones where the parabolic damping is applied.In the top row of Figures <ref> and <ref>, we recover the characteristic velocity structure of a disk unstable to VSI, dominated by a corrugated circulation pattern <cit.>. 
In the disk midplane (Figure <ref>), the axisymmetric meridional flows dominate the disk velocity structure, while closer to the disk surface (Z ≈ 3H; Figure <ref>) strong velocity perturbations are seen in all three velocity components. In the simulations including a massive planet (second and third rows in Figures <ref> and <ref>), a gap depleted of gas is carved by the planet, deeper and eccentric for the Jupiter-mass planet case. The density contrast produced by the planet-carved gaps is significantly larger than any of the density perturbations produced by VSI alone, which are also only present in the surface layers. At the edges and inside the planet-carved gaps, rings of super- and sub-Keplerian gas are seen, while similar non-Keplerian flows are induced by the VSI-induced density perturbations at the surface layers; although, with a corrugated morphology. Additionally, asymmetric structures are triggered by the planet: spiral arms via Lindblad resonances are induced by the planets in the density, radial velocity and azimuthal velocity fields, clearly seen at the disk midplane (see also Figure <ref>). Around the planet's location, strong planetary spiral wakes are also produced by the massive planets. Finally, a large-scale vortex is produced at the outer edge of the gap carved by the Jupiter-mass planet, seen as a horseshoe-shaped gas overdensity; however, less prominent in the disk velocities. While VSI also induces asymmetries, as multiple vortices are also present in the disk without planets, these have smaller size scales compared to the Jupiter-induced one for the examined outputs. At the disk midplane, we observe that the planet-induced perturbations dominate the overall disk structure for ρ, v_r and v_ϕ. Interestingly, in the meridional velocities, corrugated meridional flows induced by the VSI unstable modes are still significant, however, only in the outermost region of the disk. Such velocity structure is a consequence of the damping of the VSI produced by the presence of the massive planets being more efficient at the midplane layer. The efficient damping towards the disk midplane can also be linked to the vertical shear rate (R ∂Ω/∂ z) increasing with disk height, therefore, maintaining stronger VSI motions towards the disk upper layers. The planet-induced damping of the VSI is apparent in the region inside the planet's radial location, along the planetary gap, and also at the gap's outer edge.In the disk's upper layers, damping of the VSI meridional flows seems still to be present for the Jupiter-mass case, whereas for the Saturn-mass case a mixture of VSI-induced structures and the planet-induced spiral arms is observed.The structure of v_θ and v_ϕ at Z≈ 3H for the simulations without a planet and a Saturn-mass planet appears to be similar overall. Global damping of the VSI-induced flows is produced by the Saturn planet, inducing perturbations that reach lower velocities for all components. Finally, in the disk surface layers influenced by the Jupiter planet, the localized velocity flows around the planet's location are the strongest features in the radial and meridional directions, while in the azimuthal direction a ring of Super-Keplerian gas at the outer gap edge is the most prominent velocity perturbation. From kinematic observations, identifying the symmetric and asymmetric velocity and density structures in a resolved view of the disk is required to separate different scenarios (see Section <ref>). 
To highlight the stronger damping of the VSI at the disk midplane produced by the presence of the embedded planets, we show a Z-R view of our set of VSI-unstable disk simulations in Figure <ref>. We present the azimuthally-averaged fields, following the same order of presentation as the panels of Figures <ref> and <ref>. The vertical sliced view of the disk gas velocities shows that the damping of the VSI is more effective in the region below three pressure scale heights from the disk midplane, marked by black dotted lines. In the outermost regions of the disk flows induced by the VSI are still active, characterized by columns of gas moving upwards or downwards. A sketch of the meridional velocity structure in a VSI-unstable disk with an embedded massive planet is presented in Figure <ref>. The different symmetry of the flow direction with respect to the midplane can be exploited to separate between VSI- and planet-induced perturbations (discussed in Section <ref>). In the radial and azimuthal directions, the VSI flows are symmetric with respect to the disk midplane, while planet-induced flows are anti-symmetric. On the contrary, in the meridional direction the VSI flows are anti-symmetric with respect to the midplane, while planet-induced flows are symmetric; that is, at a particular radius, gas moves towards the midplane or away from the midplane at both disk hemispheres. We isolate such an effect in Figure <ref>, showing the radial profiles of the velocity at Z∼ 3H from the midplane for both disk hemispheres. A clear difference in symmetry relative to the midplane is seen from the (anti-)correlations of the flow directions. Lastly, we stress that, while a net meridional flow towards the gap carved by the massive planet is seen in the v_θ azimuthal averages (Figures <ref> and <ref>), they are relatively weak even at Z∼ 3H. Such low magnitudes are consistent with larger planet masses needed to explain observed meridional flows towards gaps <cit.>. Moreover, these planet-induced meridional flows are not of an axisymmetric morphology in the r-ϕ plane for VSI-turbulent disks. Such characteristic morphology might be of importance when interpreting resolved 2D maps of the line-of-sight velocity (e.g., Figures <ref> and <ref>, see following Section <ref>). To visualize the planet-induced damping of the VSI, we explored the time evolution of meridional velocity perturbations in our VSI-unstable disk simulations. In Figure <ref>, we show the azimuthal average of the midplane meridional velocity (⟨ v_θ⟩_ϕ at Z=0) at each orbit, for 300 planetary orbits starting at the time when the planets are included in our planet-disk interaction simulations. The axisymmetry of the VSI unstable modes in the azimuthal direction allows us to follow the mode evolution in the azimuthal averages. In the first row of Figure <ref>, we show the time evolution of a simulation without an embedded planet, in which the VSI is operating in its saturated state. Radial migration of the VSI modes towards the central star is observed, on top of narrow radial regions of low velocities that migrate outwards. These results are consistent with previous findings on the time evolution of VSI unstable modes <cit.>. Note that the velocities close to the inner grid edge are damped by the effect of the simulation buffer zones. In the second and third rows of Figure <ref>, we show the time evolution of the VSI unstable simulations with an embedded Saturn-mass planet and the simulation with an embedded Jupiter-mass planet, respectively. 
The planets are in orbit at 100 au from the central star, indicated by the horizontal black dotted line. From the weakened meridional velocity perturbations, we observe that the planets produce a damping of the meridional flows induced by the VSI, particularly strong in the regions inside the planets' orbits, along the gap region, and at the gap's outer edge. The damping produced by the Saturn-mass planet is less efficient than for the Jupiter case. The VSI motions are still vigorous in most of the outer regions of the disk after 300 orbits (r≳ 150 au), and show an apparent convergence to a steady state. Due to its stronger influence on the disk structure, the embedded Jupiter produces a more effective damping of the VSI, such that the VSI motions are damped completely out to ∼ 200 au from the star. Contrary to the Saturn-mass simulation, the Jupiter case has yet to fully converge by the end of our simulation (300 planetary orbits), and the damped region could still grow in radius. A longer simulation run with a larger radial domain is needed to further study the steady state of the gas dynamics of VSI-unstable disks with a Jupiter planet.

In order to confirm that the smaller values of the azimuthally-averaged meridional velocity are not exaggerated due to a break of the VSI axisymmetry by the planet-disk interactions, we validated this result by exploring the time evolution of the azimuthally-averaged absolute value of v_θ, which shows the same behavior as presented above.

Our results of planet-induced VSI damping are consistent with previous findings by <cit.> and independent simulations by <cit.> and <cit.>; as in those works, we find a stronger damping of VSI motions for a more massive planet. These results can be extrapolated to planets with larger masses, for which planets above one thermal mass (equal to one Jupiter mass for our disk model, see Eq. <ref>) would strongly damp the VSI and dominate the overall disk gas dynamics. Moreover, our results are consistent with the findings of <cit.>, where VSI is weakened inside disk pressure bumps.

Regarding the origin of the damping, we did not find a direct correlation of the damped regions with other quantities. However, the timescale of the damping matches the timescale of gap opening by the planets. Previously, <cit.> attributed the damping to the formation of vortices. On the contrary, <cit.> found that the VSI triggers, and coexists with, large-scale anti-cyclonic vortices <cit.>. While we did not explore this further, the influence of the planet on the vorticity field might play an important role in the VSI damping, by creating ring structures in the vorticity field. The origin of the damping is hard to isolate, since the planets have significant effects on the gas density and pressure structure, from the gap opening and the launching of Lindblad spirals. Therefore, we can only conclude that the VSI is affected by a combination of the effects mentioned above.

Finally, the planet-induced damping of the VSI can substantially impact the settling of dust grains towards the midplane, where the damping is strongest. Therefore, a globally lower dust scale height is expected for VSI-turbulent disks with embedded planets, relative to a disk with VSI alone, which would vertically mix solid particles <cit.>. However, for particular dust rings at the outer edges of planetary gaps the vertical mixing would depend on the planet's mass, since planets massive enough to create meridional flows can also lift up dust pebbles <cit.>.
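The damping diagnostic used above — the per-orbit azimuthal average of the midplane meridional velocity, together with the absolute-value variant employed for validation — can be assembled as in the sketch below. The array layout (snapshots of v_θ on an r-ϕ midplane slice) and the function names are assumptions made for illustration; this is not the analysis script used to produce the figures.

```python
import numpy as np

def azimuthal_average(v_theta_midplane, use_abs=False):
    """Azimuthal average of the midplane meridional velocity.

    v_theta_midplane : array of shape (n_r, n_phi), one snapshot at Z = 0.
    use_abs          : if True, average |v_theta| instead (validation variant).
    """
    field = np.abs(v_theta_midplane) if use_abs else v_theta_midplane
    return field.mean(axis=1)                     # -> radial profile

def space_time_diagram(snapshots, use_abs=False):
    """Stack per-orbit azimuthal averages into an (n_orbits, n_r) array,
    i.e., the radius-time map used for the damping analysis."""
    return np.vstack([azimuthal_average(s, use_abs) for s in snapshots])

# Toy usage with random placeholders standing in for simulation outputs
snaps = [np.random.normal(0.0, 30.0, (512, 1024)) for _ in range(5)]
print(space_time_diagram(snaps).shape)            # (5, 512): orbits x radius
```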
In addition, the VSI damping can strongly modify the turbulent stresses produced by the VSI, which define its ability to transport angular momentum <cit.>. For kinematic observations of rotational lines, it is expected that the planet-induced damping reduces the chances of detecting VSI signatures near a planetary-gap region, especially relevant for molecules tracing layers near the disk midplane (e.g., C^18O(3-2), see Section <ref>).

§.§ VSI-unstable disks vs α-viscous disks Aiming to highlight the structures resulting from the interplay of VSI and massive planets, we compare turbulent VSI-unstable disk simulations against viscous α-disk simulations. As above, we ran the cases of embedded Saturn-mass and Jupiter-mass planets, examining the perturbed density and velocity fields for both the midplane layers (Figure <ref>) and the surface layers (Z≈ 3H; Figure <ref>). Comparing these sets of simulations is a simplified approach to contrast planet-disk interactions with and without VSI operating in the disk, in which the viscosity included in the α models prevents the growth of the VSI <cit.>. We present the face-on view of the gas density and velocities in the same order of columns as Figures <ref>, <ref> and <ref>. While the first and third rows display the VSI-unstable disks, overlapping with the results shown in Figures <ref> and <ref>, the second and fourth rows display the α-disk simulation outputs for α = 5 × 10^-4 (see Section <ref>).

From the direct comparison, it is clear that the VSI induces additional fine structure in all velocity fields, while the structures in the α-disk simulations are smoothed by the viscous diffusion. The simulations including α viscosity show slightly lower velocity magnitudes for the flows localized around the planet and for the super-Keplerian ring at the outer edge of the planet-induced gap. From the presence of a less depleted gap in the α-disk models, better seen in the Saturn-mass case, it is clear that these differences are the result of the VSI effective turbulent α being slightly smaller than the value set for the α-disk simulations. Due to the difficulty of entirely suppressing VSI motions in an α-disk with a lower α value in locally isothermal simulations, we concentrate on the morphological differences of the coherent large-scale motions, with potentially distinct observational signatures in kinematic CO line observations (see Section <ref>).

Differences between the VSI and α-disk simulations are seen in the meridional velocity structure, in which additional quasi-axisymmetric rings induced by the VSI are present, evident both at the midplane and at the surface of the disk. Moreover, inside the gap carved by the Jupiter planet the VSI-induced turbulence disrupts the meridional flow structure at the surface layers, contrary to the smoother ringed flows along the gap in the viscous case. In the radial direction, the VSI adds additional spiral-like perturbations at Z≈ 3H, likely from the interaction between VSI-unstable modes and the Lindblad spirals driven by the planets. Here the VSI also disrupts the Lindblad spiral triggered by the Saturn-mass planet, creating arc-like features. Finally, in the azimuthal velocities, the VSI induces additional sub- and super-Keplerian rings at the outer disk surface layers, evident for the Saturn-mass case. In addition, we observe that in the VSI-unstable case the Jupiter planet triggers a strong anti-cyclonic vortex at the outer edge of the gap, also visible in the perturbed gas density.
Such difference might be caused by the fact that the effective VSI turbulent α is slightly smaller than the assumed value of 5× 10^-4 for the viscous disk. Additional simulations assuming lower constant α could solve this discrepancy. Such a simulation would require an alternative method to damp the VSI.§ RADIATIVE TRANSFER AND SIMULATED OBSERVATIONS: METHODS§.§ Radiative Transfer Setup To produce synthetic images of molecular line emission of our set of hydrodynamical simulations, the outputs are post-processed with the Monte-Carlo radiative transfer code radmc-3d[<http://www.ita.uni-heidelberg.de/ dullemond/software/radmc-3d>] <cit.> version 2.0. We constructed the radmc-3d input files from the simulation data following the procedure described in <cit.>. The scripts to construct the radmc-3d input files were partially based on an early version of fargo2radmc3d[<https://github.com/charango/fargo2radmc3d>] <cit.>, and radmc3dPy[<https://www.ita.uni-heidelberg.de/ dullemond/software/radmc-3d/manual_rmcpy/>]. The observables explored in this paper are the spatially resolved velocity centroid maps (also labeled as line-of-sight velocity maps), computed from synthetic CO line emission data cubes with good velocity resolution (see Section <ref>).We compute the synthetic data cubes for three different CO isotopologues: ^12CO, ^13CO and C^18O, in order to study the effect of probing different disk layers (see Figure <ref>). Particularly, we compute predictions for the J=3-2 rotational transition observable within ALMA Band 7. Our selection is motivated by the better spectral resolution available in Band 7 than for the J=2-1 transition (within Band 6); therefore, this is better suited for characterizing the velocity structure in kinematic observations. Nonetheless, predictions for the J=2-1 transition would result in an equivalent outcome <cit.>.As mentioned above, we used a similar model setup as presented in <cit.> to use the outputs of the simulations as inputs into the radiative transfer code. We re-scaled the simulation output radial grid to R_0=100 au (i.e., one code unit is re-scaled to 100 au), and assumed a 1 M_⊙ central star. We volume-averaged the simulation data onto a coarser grid, halving the grid resolution in each direction to speed up the radiative transfer calculations. We also extended our disk, including an inner disk that follows the equilibrium solution used as the initial condition in the simulation, which goes from 10 au to the simulation grid's inner edge of 40 au. Therefore, our full disk radiative transfer model extends from 10 to 250 au. Additionally, we removed the cells adjacent to the grid edges in colatitude, to prevent tracing the grid cells affected by boundary conditions. However, for the assumed gas density of the model, in which the total gas mass of the disk in molecular hydrogen is 0.05 M_⊙, the layers traced by the explored CO isotopologues are unlikely to be affected by the dynamics close to the boundaries in θ, as shown in Figure <ref>.The disk temperature is computed via dust thermal Monte Carlo radiative transfer. It is assumed that gas and dust have the same temperature. Since our hydrodynamical simulations only treat the gas dynamics, the dust is included manually adopting a gas-to-dust mass ratio of 100 through the disk. The dust is composed of a mixture of astrosilicates, amorphous carbon, and vacuum. 
The optical constants of the mixture were calculated using optool[<https://github.com/cdominik/optool>] <cit.>, applying the Bruggeman mixing formula and Mie theory to compute the dust opacities <cit.> (see, e.g., and ). The computed opacities have a resulting dust intrinsic density of 2.0 g cm^-3 <cit.>. We adopted a highly simplified dust structure in order to speed up the calculations, assuming only one representative dust size bin for grain sizes between 0.01 μm and 10 μm, following the same distribution as the gas. Our results are not significantly affected by these assumptions, since small grains dominate the resulting temperature structure, and we do not include dust in the image ray-tracing.

For the calculations, we assume that the central star radiates as a perfect black body with an effective temperature of T_⋆=7000 K and a radius of R_⋆=1 R_⊙. We use 10^9 photon packages to compute the dust temperature via thermal Monte Carlo radiative transfer including absorption and scattering opacities (assuming Henyey-Greenstein anisotropic scattering), while 10^8 photon packages are used for the image ray-tracing. For all the presented images, we assume a distance to the source of 100 pc.

For the molecular abundances, a constant fraction of ^12CO relative to H_2 of 1×10^-4 is assumed for the entire disk, while for ^13CO and C^18O the ^12CO abundance is scaled by factors of ∼ 77^-1 and ∼ 560^-1, respectively (see Section 3.1 in ). The line emission is computed assuming LTE, using molecular data from the LAMDA[<https://home.strw.leidenuniv.nl/ moldata/>] database <cit.>. Variations of CO abundance from photo-dissociation are not included in our models, while a simplified CO freeze-out is included by reducing the CO abundance by a factor of 10^-5 in cold regions (T≤ 21 K). The synthetic data cubes are computed with a fine velocity resolution of 10 m s^-1. Then, the channels are averaged to obtain data cubes with a coarser resolution, matching a velocity resolution of 100 m s^-1, observable with ALMA. This procedure mimics telescope limitations without the need to include artificial micro-turbulence in the radiative transfer model. Images of an individual RAW (without spatial convolution) ^12CO channel map for the different models are presented in Figure <ref>, while a view of all channels is included as supplementary material. The synthetic data cubes are used to simulate ALMA observations.

A summary of the parameters used in the radiative transfer predictions is compiled in Table <ref>.

§.§ Simulated Observations To predict how the synthetic images of our models would appear in an interferometric ALMA observation, we simulated mock observations with the Common Astronomy Software Applications package (CASA[<https://casa.nrao.edu/index.shtml>] version 6.4; ). The simulated observations are computed in three steps: simulation of the observed visibilities, inclusion of noise, and cleaning of the dirty image. First, we use the corresponding CASA task to simulate the observed visibilities using our RAW synthetic data cubes as input images. The uv-coverage is computed for a combination of two antenna configurations, one extended (C-7, with longest baselines of 3.6 km) and one compact (C-4, with longest baselines of 784 m). The adopted antenna configuration is similar to that of the large program MAPS <cit.>, but for ALMA Band 7, resulting in a spatial resolution of 84×62 mas (8.4× 6.2 au). The simulated visibilities assume an integration time on the compact configuration equal to 25% of that on the extended one (10 h and 2.5 h, respectively).
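As noted above, the native 10 m s^-1 cubes are averaged down to 100 m s^-1 channels before the observations are simulated. The sketch below shows a minimal version of such channel averaging; the cube layout and names are illustrative placeholders, not the actual post-processing scripts used for the synthetic images.

```python
import numpy as np

def average_channels(cube, n_average=10):
    """Average groups of adjacent velocity channels.

    cube      : array of shape (n_chan, n_y, n_x), e.g., a 10 m/s resolution cube.
    n_average : number of native channels per output channel
                (10 here: 10 m/s -> 100 m/s).
    """
    n_chan = cube.shape[0] - cube.shape[0] % n_average     # drop leftover channels
    trimmed = cube[:n_chan]
    return trimmed.reshape(n_chan // n_average, n_average,
                           *cube.shape[1:]).mean(axis=1)

# Toy usage: 200 channels at 10 m/s become 20 channels at 100 m/s
fine_cube = np.random.random((200, 64, 64))
coarse_cube = average_channels(fine_cube, n_average=10)
print(coarse_cube.shape)   # (20, 64, 64)
```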
The combination of two different antenna configurations covers both short and long baselines, in order to recover information from large and small spatial scales, respectively. Relatively long on-source integration times are used in the set of simulated observations; they are necessary to obtain good uv-coverage, which is crucial for a final image with the fidelity required to extract the kinematic information. Furthermore, such long integrations are also needed due to the difficulty of reaching fairly good signal-to-noise ratios in high-resolution (spatial and spectral) CO observations. Second, we use the task sm.corrupt() (simulator.corrupt) to corrupt the simulated data, adding errors to the visibilities. We include errors with an RMS of 1 mJy/beam per channel, corresponding to our assumed long integration times, calculated with the ALMA sensitivity calculator[<https://almascience.eso.org/proposing/sensitivity-calculator>]. These noise calculations assume particular atmospheric conditions, which significantly affect the time required to reach a particular sensitivity. Third, we applied CASA to reconstruct the image from the corrupted model visibilities, following the CLEAN algorithm. In this process, we used the multi-scale mode and a Briggs weighting scheme. Moreover, the cleaning was performed using non-interactive Automasking (auto-multithresh; ), which automatically generates the masks used during the process. Such automatic masking is possible due to the known morphology of the emission from the radiative transfer models; however, in real observations masking the image manually is still recommended. As a final product, a cleaned spectral cube with the expected artifacts of a real ALMA observation is obtained. Further details on our method to simulate ALMA observations are presented in Section <ref>.

§.§ Kinematic Analysis Tools The kinematic signatures of the simulated observations are extracted in two steps. First, the line-of-sight velocity map is computed from the data cube. Second, the best-fit Keplerian model to the line-of-sight velocity map is determined. This is then subtracted from the original velocity map to reveal coherent non-Keplerian gas flows. For the first step, we compute velocity centroid maps (v_0) from the data cubes using a Gaussian function to fit the CO line emission in each pixel of the collapsed image. For this we use the publicly available Python package Bettermoments[<https://github.com/richteague/bettermoments>] <cit.>. This package robustly computes the centroid maps of spectral line data using a variety of methods, and their respective statistical uncertainties. A Gaussian function is chosen as it gives the best results for our particular set of synthetic models. We extract the disk velocity perturbations from the map of velocities projected into the line of sight. For this purpose we use the Extracting Disk DYnamics Python suite Eddy[<https://github.com/richteague/eddy>] <cit.> to obtain the best-fitting Keplerian disk model for the ^12CO(3-2) velocity centroid maps of the simulated ALMA observations. We used a model that assumes a geometrically thick disk with an elevated emission surface, in which the emitting surface is parameterized by:

z(r) = z_0 × (R/1'')^ψ × exp(-[R/R_taper]^q_taper),

where R is the disk cylindrical radius in arcseconds, ψ dictates the flaring of the emission surface, and z_0 and R_taper are the reference disk aspect ratio at 1 arcsecond and the exponential taper reference radius in arcseconds, respectively.
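To make the fitted model concrete, the sketch below combines this emission-surface parameterization with the height-corrected Keplerian rotation and line-of-sight projection given by the equations that follow. The function names and default parameter values are illustrative assumptions only and do not reproduce Eddy's implementation or the best-fit results.

```python
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
AU = 1.496e11          # m

def emission_surface(R_arcsec, z0=0.3, psi=1.2, r_taper=np.inf, q_taper=1.0):
    """z(r) = z0 * (R/1'')^psi * exp(-(R/R_taper)^q_taper), in arcsec.
    Parameter values here are placeholders, not fitted values."""
    return z0 * R_arcsec**psi * np.exp(-(R_arcsec / r_taper)**q_taper)

def v_keplerian(R_arcsec, z_arcsec, m_star=1.0, distance_pc=100.0):
    """Keplerian speed corrected for the emission height, in m/s."""
    R = R_arcsec * distance_pc * AU          # 1 arcsec at 100 pc = 100 au
    z = z_arcsec * distance_pc * AU
    return np.sqrt(G * m_star * M_SUN * R**2 / (R**2 + z**2)**1.5)

def v_model(R_arcsec, phi, inclination_deg, v_lsr=0.0):
    """Line-of-sight projection keeping only the azimuthal component:
    v_mod = v_Kep * cos(phi) * sin(i) + v_LSR."""
    z = emission_surface(R_arcsec)
    v_kep = v_keplerian(R_arcsec, z)
    return v_kep * np.cos(phi) * np.sin(np.radians(inclination_deg)) + v_lsr

print(f"{v_model(1.0, 0.0, 30.0):.0f} m/s")  # along the red-shifted major axis
```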
However, for our fitting we assumed the limit R_taper=∞, which better fits our disk models while also reducing the number of free parameters.

For the fitting of the disk rotation, we assume that the rotation curve follows a Keplerian profile accounting for the height of the emission surface:

v_Kep = √(G M_star R^2/(R^2+z^2)^3/2),

with M_star the mass of the central star. In this case, the cylindrical radius R and the emission surface height z are in meters, converted using the distance to the source in our model of 100 pc. In the following, the disk velocity model is projected into the line of sight considering the contribution of the azimuthal velocity component only:

v_mod = v_Kep · cos ϕ · sin i + v_LSR,

where ϕ is the polar angle of the image pixel (measured east of north relative to the red-shifted major axis) and v_LSR is the systemic velocity, set to zero in our models. For the fitting procedure, we fix the disk inclination to the input model inclination and the distance to the system to 100 pc, and consider as free parameters M_star, the disk PA, z_0, ψ, v_LSR, x_0, and y_0. Then, a series of MCMC chains is run to find the best-fit parameters of the geometrically thick Keplerian disk model. For this paper, we used 128 walkers that take 2000 burn-in steps and an additional 500 steps to sample the posterior distributions of the model parameters. A delimited radial region of the disk is considered in the model fitting, set to [0.55,2.0], [0.58,1.85] and [0.6,1.7] arcseconds for inclinations of 5, 15 and 30 degrees, respectively. Finally, the velocity perturbations are extracted by subtracting the velocity centroid map of the best-fit disk model (v_mod) from the original (v_0).

An alternative way to look at the disk kinematic structure is to obtain an azimuthally averaged view of the disk velocities (radial profiles); however, in this paper we only studied the two-dimensional view of the deviations from Keplerian rotation. In principle, the axisymmetry of the VSI flows could be exploited with such an approach, while also boosting the signal-to-noise ratio of the simulated observations, reaching higher precision in velocity. Unfortunately, a degeneracy between the flows produced by the VSI and a massive planet might be faced when exploring the radial velocity profiles of the upper ^12CO(3-2) emission layer only, as suggested by our simulations (see Figure <ref>). Moreover, the extraction of the radial profiles is extremely sensitive to systematic errors and more computationally expensive. Nevertheless, looking at the velocity radial profiles has enormous potential to unravel VSI motions, by allowing further exploration of flow correlations among velocity components for both disk layers (see Section <ref>).

Finally, we highlight that alternative tools to Bettermoments and Eddy are also available, such as GMoments[<https://github.com/simoncasassus/GMoments>] <cit.> and Discminer <cit.>. Differences between methods can be found in terms of the flexibility of the models, specific features, and varying performance for particular targets (see, e.g., ).

§ RADIATIVE TRANSFER AND SIMULATED OBSERVATIONS: RESULTS §.§ Kinematic signatures: An idealistic view First, we analyze the kinematic signatures of our disk models in an idealistic case, for images without beam convolution and noise, and with a velocity resolution of 5 m s^-1.
We first study this case in order to have a reference of what would be extracted from our disk model synthetic predictions in an ideal case of unrealistically deep observations and perfect modeling. For this purpose, we extract the deviations from Keplerian rotation in the line-of-sight velocity maps computed from our RAW synthetic radiative transfer images. In order to extract these deviations in the perturbed disk maps, we subtract a second line-of-sight velocity map computed for a disk model following the equilibrium solution used as the initial condition of the simulations. To avoid effects from variations in the traced CO emission layer in the residuals, we only change the model velocities to the equilibrium solutions keeping the same CO number densities and disk temperature structure as the perturbed case. Evidently, this approach is not feasible to apply in real observations; however, as aforementioned, is presented as an ideal picture of the non-Keplerian signatures. We present the ^12CO(3-2) predictions for our set of simulations in Figure <ref>, for three different disk inclinations (5, 15 and 30 degrees). On top of the 2D map residuals from Keplerian rotation a line tracing v_0=0 is overlaid, indicating the approximate location of the semi-minor axis and tracing the magnitudes of the distortions created by the non-Keplerian motions in the line-of-sight velocity at the systemic velocity. Again, the disk is oriented in the sky such that its rotation is clockwise and the near-side is at the North-East.In the first column of Figure <ref>, we present the kinematic signatures of our VSI-unstable disk model without an embedded planet, showing ring-like residuals tracing the meridional flows produced by the VSI unstable modes, recovering the results presented in <cit.>.In the second and third columns, we show the cases with a Saturn-mass planet embedded in a VSI-unstable and an α-viscous disk, respectively. We observe that the signatures from the VSI are not seen along the region affected by the planet-induced damping (as discussed in Section <ref>), while signatures from the VSI are still visible in the outermost parts of the disk. These signatures are mixed with signatures of the spiral driven by the planet via Lindblad resonances, breaking the axisymmetry of the VSI kinematic signatures. The Saturn-mass planet in the α-viscous disk produces smooth signatures of Lindblad spirals and spiral wakes around the planet, with weak velocity magnitudes overall. In columns four and five, we show the cases with an embedded planet with the mass of Jupiter for our VSI-unstable disk and an α-viscous disk, respectively. In these scenarios, the massive planet produces a strong signature at its location. Such a feature has been previously denominated 'Doppler-flip' (see, e.g., ) and has a significant contribution from the planet's spiral wakes. The planetary spiral wakes produce a super-Keplerian feature outside the planet's radius and a sub-Keplerian feature inside the planet's radius; consequently, creating a dipole pattern. Additional kinematic signatures are introduced by the planets' inner and outer Lindblad spirals, and the sub- and super-Keplerian rings of gas along the gap and gap edges. Similar to the case for the Saturn-mass planet, for the VSI-unstable disk additional kinematic features are seen in the outermost regions of the disk, in interplay with the planetary spiral arms, which gives a complex kinematic structure with arcs and spiral-like non-Keplerian flows. 
From the comparison of the different models varying disk inclination (see Figure <ref>), we find that the disk inclination can considerably impact the extracted kinematic signatures. As the disk inclination is increased the planet-driven kinematic signatures are more prominent, while the VSI signatures' velocity magnitude remains fairly constant. These results are expected, since the perturbations in the gas velocity of the disk produced by the planet are strongest in the radial and azimuthal directions, which contribute more to the velocity projected into the line of sight for higher disk inclinations.We also explored the kinematic signatures dependence on the traced disk height by computing the ideal view of the deviations from Keplerian for our ^13CO(3-2) and C^18O(3-2) predictions, presented in Figures <ref> and <ref>, respectively.Moving from the most abundant tracer (^12CO) to the less abundant (C^18O), deeper layers of the disk are probed, in which the change in the morphology of the kinematic structure for different CO isotopologues strongly depends on the disk inclination.In the case of the VSI-unstable disk without embedded planets, the morphology of the non-Keplerian signatures remains fairly consistent for different CO isotopologues, as previously shown in <cit.>. On the contrary, clear changes could be seen among the residuals for the three different CO isotopologues for the models including a massive planet perturbing the disk. For low disk inclinations, we observe that for less abundant tracers, the planetary-induced non-Keplerian flows are weaker, isolating the VSI operating in the outermost regions of the disk, but with lower velocity magnitudes compared to the ^12CO(3-2) predictions. In addition, the contribution of the Lindblad spirals to the Keplerian model residuals weakens for less abundant tracers independent of disk inclination. For a disk inclination of 30^∘ the Doppler-flip at the planet location and the super-Keplerian ring at the gap's outer edge remain prominent independent of CO tracer, particularly in the Jupiter case. These features could be explained by the planetary spiral wakes being the strongest dynamical feature at the disk midplane layers, with vigorous azimuthal and radial flows. Similarly, the planet-induced Super-Keplerian ring of gas is fairly independent of disk height (see Figure <ref>). In contrast, VSI flows reach larger velocities at the disk's upper layers, and have a dominant meridional velocity (see Section <ref>).§.§ Kinematic Signatures: ALMA simulated observations In order to study a more realistic picture of the kinematic signatures that could be observed in VSI-unstable planet-forming disks, we produced simulated ALMA observations for our three VSI-unstable disk models. In Figure <ref>, we show the deviations from Keplerian rotation extracted from mock observations of ^12CO(3-2) using Eddy (see Section <ref>), for three different disk inclinations (i=[5^∘, 15^∘, and 30^∘]). As described in Section <ref>, the simulated observations are performed for a combination of ALMA configurations 7 and 4, with a resulting spatial resolution of 84×62 mas (8.4× 6.2 au). In terms of spectral resolution, the simulated observations are produced for a velocity resolution of 100 m s^-1, and a noise level with an RMS of 1 mJy beam^-1 per channel. 
These predictions are optimistic and follow the ideal design for kinematic detection of embedded planets <cit.>. While we assume an integration time that gives excellent uv-coverage, to reach the assumed noise levels larger integration times would be needed; for example, at 345 GHz (the approximate frequency of the J=3-2 transition of ^12CO), for a water vapor column of ≈ 0.9 mm, such an observation would take approximately 40 hours on-source. Nevertheless, such ambitious observations, needed to fully resolve the substructures in the disk gas velocities, are the goal of the community studying the kinematic structure of protoplanetary disks.

The model residuals presented in Fig. <ref> show that the deviations from Keplerian rotation induced by the VSI would be observed with clarity only in the case without embedded planets (first column). Arcs of VSI-induced red- and blue-shifted gas are also seen in the outermost regions of the Saturn-mass planet case for disk inclinations of 5^∘ and 15^∘; however, their velocity magnitude is weaker due to the global damping of the VSI induced by the planet, as discussed in Section <ref>. For the highest inclination explored, spiral-like signatures would be observed mixed with VSI arc-like residuals in the outer disk, which would be difficult to differentiate, for example, from signatures of spiral arms triggered by buoyancy resonances <cit.>. Nevertheless, we discuss a potential approach to disentangle VSI signatures in Section <ref>. In the case of an embedded Jupiter-mass planet, the planet-induced kinematic signatures stand out in the Keplerian model residuals. Important features are a Doppler-flip around the planet and large-scale Lindblad spirals. Unlike the ideal case, super- and sub-Keplerian signatures at the gap edges are weaker, possibly because the drop in the emission surface height at the gap region is not modeled. On top of that, global patterns appear in the residuals due to errors in the model, demonstrating that even with tightly constrained initial values for the free parameters, the fitting can introduce errors due to the limitations of the disk model. In particular, a quadrupole pattern is seen in the residual maps near the central region, due to errors in the model center. This set of simulated ALMA observations suggests that VSI signatures would be easy to identify only in disk regions unperturbed by fairly massive planets, limiting the chances of a robust detection of VSI. Also, our results indicate that VSI-turbulent gas motions would not prevent the detection of a Jupiter-mass planet in resolved gas kinematic observations. Finally, the signatures from a VSI-unstable disk with an embedded Saturn-mass planet would be difficult to observe with the current ALMA capabilities and challenging to interpret, so further analysis and observations might be required.

Additional substructures could be extracted by exploiting the information in the line profiles, which, combined with the line-of-sight velocity maps, could potentially disentangle the different scenarios. Variations of the line intensity peak and width relative to the disk background could trace deviations of the gas temperature and density for optically thick tracers, possibly tracing spiral arms and gaps produced by embedded planets <cit.>. In the case of line peak intensity maps of ^12CO, our set of simulations is not suitable for exploring variations in the temperature structure self-consistently; for that, global 3D simulations including radiative effects are required <cit.>.
In preliminary tests using the temperature structure provided by the thermal Monte Carlo calculations, we obtain spiral-like features, mostly tracing the planet gap region and the Lindblad spirals. However, these variations reach values below 1% relative to the disk background, while in recent observations relative variations up to 5% are found <cit.>. Therefore, additional simulations of embedded planets in turbulent protoplanetary disks including radiation-hydrodynamics are needed to robustly explore this observable, and to connect it to velocity deviations from Keplerian rotation.

In an exploration of line width maps of ^12CO(3-2), we find that variations of this quantity relative to the disk background can trace the planet's gap, and that non-thermal broadening effects are most prominent around the planet's location, consistent with previous studies (e.g., ). Moreover, in the residual maps, asymmetries appear inside the gap region for the Jupiter-mass case, also in agreement with previous findings <cit.>. VSI turbulent motions, however, produce arc-like features in the line width map residuals. These variations are relatively small, reaching values of only a few tens of m s^-1, which are challenging to extract and interpret <cit.>. These small line width residuals are consistent with the negligible non-turbulent broadening found in <cit.> for integrated line profiles. In our line width maps, artifacts are seen that arise from the influence of the back side of the disk when using single-Gaussian fits. Fitting both CO layers (front and back surfaces) is required to overcome such effects <cit.>. Careful self-consistent analysis of variations in the peak and width of the CO line will be provided in follow-up studies.

§ DISCUSSION §.§ Can we confirm VSI as the origin of kinematic signatures? Distinguishing VSI kinematic signatures from the signatures of other mechanisms can be very challenging, due to the possible resemblance of their imprints <cit.>. Moreover, there are physical processes that simulations suggest could induce structures similar to those of the VSI, yet observational predictions of their CO kinematics are still lacking. That is the case, for example, for magnetically-driven winds <cit.>. Therefore, we face the question: if we observe a kinematic structure that matches the expected signature of VSI, can we robustly conclude that VSI is operating in the disk?

A robust VSI confirmation might be possible by exploiting the information from both disk hemispheres, and invoking the symmetry of VSI flows relative to the midplane layer. As mentioned in Section <ref> <cit.>, flows induced by the VSI have a unique property: with respect to the midplane, they are anti-symmetric in the meridional direction and symmetric in the radial and azimuthal directions. Luckily, it is possible to explore such a feature by extracting the kinematic information of both disk hemispheres using observations of CO isotopologues, in which two emitting layers are observed, separated by the colder midplane region (see, e.g., channel maps of the HD 163296 disk presented in ). Currently, great efforts are being made to develop techniques to extract both CO emission layers, the front (or upper) and the back (or bottom) surfaces. By fitting a double-Gaussian profile to the collapsed molecular line data at each pixel, the front and back layers have been successfully extracted in the disks HD 135344B <cit.> and HD 163296 <cit.>.
However, deeper and higher resolution observations are needed to precisely disentangle both surfaces, which is crucial for kinematic analysis and the study of the spatially resolved non-Keplerian flows.

In order to demonstrate the possibility of exploiting the VSI symmetry relative to the disk midplane, we ran two models that each take into account only one hemisphere of the disk at a time: a model with only the upper hemisphere and a model with only the bottom hemisphere. By computing the expected deviations from Keplerian rotation for RAW ^13CO(3-2) synthetic images of each disk hemisphere, we can compare the front and back ^13CO layer residuals, shown in Figure <ref> for a disk inclination of 30^∘. Under the assumption that we could extract a resolved view of the back CO layer and a good fit of the emission surfaces, VSI quasi-axisymmetric rings of positive and negative residuals are recovered in both the front and back ^13CO emitting layers. Again, the residuals are dominated by the meridional velocity component, as expected from VSI; therefore, these maps can be interpreted as columns of gas moving in the same vertical direction in both disk hemispheres, a unique feature of VSI. Such symmetry is better seen in a deprojected view of the residual maps, displayed in the bottom row of Fig. <ref> (deprojected using diskmap[<https://github.com/tomasstolker/diskmap>] ). Additional modulations are seen to the East and West of the disk semi-minor axis for the front and back sides, respectively, coming from azimuthal velocity perturbations moving in opposite directions (i.e., super-Keplerian in one disk hemisphere and sub-Keplerian in the other hemisphere, shown in Figures <ref> and <ref>).

A visualization of the meridional flows moving in the same direction is shown in a 'coherence' [We apply the term 'coherence' to refer to a gas flow moving in the meridional direction as a coherent structure through both disk hemispheres at a particular radius. In the opposite case, the flow would be divided into two distinct structures, both moving toward (or away from) the midplane.] map (last panel of Figure <ref>). We define the coherence of the deprojected line-of-sight velocities from the upper and bottom layers as:

v_0,coherence = ( v_0,upper^deproj. × v_0,lower^deproj. ) / √(| v_0,upper^deproj. × v_0,lower^deproj. |),

where v_0,upper^deproj. and v_0,lower^deproj. are the deprojected line-of-sight velocities of the upper and bottom layers, respectively. The coherence map shown in Figure <ref> displays the normalized coherence, where rings of positive values reveal coherence of the vertical flow. Particularly clear coherence is seen in the ring of interest enclosed by black dashed lines, with both sides moving away from the observer. When compared to the cases of VSI-unstable disks with embedded massive planets (shown in Figures <ref> and <ref>), we observe that VSI quasi-axisymmetric rings of positive coherence are only present in the outermost regions of the disks with planets, once again demonstrating the planet-induced damping of the VSI. Moreover, such a coherence map also highlights the perturbations induced by the Jupiter-mass planet around its location (last panel of Fig. <ref>), with potential use for pointing towards localized signatures of massive planets.

Nevertheless, exploiting both CO emission layers of the planet-forming disk is extremely challenging.
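A minimal sketch of this coherence metric, applied to two deprojected line-of-sight velocity maps, is given below; the array names are placeholders and the input maps here are synthetic, so the snippet only illustrates the definition above rather than the actual analysis of the simulated observations.

```python
import numpy as np

def coherence_map(v0_upper, v0_lower):
    """Coherence of the deprojected line-of-sight velocities of the upper
    and lower layers: (v_up * v_low) / sqrt(|v_up * v_low|), i.e., the sign
    of their product compressed back to velocity units."""
    product = np.asarray(v0_upper) * np.asarray(v0_lower)
    with np.errstate(divide="ignore", invalid="ignore"):
        coherence = product / np.sqrt(np.abs(product))
    return np.nan_to_num(coherence)              # zero where both maps are zero

def normalized(coherence):
    """Scale to [-1, 1] for display, as in a 'normalized coherence' map."""
    peak = np.max(np.abs(coherence))
    return coherence / peak if peak > 0 else coherence

# Toy maps: an axisymmetric ring moving in the same direction in both layers
y, x = np.mgrid[-64:64, -64:64]
r = np.hypot(x, y)
ring = 40.0 * np.exp(-0.5 * ((r - 40.0) / 4.0) ** 2)          # m/s
v_up, v_low = ring, ring + np.random.normal(0.0, 5.0, ring.shape)
print(normalized(coherence_map(v_up, v_low)).max())            # ~ +1 on the ring
```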
On top of the high resolution required, mm-sized dust grains at the disk midplane can block parts of the emission from the back side layer, complicating even more its study. Despite our simplifications, it has been demonstrated that exploring both hemispheres of the disk in CO kinematic observations is possible. Thus, future deep high-resolution molecular line observations can confirm the symmetry of the VSI flows with respect to the midplane, a robust proof of the VSI operating in planet-forming disks.Current studies show that meridional flows induced by alternative mechanisms are directly correlated to the formation of a deep gap in the gas. Therefore, the gas would flow from the disk surface towards the gas-depleted region. Such a correlation is not typical for VSI signatures. As seen in Figure <ref>, the direction of the meridional flows is not tightly correlated with the density perturbations. Thus, quasi-axisymmetric meridional flows in a region without deep gaps in the gas would be consistent with VSI-induced kinematic signatures. Finally, in the opposite case, where deep gaps in the gas are correlated with the meridional flows, VSI is unlikely to be the origin, favoring massive planets or non-ideal magneto-hydrodynamical effects. §.§ Caveats In the following, we will discuss the limitations of our approach. First, the assumption of a locally isothermal equation of state in our hydrodynamical simulations is certainly a simplification. Under the assumption of a fast cooling disk (cooling timescales substantially shorter than the local orbital timescale), needed for the VSI to operate, this is still a valid approximation. However, the locally isothermal approach, which translates to an instantaneous cooling (t_ cool=0), is an extreme limit, which leads to vigorous VSI <cit.>. Even in rapidly-cooling disks (e.g., t_ cool∼ 0.01 t_ orb), the VSI will be slightly damped compared to a locally isothermal disk <cit.>. Nevertheless, simulations relaxing the isothermal assumption for fast cooling disks lead to similar VSI gas dynamics <cit.>, and planet-disk interactions <cit.> to the obtained in our simulations.On the other hand, simulations including regions with long cooling timescales, that do not fulfill the requirements for the VSI to operate, will result in a confined VSI-active layer <cit.> or almost complete suppression of the VSI, directly affecting the observability of VSI-signatures in gas kinematics. The cooling properties of the disk, thus, the ability of the disk to sustain VSI, is primarily regulated by the amount and properties of small dust grains in the disk <cit.>. Conditions of inefficient cooling in the outer disk can originate from infrequent dust and gas collisions near the disk atmosphere <cit.>, and/or an overall reduced dust-to-gas ratio of the small dust grains as a result of dust evolution <cit.>. Including regions with longer cooling times would also lead to additional planet-induced signatures from spirals generated via buoyancy resonances <cit.>.Second, the assumed thermal structure of the disk in the hydrodynamical simulations is not consistent with the theoretical predictions nor the observed structure from recent ALMA observations, in which a vertical gradient of the temperature is constrained. Such structure can modify the development of the VSI and planet-driven structures. 
Nevertheless, previous work has shown that in simulations including both, disk thermal structure and radiation-hydrodynamics, the VSI still operates vigorously in the fast cooling regions of the disk <cit.>. An additional set of simulations including planets in a VSI unstable disks with vertical temperature structure has still to be performed.Third, our radiative transfer models assume constant n(CO)/n(H_2) through the disk. Such an assumption is likely not to hold if the influence of VSI and planet-driven large-scale flows on the disk chemistry is taken into account. Mixing of material in the radial and vertical directions would modify the spatial distribution of molecules, therefore, changing the emitting surface, and abundance radial profiles of CO isotopologues <cit.>. Moreover, a planet-carved gap can significantly affect the density and thermal structure of the disk, altering the abundance of CO isotopologues around the gap location. These thermo-chemical effects are important to be included in future studies, as they would impact the observability of kinematic signatures in the circumstellar disk.Finally, in our work, a single Gaussian function is used to extract the information from the simulated ALMA observations. Such an approach could result in a line fitting merging information from the back and front emitting layers of CO, producing a velocity structure that does not fully reflect the disk gas dynamics. These effects are relevant for disks with intermediate inclinations, predominant in regions with lower gas densities (e.g., planet-carved gaps), while minimal along the disk semi-major and semi-minor axis. In our models, the obtained morphology of the velocity residuals from Keplerian rotation reflects the expected morphology from the front CO layer, as demonstrated in Figure <ref>. The implementation of routines to fit both emission layers at the same time is ideal for extracting the true velocity structures from the disk <cit.>. Moreover, as discussed in Section<ref>, such an approach could help to disentangle VSI signatures from planet-induced signatures. Exploring the effects of applying an improved line fitting procedure in our models is left for follow-up work.§ SUMMARY AND CONCLUSIONS In this paper, we presented a comprehensive study of the gas dynamics and kinematic signatures of planet-forming disks unstable to the vertical shear instability (VSI). Particularly, we explored the interplay between the VSI and structures induced by an embedded massive planet, and their resulting signatures observable in CO rotational line observations with ALMA. We performed this study by running global 3D hydrodynamical simulations, post-processed with radiative transfer calculations, to finally simulate mock ALMA observations.Specifically, we studied the effects on the disk dynamical structure of single planets with the mass of Saturn and Jupiter, and their imprints on the observable deviations from Keplerian rotation.We found that the presence of fairly massive planets embedded in the disk substantially affects the gas velocity structure produced by the VSI, damping the VSI unstable modes in the regions where the planets significantly modify the structure of the disk. Further, the damping is stronger by increasing the planet's mass, and is most effective in a region near the midplane layer of the disk.The effect of the planets on the VSI motions significantly alters the kinematic signatures. 
The observable kinematic signatures of the VSI are globally weakened, and only clearly visible tens of au radially outward from the planets' location. The VSI adds fine structure to the planet-induced kinematic signatures, with a complex interplay in the Saturn-mass case. For the case of an embedded Jupiter-mass planet, the planet-induced signatures dominate the kinematic structure of the disk, showing a clear Doppler-flip at the planet's location and spiral arms in the residuals from a Keplerian model.Furthermore, we compare simulations of VSI-unstable disk and simulations following a constant α viscosity prescription. This direct comparison highlights the predictions of the additional kinematic signatures produced by the VSI compared to the standard α viscous case. The more complex kinematic structure, found for the Saturn-mass planet case, showing a mixture of VSI modes and planet-induced spirals, might impede the identification of the planet and VSI in ALMA observations.Thus, simultaneous modeling of different CO isotopologues might be needed for robust planet detection, where the best strategy to isolate the planet signatures is to observe closer to the midplane of moderately inclined disks.Finally, we test an approach to confirm the presence of VSI motions in future high-resolution ALMA observations, by detecting the coherence of the perturbations with respect to the disk midplane. Such an approach is promising for revealing the VSI operating in disks.We conclude that the best chance to detect clear VSI signatures is to look for disk regions distant from observed deep continuum or molecular gas gaps, where the VSI-induced perturbations might still be active far from the influence of putative massive planets. In addition, exploring the flows' symmetries with respect to the disk midplane is a pathway to confirm VSI signatures in future CO rotational line observations.We highlight the potential of directly comparing deep ALMA CO observations with theoretical predictions of kinematic signatures. Robust interpretations could reveal the presence of embedded massive planets, signatures of disk instabilities, and constrain disk physical properties.In the near future, upgrades planned for the ALMA interferometer infrastructure <cit.> will significantly increase the sensitivity for line emission observations. This technological advance will allow a deeper study of planet-forming disks kinematics, revealing the fine structure of gas flows, probing regions closer to the disk midplane, and possibly resolving the circumplanetary region of embedded massive planets; thus, potentially revealing a comprehensive picture of planet-disk interactions in turbulent protoplanetary disks.We thank the anonymous referee for providing constructive comments on the manuscript. We thank the developers and contributors of the codes and software used throughout this work, including the developers of the Python packages Numpy <cit.>, Scipy <cit.>, Astropy <cit.> and Matplotlib <cit.>. M.B. thanks R. Teague and L. Flores-Rivera for providing constructive feedback on figures, and S. Andrews and N. Kurtovic for their advice in the use of CASA . M.B. thanks the exoALMA collaboration for fruitful discussions on protoplanetary disk kinematics. M.B. and M.F. acknowledge support from the European Research Council (ERC), under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 757957). T.H. 
acknowledges support from the European Research Council under the Horizon 2020 Framework Program via the ERC Advanced Grant Origins 832428. The set of numerical simulations presented was conducted on the COBRA supercomputer, hosted by the Max Planck Computing and Data Facility (MPCDF). § ADDITIONAL FIGURES | http://arxiv.org/abs/2310.18484v1 | {
"authors": [
"Marcelo Barraza-Alfaro",
"Mario Flock",
"Thomas Henning"
],
"categories": [
"astro-ph.EP"
],
"primary_category": "astro-ph.EP",
"published": "20231027205551",
"title": "Kinematic signatures of planet-disk interactions in VSI-turbulent protoplanetary disks"
} |
4 The Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada. Trustworthy Edge Machine Learning: A Survey Xiaojie Wang1, Beibei Wang1, Yu Wu5, Zhaolong Ning13, Song Guo2, and Fei Richard Yu4 1 School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China. 2 The Hong Kong University of Science and Technology, Kowloon, Hong Kong, China. 3 Corresponding author: Zhaolong Ning; email: [email protected]. 5 School of Cyber Security and Information Law, Chongqing University of Posts and Telecommunications, Chongqing 40006, China. January 14, 2024 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= The convergence of Edge Computing (EC) and Machine Learning (ML), known as Edge Machine Learning (EML), has become a highly regarded research area by utilizing distributed network resources to perform joint training and inference in a cooperative manner. However, EML faces various challenges due to resource constraints, heterogeneous network environments, and diverse service requirements of different applications, which together affect the trustworthiness of EML in the eyes of its stakeholders. This survey provides a comprehensive summary of definitions, attributes, frameworks, techniques, and solutions for trustworthy EML. Specifically, we first emphasize the importance of trustworthy EML within the context of Sixth-Generation (6G) networks. We then discuss the necessity of trustworthiness from the perspective of challenges encountered during deployment and real-world application scenarios. Subsequently, we provide a preliminary definition of trustworthy EML and explore its key attributes. Following this, we introduce fundamental frameworks and enabling technologies for trustworthy EML systems, and provide an in-depth literature review of the latest solutions to enhance trustworthiness of EML. Finally, we discuss corresponding research challenges and open issues. Trustworthiness, machine learning, edge computing, distributed training and inference, limited network resources. § INTRODUCTION With the development of Fifth-Generation (5G) wireless communication technologies, billions of wireless devices such as smartphones, sensors, and wearables can connect to the Internet. By 2025, the number of Internet of Things (IoT) devices are expected to reach 25.2 billion <cit.>, and these devices will generate enormous data with more than 79 Zeta bytes per year <cit.>.Driven by the IoT, big data, and powerful computing, Artificial Intelligence (AI) has made further breakthroughs in intelligent applications such as natural language processing <cit.>, computer vision <cit.>, and robotics <cit.>. Machine Learning (ML) technique is an important research area in AI, focusing on simulating human learning processes. It enables computer systems to learn from data and extract patterns for tasks like prediction and classification <cit.>. Recent years, ML models have been increasingly applied across various industries and vertical domains. 
In light of this trend, Edge Computing (EC) <cit.> has emerged to meet diverse application demands. Compared to cloud computing, EC deploys computing resources closer to users and data sources at the network edge, and thus enables low transmission latency and high communication efficiency. Therefore, EC is an essential component of the future computing system and serves as the capillary connecting AI demands with everything. Enterprises, including Google, Microsoft, Intel and IBM, have developed pilot projects to demonstrate the benefits of EC in paving the last mile of AI <cit.>. These efforts facilitate a wide range of AI applications, from real-time video analytics <cit.>, virtual reality, augmented reality to smart healthcare <cit.>, autonomous vehicles <cit.> and Unmanned Aerial Vehicles (UAVs) <cit.>. §.§ Overview of Trustworthy Edge Machine Learning Edge Machine Learning (EML) combines EC with ML techniques to fully leverage the advantages and potential of both. On the one hand, EC enables ML training and inference close to the data source, and allows ML algorithms to quickly process data and extract important features, thus improving real-time decision making. On the other hand, ML, with its powerful learning and inference capabilities, enables intelligent management of edge resources to meet different application requirements, such as precision and latency <cit.>. In the upcoming Sixth-Generation (6G) networks, interconnected intelligence encompasses numerous smart devices, sensors, and edge nodes, which engage in data collection, processing, and exchange across various domains <cit.>. This interlinks humans, objects, and intelligence, thus forming a vast networked ecosystem. Therefore, it is necessary to consider the impact of trustworthiness on the application and development of EML: First, the trustworthiness of data processing and decision-making at the network edge is of paramount importance. The 6G network aims to provide high-speed, low-latency communications across a broad range of scenarios, thereby placing increased responsibilities on edge devices for tasks, such as intelligent transportation and medical diagnostics. In these applications, the accuracy of predictions and the fairness of outcomes are crucial to prevent severe societal consequences. Second, one characteristic of 6G is its heightened emphasis on security and privacy requirements. Meanwhile, there can be an increasing amount of personal data for transmission and processing. Ensuring the trustworthiness of EML can further enhance data privacy and security, safeguarding against malicious attacks and unauthorized access <cit.>. Last, the 6G network integrates sensing, wireless communication, and distributed computing, enabling edge devices to quickly access and process multi-modal data, and acquire real-time feedback. Ensuring trustworthiness of EML can provide transparency to users through interpretable algorithms. Moreover, interpretable intelligent decision making and control facilitates the network to dynamically adjust resource allocation, thus improving the level of intelligent collaboration in the integrated network of communication, sensing and computing.§.§ Comparisons and ContributionsCurrently, a number of studies focus on trustworthy EML <cit.>. However, the development on trustworthy EML is still in its infancy. Also, there exists several reviews on trustworthy ML <cit.>.As shown in Tab. 
<ref>, authors in <cit.> provide different requirements for making ML trustworthy and their corresponding methods from a human-centered perspective. They also discuss various testing techniques to verify and validate the AI systems based on trustworthy requirements. Authors in <cit.> present a detailed review of representative techniques for trustworthy ML from a computational point of view and discuss their practical applications in real-world scenarios. Differently, authors in <cit.> present a theoretical framework for important aspects of AI trustworthiness from the perspective of the entire life cycle of an AI system. Meanwhile, they systematically introduce available methods for realizing trustworthy ML.In summary, these surveys focus on enhancing the trustworthiness of ML across various stages of its life cycle. In contrast to them, we focus on the definition, fundamental attributes, technologies, solutions and challenges of trustworthy EML by considering both different application requirements and the limited network resources at the network edge. To the best of our knowledge, this survey is the first to provide a comprehensive summary of trustworthy EML from the perspective of the combination of AI and EC. The contributions of this survey can be summarized as follows: * We discuss the necessity of trustworthy EML, and introduce attributes in achieving trustworthiness at the network edge. * We introduce fundamental frameworks and enabling techniques for trustworthy EML in terms of model training and inference, security and privacy, interpretability, and resource optimization, respectively. * We provide a comprehensive and in-depth investigation of recent studies on trustworthy EML based on its attributes and requirements, including optimality, reliability, interpretability, fairness, and incentives. We also discuss lessons learned for each kind of approaches, and present a series of open issues and key challenges of trustworthy EML.§.§ Organization As shown in Fig. <ref>, we introduce necessity and key attributes of trustworthy EML in Section 2. We present frameworks and technologies used to achieve trustworthiness in Section 3, and discuss solutions to realize trustworthy EML in Section 4. Challenges and open issues for trustworthy EML are provided in Section 5, and the survey is concluded in Section 6. For ease of reference, major acronyms used throughout our survey are listed in Tab. <ref>. § TRUSTWORTHY EDGE MACHINE LEARNING In this section, we first discuss why trustworthiness is needed for EML. Next, we provide a brief introduction to the definition of trustworthiness. Finally, we discuss basic attributes of trustworthy EML. §.§ Why Need Trustworthy EML In practical EML applications, there are many issues degrading system performance. Trustworthiness can minimize the negative effects of issues on EML, build trust between EML and users, and promote its positive effects in practical applications. In the following, we discuss the above mentioned issues in detail, which also motivate researchers to explore the trustworthiness.§.§.§ Imbalance issues Limited computational resources, constrained storage capacities, and finite energy supply of edge devices may lead to imbalanced performance metrics in EML systems, impacting user experience and satisfaction. First, for applications with real-time requirements, such as autonomous driving, it is necessary to reduce model complexity to accommodate resource-constrained edge devices. 
However, this can result in decreased model accuracy, which even may raise potential hazards in human life safety, social trust, and legal liability. Second, edge devices are usually supported by limited battery capacities, such as UAVs and robots. Energy-efficient strategies to reduce the computational load of models may increase inference latency. Last, to enhance accuracy of distributed training and inference, edge devicesrequire multiple training iterations and gradient updates, which may introduce significant communication latency and energy consumption.§.§.§ Unreliable issues The unreliable systems caused by security and privacy concerns can be an obstacle to the widespread adoption of EML systems. On the one hand, edge servers and devices deployed in a distributed environment are prone to security vulnerabilities and weaknesses. Uncontrollable environment factors, such as physical media, signal propagation, network topologies, and routing variations, may enable attackers to exploit communication vulnerabilities and gain access to sensitive data. Additionally, there are potential attack risks in training and inference process of ML, such as poisoning attacks <cit.> and adversarial attacks <cit.>, which are designed to undermine model availability. On the other hand, EML has been applied in different applications, including real-time video surveillance and healthcare diagnostics. However, an unreliable system can result in exposure of sensitive data, posing significant risks to user privacy.§.§.§ Uninterpretable issuesBlack-box models (e.g., Deep Learning (DL)) presuppose a large number of model parameters for training, with the result that the underlying principles of algorithmic decision making and prediction results are difficult to interpret <cit.>. EML is applied in mission-critical domains such as autonomous driving <cit.> and medical diagnostics <cit.>, where trustworthiness is required, since wrong decisions can lead to serious consequences. Lack of interpretability in models hinders their ability to identify the causes of errors, not to mention correcting them. Moreover, model decision-making must conform to legal and moral standards. The absence of interpretability makes it complicated to verify whether the model decision meets these standards, thus increasing legal and moral risks.§.§.§ Unfair issues ML-assisted decision-making offers advantages over human decision-making, such as reduced fatigue and ability to process large volumes of data for complex tasks. However, it is susceptible to biases and can result in unfair decisions <cit.>. To be specific, biases in data and algorithms can lead to discrimination against specific groups, resulting in unequal distribution of resources and opportunities. In areas such as life, health, and entitlements, unfair decisions can negatively impact individual well-being and raise moral and ethical controversies. For instance, the use of recidivism prediction software by U.S. courts has found to assign higher risk scores to African Americans compared to Caucasians with similar profiles <cit.>. People have reasonable expectations of fairness and equality in decision-making, and failure to meet these expectations can erode trust in institutions and acceptance of the system.In summary, the existence of these issues affects the widespread adoption of EML. 
Trustworthy EML systems is important to provide well-balanced, robust, interpretable, and unbiased services.§.§ Concepts and Attributes of Trustworthy EMLBased on subsection I-B, we can understand that trustworthy ML and trustworthy EML are conceptually related, although they have different focuses. However, there is currently no publicly available literature that explicitly provides a definition for trustworthy EML. Given this situation, we can refer to definitions of trustworthy ML in literature <cit.> and <cit.> for a preliminary definition of trustworthy EML.Authors in <cit.> define trustworthy ML as "A framework is designed to validate a system's trustworthiness by evaluating the evidence related to its specified criteria. It ensures that the system fulfills the expectations of users and stakeholders in a verifiable manner". Authors in <cit.> define trustworthy ML as "Programs and systems developed to emulate human-like problem-solving capabilities provide advantages and convenience to individuals without posing any threat or potential harm". From the above definitions, it is observed that trustworthy ML extensively focuses on the entire life cycle of ML, including data preparation, model development, training, deployment, and oversight. Its purpose is to ensure model accuracy, robustness, and interpretability, as well as to prevent potential impacts of attacks and biases.In comparison, trustworthy EML places a more specialized emphasis on development and deployment within the context of EC. It necessitates addressing additional challenges and factors specific to edge environments, including limited computational and storage resources, unstable network connections, and real-time service requirements. Therefore, trustworthy EML can be defined by "A system executes ML tasks and models in an EC environment, employs a series of technologies and strategies to guarantee system performance, and ensures that users and stakeholders trust the system intelligent decision and control, while minimizing potential risks and hazards".Based on the definition, there are four basic attributes of trustworthy EML: optimality, reliability, interpretability, and fairness, as shown in Fig. <ref>. We elaborate them in the following content. §.§.§ Optimality It refers to determining system configuration by considering and adjusting multiple metrics to achieve performance balance. Optimality is one important attribute of trustworthy EML, which not only ensures the performance and reliability of the system, but also optimizes resource utilization to improve user experience.However, there are many challenges in achieving optimality: 1) The heterogeneous computation and storage capabilities of edge devices require reasonable resource allocation to ensure model scalability; 2) The diverse demands of applications in terms of accuracy, latency, and energy consumptionfurther increase the complexity of system resource allocation; 3) Various wireless network architectures and communication strategies demand adaptive algorithms for different network scenarios to achieve trade-offs among different performance metrics.§.§.§ ReliabilityIt refers to consistent, stable, and trustworthy abilities of a system to process data and provide services. In network edge environments, attackers can exploit the distributed nature to increase the covertness and impact of their attacks. 
Some representative attacks include: * Poisoning attacks, which manipulate and inject training data to mislead ML models, as well as compromise system reliability and usability <cit.>. Due to the heterogeneity of edge devices, the data distribution may be different among them, and attackers may take advantage of such differences to insert malicious data and trigger model training biases. * Adversary attacks, which aim to deceive and mislead the output of a model by making a minor but intentionally designed modification to the input data <cit.>. Compared to large-scale cloud-based models, compression models running on edge devices typically have fewer parameters and computational resources, and adversarial examples can easily mislead them. * Inference attacks, attackers of which can infer sensitive information by observing model predictions or accessing intermediate states, posing privacy threats and undermining system trustworthiness<cit.>. Since network edges often use model compression and lightweight techniques, the feature representation of the model may not be rich enough, making it easy for attackers to launch inference attacks. * Distributed Denial of Service (DDoS) attacks, where multiple devices flood the server with excessive requests, restricting access for legitimate devices. In edge environments, devices may have different computing resources and network connectivity <cit.>. Attackers can exploit this diversity to launch distributed attacks, which are difficult to detect and respond. * Eavesdropping attacks, in which malicious nodes exploit the open nature of the radio channel to intercept and decode data, compromising transmission integrity and security <cit.>. Due to the distributed and heterogeneous nature of the edge environment, eavesdropping attackers can easily hide their presence and are difficult to detect. These attacks share a common characteristic in the context of EC: the distribution and limited network resources provide attackers with great opportunities. Reliability requires EML systems have proactive and reactive defense mechanisms, such as encrypted communications, robust model aggregation, and anomaly detection.§.§.§ Interpretability It refers to the ability of ML models to explain their decision-making process and predict outcomes in an understandable manner. Due to resource constraints of edge devices, it is often necessary to use lightweight models. In this scenario, interpretability techniques need to be developed to ensure that interpretations are informative while also conforming to limitations imposed by available computational resources.The taxonomy of interpretability can be categorized from various perspectives, as shown in Fig. <ref>. Authors in <cit.> identify three dimensions of interpretability: global and local interpretability, time constraints, and user expertise. Depending on the model stage, interpretability can also be categorized intopost-hoc interpretability and ante-hoc interpretability <cit.>. To be specific, post-hoc interpretability involves using additional methods to understand model predictions after training and prediction processes. This approach does not affect model construction, but aims to assist people in understanding the model prediction process and the reasoning behind its decisions in specific cases. Ante-hoc interpretability focuses on selecting understandable algorithms and features during the model design and construction process, to ensure the overall interpretability of the model. 
Ante-hoc interpretability models are generally comprehensible to humans, because their decisions are based on interpretable features and rules.§.§.§ Fairness It refers to ensure impartiality and reduce bias in the decision-making process. In a distributed EC network, data do not need to be transmitted to a central server, reducing the bias caused by data centralization and transmission. Additionally, the EC environment allows personalized ML models to be constructed based on the unique data characteristics of different nodes, enhancing adaptability to various regions or groups, and thereby promoting fairness.However, achieving fairness at the network edge also presents distinct challenges. During the selection of training nodes, certain edge devices may be overlooked due to factors like geographical locations, small data volume, and poor network conditions <cit.>. This oversight can lead to unbalanced system performance in different groups and areas, which may introduce fairness concerns. Moreover, constrained resources of edge devices require a balance between fairness and other metrics.In certain situations, assigning a great weight to individuals from minority groups can help mitigate the impact of data bias and promote fairness <cit.>. In addition, algorithmic bias can be mitigated by incorporating fairness constraints into the learning process <cit.>. Furthermore, it is possible to adjust the output to meet the desired fairness criteria.§ FRAMEWORKS AND TECHNOLOGIES FOR TRUSTWORTHY EDGE MACHINE LEARNING In this section, we focus on distributed learning frameworks and technologies that play a crucial role in ensuring the trustworthiness of EML. These frameworks facilitate collaborative learning across a multitude of devices, offer benefits such as accelerated training speeds, enhanced resource utilization efficiency, and improved model generalization capabilities. The utilization of these technologies serves as a means to address existing challenges and enhance system trustworthiness. §.§ Frameworks of Distributed LearningFederated Learning (FL), as a novel framework, enhances EML by enabling collaborative model training across decentralized devices, improving model performance while preserving data privacy and reducing communication overhead. Therefore, FL, along with extensions like decentralized learning and semi-decentralized learning, lay a solid foundation for building efficient, secure, and trustworthy EML systems. Following, we provide a detailed introduction to these three types of learning frameworks.§.§.§ Federated Learning It enables multiple participants to collaboratively train a global model without uploading original data to the central server. Unlike traditional distributed training methods, FL ensures that each edge node retains full control over its local data. This unique approach enables data availability without compromising data visibility. Consequently, each end device can train a shared model based on its own dataset without directly sharing data with other participants <cit.>.A typical FL system consists of a central server and a set of end devices, forming a star structure. The training process in FL involves two main phases: local update and global aggregation <cit.>. In the local update phase, each device performs gradient descent to minimize the local loss and uploads its latest parameters. 
In the global aggregation phase, the central server collects and aggregates the updated local model parameters before distributing new global parameters for the next training iteration. FL has been successfully applied in edge caching and computation offloading applications, vehicular networks, and other areas of edge intelligence <cit.>. Yang et al. <cit.> classify FL into three categories: horizontal FL, vertical FL and federated transfer learning. Authors in <cit.> present a fine-grained description of important challenges involved in the area of FL communication and networking applications. §.§.§ Decentralized LearningIt utilizes Peer-to-Peer (P2P) communications to support direct device-to-device parameter aggregation and update. Decentralized learning reduces dependence on a central server and helps to alleviate network bandwidth pressure and computing resource constraints of central servers. At the same time, this approach promotes knowledge sharing among edge devices, reduces communication delay, and improves scalability in the learning process.One widely adopted decentralized learning approach is gossip averaging, where edge devices exchange model parameters or gradient information with neighboring nodes randomly, eventually achieving a consistent state among all nodes <cit.>. By leveraging the computational resources of edge devices, gossip learning aims to enable efficient model training and inference for low-latency and high-reliability task processing in EC environments <cit.>. Similarly, swarm learning <cit.> provides a decentralized learning framework based on blockchain, with the purpose of enabling high privacy, security, resilience, and scalability.Deep Reinforcement Learning (DRL) <cit.> combines properties of DL with Reinforcement Learning (RL) for solving difficult problems such as resource allocation optimization. It can enable intelligent decision making according to the changing network environments and diverse application demands. In particular, the extension of DRL known as Multi-Agent DRL (MADRL) <cit.> is designed to address complex interaction problems, such as collaboration and competition. In a multi-agent system, each intelligent agent serves as an independent decision-making entity that can observe the environment state and take appropriate actions. For instance, in the context of collaborative driving <cit.>, agents not only interact with the environment but also cooperate with other agents to optimize their respective tasks.§.§.§ Semi-decentralized LearningThe semi-decentralized architecture combines the star architecture of FL and the P2P architecture of decentralized learning, achieving a hybrid framework <cit.>. In this architecture, local parameter aggregation occurs through P2P communications among edge devices within the same region, while model parameters obtained from each local region are periodically uploaded to the central node for aggregation. This collaborative learning approach allows for both global convergence on the central server and local optimization on edge devices, to reduce centralized load pressure and communication overhead <cit.>. Unlike fully decentralized architectures, P2P communications in the semi-decentralized architecture is easier to maintain and manage <cit.>. Moreover, the semi-decentralized architecture is promising to keep both privacy protection and communication efficiency, since most local updates do not need to be transmitted to central servers. 
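Across the star-topology, peer-to-peer, and hybrid architectures above, training alternates local updates with some form of parameter averaging. The following is a minimal FedAvg-style sketch of that loop; the quadratic local objectives, learning rate, number of rounds, and data-size weighting are illustrative assumptions rather than the specification of any particular system.

```python
import numpy as np

def local_update(w_global, data, lr=0.1, epochs=5):
    """Local phase: each device runs gradient descent on its own data."""
    X, y = data
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of a least-squares loss
        w -= lr * grad
    return w

def global_aggregation(local_models, sizes):
    """Global phase: server computes a data-size-weighted average of parameters."""
    weights = np.array(sizes) / np.sum(sizes)
    return sum(wk * wm for wk, wm in zip(weights, local_models))

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
# Three edge devices with differently sized local datasets.
clients = []
for n in (30, 50, 80):
    X = rng.normal(size=(n, 2))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    clients.append((X, y))

w_global = np.zeros(2)
for rnd in range(20):                                   # communication rounds
    locals_ = [local_update(w_global, c) for c in clients]
    w_global = global_aggregation(locals_, [len(c[1]) for c in clients])
print("estimated global model:", w_global)
```

In a decentralized or semi-decentralized setting, the same local-update routine would be combined with neighbor-to-neighbor or regional averaging instead of a single server-side aggregation step.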
§.§ Technologies of Trustworthy EML Although existing learning frameworks can improve EML communication efficiency and privacy protection, they are not yet able to fulfill the four dimensions of trustworthiness presented in subsection II-B. In this subsection, we describe concepts and details of techniques that are frequently used in the latest research related to trustworthy EML. §.§.§ Training and Inference Acceleration Technologies During the model training phase, researchers usually adopt the gradient sparsification technique to reduce the number of transmitted gradients, model complexity, and resource consumption. Over-the-Air Computation (AirComp), as a novel wireless technology, is expected to integrate communication and computation over the air, thus improving communication efficiency. In the practical model deployment and inference phases, technologies such as model compression, model partitioning, early exit, and Knowledge Distillation (KD) are often used to accelerate inference. They play a key role in realizing the optimality of trustworthy EML, and are introduced as follows. AirComp: It utilizes the signal superposition characteristics of wireless multiple access channels to perform mathematical function computation, including arithmetic means, weighted averages, geometric means, polynomial sums, and Euclidean norms <cit.>. Unlike traditional multiple access techniques, which require separate transmission and decoding of information, AirComp integrates computation into communication, giving rise to a new technology characterized by "computation-in-communication". AirComp enables edge devices to simultaneously transmit their respective local updates and compute the expected function (e.g., the weighted average function), as illustrated in Fig. <ref>. This significantly enhances communication and computation efficiency, substantially reducing the latency required for multiple access and data fusion. Furthermore, there is ongoing research in utilizing AirComp to accelerate edge inference, although it is still in its early stages <cit.>. Gradient sparsification: It transmits only the most valuable gradients, setting the others to zero or close to zero, thus reducing communication and computation costs and improving training efficiency. One common approach for gradient sparsification is Top-k, which selects the k parameters with the highest absolute gradient values, while the gradients of other parameters are set to zero <cit.>. Another method is to zero out parameter gradients with absolute values below a certain threshold, effectively reducing the number of transmitted parameters. These techniques contribute to reducing model complexity while maintaining accuracy <cit.>. Model compression: It is used to address the challenge of deploying Deep Neural Networks (DNNs) on resource-constrained end devices. By reducing the number of model parameters and required storage space, lowering model complexity, and improving energy efficiency, it aims to achieve fast local inference of DNNs. Weight pruning <cit.> is an effective way to realize lightweight DL models and is currently the most widely used model compression technique. The basic idea is to use a pre-defined threshold to guide the pruning of weights, and those with absolute values less than the threshold are discarded.
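The Top-k selection and threshold-based weight pruning described above can both be expressed as simple masking operations. The sketch below, with arbitrary tensor shapes, k, and threshold, is only meant to make the idea concrete.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k gradient entries with largest magnitude; zero the rest."""
    flat = grad.ravel()
    keep = np.argsort(np.abs(flat))[-k:]          # indices of the k largest |g|
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

def magnitude_prune(weights, threshold):
    """Weight pruning: discard weights whose absolute value is below the threshold."""
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(1)
grad = rng.normal(size=(4, 8))
weights = rng.normal(scale=0.5, size=(4, 8))

sparse_grad = topk_sparsify(grad, k=5)            # only 5 of 32 entries are uploaded
pruned_w, mask = magnitude_prune(weights, threshold=0.3)
print("non-zero gradients uploaded:", np.count_nonzero(sparse_grad))
print("weights kept after pruning: ", int(mask.sum()), "of", mask.size)
```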
Quantization <cit.>, as another important method, focuses on reducing storage and computational costs by using a few bits to represent model parameters.Model partitioning: It plays an important role in reducing the communication and computation burden on edge nodes, as well as maximizing the utilization of distributed computing resources. Given the limited computing resources of end devices and the complex structure of learning models, DNNs can be partitioned into several different parts. The computationally intensive part can be offloaded to powerful edge servers or even uploaded to cloud servers to accelerate DNNs inference <cit.>. In addition, model partitioning helps to protect data privacy, because sensitive data can be processed locally without the need to be transferred to a central server <cit.>. Early exit: It aims to improve model accuracy and efficiency in edge inference by outputting results from the middle of the neural network, avoiding data traversal across the network <cit.>. By early exit, unnecessary computation can be reduced, thus significantly accelerating the inference speed of the model. For large DL models and complex tasks, early exit can significantly reduce inference time. Compared to model partitioning, early exiting is performed within a single model, and does not require dividing the model into multiple sub-models or coordinating communication among multiple devices. Knowledge distillation: It helps to accelerate inference, improve generalization capabilities, and enhance model robustness in resource-constrained environments <cit.>. The core idea of KD is to train a small student model to mimic the outputs of a large teacher model, leveraging the hierarchical abstraction of features. By learning the generalization capabilities of the teacher model, the student model gains flexibility in complex edge environments, allowing personalized model architectures to meet different needs. Furthermore, considering that learning models are vulnerable to malicious attacks, KD can serve as a defense mechanism <cit.>. A student model can learn from the features of a teacher model without directly handling sensitive data, thereby reducing the risk of sensitive information leakage.§.§.§ Security and Privacy-Preserving Technologies The devices involved in EML are usually deployed close to the user side to realize real-time data processing and decision making. Both the security of computation and the risk of sensitive data leakage may reduce the trust of users in EML. Therefore, security and privacy-preserving technologies can be utilized to minimize the susceptibility of EML systems to attacks and prevent data leakage, thereby enhancing user confidence in the system. Next, we provide a detailed description of security and privacy-preserving technologies, respectively.Security technologies: The security of EML can be viewed from several perspectives, including computational security, model security, and communication security. In the following, we introduce these related security technologies in detail, which include Trusted Execution Environment (TEE), anomaly traffic detection, Byzantine defense and Physical Layer Security (PLS). * TEE: It is an isolated processing environment designed to provide computing and storage capabilities with security and integrity guarantees <cit.>. The basic idea is to allocate segregated hardware memory for sensitive data, ensuring secure transmission, storage, and processing. 
TEE allows independent execution of multiple applications while restricting unauthorized access. Therefore, deploying TEE on edge nodes can ensure the confidentiality of both local models and training data <cit.>. * Anomaly traffic detection: It is used to automatically identify abnormal data. In resource-constrained edge environments, it helps to promptly identify abnormal traffic, thus mitigating DDoS attacks <cit.>. Moreover, during the process of edge training, data updates from various training nodes can be analyzed to detect malicious updates based on differences between pairs of remote updates <cit.>. This process enables the acquisition of a trustworthy global model. * Byzantine defense: It is a security mechanism to address Byzantine faults in distributed systems. In distributed training, the primary objective is to establish an accurate global model even in the presence of a small number of malicious clients <cit.>. By guarding against Byzantine faults, the system tries to maintain correct and trustworthy operations. * PLS: It is used to enhance the security of wireless communication systems. Different from traditional encryption methods, PLS relies on physical properties of communication channels rather than algorithms and keys. Common PLS techniques include: artificial noise <cit.>, cooperative jamming <cit.>, and beamforming <cit.>. In particular, artificial noise is used to mask original data from eavesdroppers by intentionally adding noise to the transmitted signal in the communication channel. In cooperative jamming, multiple nodes work together to protect communication privacy by interfering with potential eavesdroppers <cit.>. Beamforming technology concentrates signal energy in a specific direction while reducing signal strength in other directions, making it difficult for eavesdroppers to intercept. Privacy-preserving technologies: Ensuring data privacy usually requires the use of different techniques and methods, such as Differential Privacy (DP), Homomorphic Encryption (HE), secret sharing, Secure Multi-party Computation (SMC) and Confidence Score Mask (CSM). In the following, we provide a brief introduction to them. * DP: It aims to provide statistical guarantees for individual data while minimizing the disclosure of individual privacy <cit.>. The main idea is to add noise, such as Laplace noise, to the original query results (numerical or discrete values). The added noise prevents the inference of significant information about individuals from query results, preserving personal privacy. * HE: It is a class of encryption mechanisms that support processing and computation of cipher texts <cit.>. Depending on the type and number of operations it supports, HE can be categorized into three types: Partially Homomorphic Encryption (PHE), Somewhat Homomorphic Encryption (SHE), and Fully Homomorphic Encryption (FHE). In actual training and inference process, HE techniques can ensure the security of model parameters and raw data, thus developing trustworthy EML models. However, existing HE solutions need to address the problem of high computational overhead, especially in resource-constrained edge environments <cit.>. * Secret sharing: It is also a cryptographic technique used to protect data privacy <cit.>. The basic idea is to split sensitive information into multiple parts and distribute them to different participants, making it possible to reconstruct the complete information only with parts of participants. 
This method is important in information transmission and storage to prevent data leakage caused by failure of a single node. In comparison to HE, secret sharing is easy for implementation, particularly in resource-constrained environments, since it eliminates intricate key management and distribution processes. * SMC: It enables collaborative computation on a combined dataset without compromising the data privacy of individual parties <cit.>. SMC is particularly suitable for scenarios that involve multiple participants. For instance, it can enable collaborative training of diagnostic models in medical research institutions without sharing sensitive patient data <cit.>. By implementing SMC at the edge nodes, trustworthy collaborative computation can be achieved, and data availability without data visibility can be guaranteed. * CSM: It is a privacy-preserving technique specialized for inference attacks <cit.>. CSM aims to reduce the leakage of private information from the confidence scores of model outputs. It can obfuscate the confidence score by adding random noise, making it difficult for an attacker to accurately infer private information from the original data. In addition, techniques such as regularization, transfer learning, Domain Adaptation (DA), and Generative Adversarial Networks (GANs), can be leveraged to protect privacy, which are specified as follows. * Regularization: It decreases model complexity and helps prevent the model from overfitting on the training data. By limiting the range of model parameters, regularization reduces the possibility of attackers obtaining sensitive information from model outputs. * Transfer Learning: It allows knowledge learned from one domain (source domain) to be applied to another domain (target domain) without sharing the original data, thus reducing the risk of data leakage. * DA: Different from general transfer learning, the goal of DA is to improve model performance by adapting to the data distribution in the target domain. This makes the model less dependent on the source domain data. * GANs: It can generate synthetic data that is statistically similar to real data but does not contain the individual information. The synthetic data can be used for model training, thus reducing the need for real data and lowering the risk of privacy leakage. Other technologies for reliability: Mobile Target Defense (MTD) <cit.> is an active defense mechanism that prevents network attacks by continuously and dynamically changing attack surface. Its objective is to create uncertainty for attackers and shift the asymmetry between attackers and defenders. Compared to MTD, cyber deception techniques <cit.> employ more aggressive strategies, intentionally providing false information (such as baits and honeypots) to mislead attackers <cit.>. In addition, Network Function Virtualization (NFV) can decouple security functions (e.g., intrusion detection) from proprietary hardware devices and enable on-demand creation and elastic scaling <cit.>.Blockchain is a cryptographic, decentralized, and user-transparent technology that provides secure transactions and computing in network edgeenvironments <cit.>. It is a chained data structure consisting of multiple blocks linked together. After adding a new block, the records in that block are broadcast to other nodes in the chain to ensure data consistency. Compared to the above techniques, blockchain has unique technical features such as consensus protocols and distributed ledgers. 
These features enable blockchain to effectively regulate security risks in EML systems. First, the consensus mechanism ensures the establishment of trust among devices for model training.Second, the tamper-proof distributed ledger keeps the recording of authentic and reliable information, promoting a transparent process <cit.>. Last, it rewards participating nodes based on their contributions, which incentivizes selfish nodes to provide their local resources <cit.>.§.§.§ Interpretability Technologies Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are applicable to various models, while Gradient-Weighted Class Activation Mapping (Grad-CAM) specifically targets Convolutional Neural Network (CNN) models. In the following, we offer a brief overview of these technologies.LIME: It is a model-agnostic interpretability method, and focuses on interpreting individual instances rather than the entire model to provide local explanations for specific test inputs <cit.>. LIME introduces random perturbations to the input instances of a black-box model and trains an interpretable surrogate model, such as decision tree and linear model. The weights of the surrogate model can directly reflect the significance of features and their impacts <cit.>. To ensure interpretability and local fidelity, LIME tries to minimize the discrepancy between the surrogate model and the black-box model at the instance point.SHAP: It can be regarded as a unified approach that combines LIME and shapley values <cit.>. The shapley value of a single feature is the weighted average of the marginal contribution of that feature to a subset of all feature combinations. SHAP is the basis for fairly distributing contributions of each feature to the model and has three desirable properties (i.e., local accuracy, missingness, and consistency) <cit.>. Generally, SHAP is applied as an interpretability method based on feature correlation in edge scenarios such as smart healthcare <cit.> and intrusion detection <cit.>, not only to effectively interpret final decisions, but also to support industry experts to quickly optimize and evaluate the correctness of their judgments. Grad-CAM: It is a visual local interpretability method designed specifically for CNN models <cit.>. It calculates the gradient of the target class with respect to the last convolutional feature map of the CNN. This gradient information helps to identify image regions that have the biggest contribution to model predictions. By back-propagating gradients and multiplying them with the feature map, the importance weight is assigned to each pixel. These weights are then used to generate a heat map, which highlights the significant regions in the image. As an extension of Class Activation Mapping (CAM) <cit.>, Grad-CAM supports a wider range of CNN models and does not require further changes to the model architecture. Since the size of feature maps is usually much smaller than the input image, heat maps produced by Grad-CAM may not provide precise localization. Authors in <cit.> combine a fine-grained visualization method of guided back-propagation with Grad-CAMto produce high-resolution activation maps. 
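To make the weighting step described above explicit, the NumPy-only sketch below assumes that the last convolutional feature maps and the gradients of the target class score with respect to them have already been extracted from some CNN; the random arrays merely stand in for those quantities.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM: channel weights from global-average-pooled gradients,
    then a ReLU-rectified weighted sum of the feature maps."""
    # feature_maps, gradients: arrays of shape (channels, height, width)
    alpha = gradients.mean(axis=(1, 2))                     # importance weight per channel
    cam = np.tensordot(alpha, feature_maps, axes=(0, 0))    # weighted combination of maps
    cam = np.maximum(cam, 0.0)                              # keep positive evidence only
    return cam / (cam.max() + 1e-8)                         # normalized coarse heat map

# Stand-ins for the last conv-layer activations and the class-score gradients.
rng = np.random.default_rng(2)
feats = rng.random((64, 7, 7))
grads = rng.normal(size=(64, 7, 7))
heatmap = grad_cam(feats, grads)
print("heat map shape:", heatmap.shape)   # (7, 7); upsampled to the input size in practice
```

In practice the coarse map is upsampled to the input resolution and overlaid on the image, and guided back-propagation can then be combined with it to sharpen the visualization, as noted above.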
The guided Grad-CAM tries to provide intuitive and effective interpretation in clinical medical image analysis, helping physicians to determine the location, type, and severity of lesions, and facilitating the advancement and application in the field of medical imaging analysis <cit.>.Other technologies for interpretability: Attention mechanism is a widely used technique that mimics features in the human visual and perceptual system to process and interpret input data. Visual attention is a method for visualizing the attention weights of a model, usually for text and image data.Rule-based interpretability techniques explain the behavior of a model by defining a set of rules. These rules can be created manually or generated by automatic learning techniques. Representative techniques include decision trees, rule sets, and expert systems. §.§.§ Resource Optimization Technologies At the network edge, reasonable scheduling of constrained communication resources (e.g., bandwidth) and computing resources (e.g., heterogeneous edge devices) is important to achieve optimality and reliability for trustworthy EML. Additionally, realizing fairness in the current EML systems heavily relies on device scheduling algorithms. In this context, researchers typically formulate resource allocation as an optimization problem and employ techniques such as two-dimensional search, Alternating Direction Method of Multipliers (ADMM), Sequential Convex Approximation (SCA), and game theory to address the problem. Following, we provides a brief overview of the above optimization techniques.Two-dimensional search algorithm is a technique to find the optimal solution in a two-dimensional parameter space. In edge training scenarios, achieving optimal allocation of uplink resources requires consideration of factors such as uplink data rates, local training speeds, and training batch sizes. Two-dimensional search algorithm gradually narrows down the search scope through binary partitioning, so that the optimal solution can be determined <cit.>.Lyapunov optimization analyzes system stability under various uncertainties and disturbances by constructing a non-negative function called Lyapunov function. It can be used to solve problems of bandwidth allocation and selection of optimal devices in trustworthy EML systems. The Lyapunov optimization method decouples the original problem into a series of optimization problems for individual time slots, each of which is solved separately <cit.>.Joint allocation of communication and computational resources is usually involved in trustworthy EML, but the optimization problem is usually complex. Lagrangian method is a mathematical technique for solving such constrained optimization problems. Based on Lagrange multipliers, it can transform an optimization problem with constraints into an unconstrained problem <cit.>. ADMM technique is particularly well-suited for problems with a decomposable structure. It decomposes complex optimization problems into a series of subproblems, and progressively approaches the global optimum by alternately updating variables of these sub-problems <cit.>. Due to the involvement of multiple variables and constraints in joint optimization problems, even if one variable is fixed, the coupling ofremaining variables may still render the problem non-convex. 
SCA handles this challenge by decomposing the original non-convex optimization problem into a sequence of convex optimization subproblems, gradually approximating the solution to the original problem and simplifying the problem-solving process.Multi-Arm Bandit (MAB) technique falls under the category of RL, and is usually described by an agent making choices among multiple arms, each of which is associated with an unknown probability distribution, to maximize the overall reward. This is an important decision problem for weighing rewards of exploring an unknown arm against utilizing a known arm when resources are limited. As an extension of MAB, Contextual Multi-Arm Bandit (CMAB) <cit.> and Contextual Combinatorial Multi-Arm Bandit (C^2MAB) <cit.> consider contextual information (e.g., environment states and user characteristics) to make decisions. CMAB focuses on single-arm selection, while C^2MAB chooses among combinations of multiple arms.Game theory is often used to solve complex strategic problems <cit.>. It helps to understand how people weigh different options and consider strategies of their opponents in the decision-making process, provides a tool for analyzing strategic interactions, and helps to predict possible outcomes and optimal strategies.Other technologies for optimization: Stochastic Mirror-Prox algorithm is used to solve optimization problems with convex loss functions and constraints. It combines the ideas of mirror mapping and stochasticity, and tries to find near-optimal solutions in high-dimensional and large-scale data scenarios <cit.>. Pareto optimization is a technique to find a set of solutions that can achieve the best trade-off among multiple conflicting objectives <cit.>. § SOLUTIONS FOR TRUSTWORTHY EDGE MACHINE LEARNING In this section, we explore solutions for achieving trustworthiness of EML based on four basic attributes: optimality, reliability, interpretability, and fairness. Research on trustworthiness is promising to help develop reliable, secure, manageable, and acceptable EML systems. In addition, in real-world scenarios, incentivizing nodes to share computational and storage resources is crucial for improving system performance, as well as enhancing reliability and fairness. Therefore, we present existing solutions for incentive mechanisms at the end of this section.§.§ Solutions for Optimality As discussed in section II-B, achieving optimality in trustworthy EML requires to balance multiple conflicting metrics including latency, energy consumption, and accuracy. In this regard, we provide a summary of existing studies that focus on the trade-off between accuracy and latency, as well as that between accuracy and energy consumption.§.§.§ Trade-offs Between Latency and Accuracy In EC scenarios, the accuracy discrepancy among ML models deployed at the network edge may have a significant impact on user experience. Realizing a balance between accuracy and latency is crucial to ensure reliable and efficient decisions. Tab. <ref> provides a brief description and technologies used for relevant solutions. On the one hand, researchers can address this trade-off by training and inference acceleration techniques presented in subsection III-B1. For example, gradient sparsification reduces computational and communication overheads by lessening the number of uploaded gradient, thus allowing ML models to run on resource-constrained devices. 
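A minimal sketch of top-k gradient sparsification with a local error-feedback (compensation) memory is given below; the class interface and parameter names are illustrative rather than drawn from the cited works.

```python
import numpy as np

class TopKSparsifier:
    """Keep only the k largest-magnitude gradient entries per round and
    accumulate the discarded remainder locally (error feedback)."""

    def __init__(self, dim, k):
        self.memory = np.zeros(dim)   # residual of previously dropped entries
        self.k = k

    def compress(self, grad):
        corrected = grad + self.memory
        idx = np.argsort(np.abs(corrected))[-self.k:]   # indices to transmit
        sparse = np.zeros_like(corrected)
        sparse[idx] = corrected[idx]
        self.memory = corrected - sparse                # keep what was dropped
        return idx, corrected[idx]                      # upload only k values
```

Only the k indices and values are uploaded each round; the compensation memory is exactly what the gradient-correction and error-compensation schemes discussed next refine further.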
Authors in <cit.> utilize gradient correction to handle insignificant gradients with the purpose of improving model convergence. At the same time, in order to reduce communication overhead, they use local gradient-based batch normalization update mechanisms to mitigate the effect of delayed gradients. In contrast, authors in <cit.> and <cit.> take into account the impact of non-Independent Identically Distributed (non-IID) properties of localized datasets. The former uses global gradients of previous rounds to estimate current global gradients and update current zero-sparse local gradients. This method aims to mitigate the communication overhead and accelerate convergence to the global optimization. The latter sets an adaptive threshold to identify and remove redundant updates without retraining the model. Unlike centralized training with pure gradients, decentralized training involves both gradient and consensus updates. Authors in <cit.> provide an error compensation sparsification method to accelerate decentralized training. The key of the method lies in identifying components of information exchange in each iteration (i.e., sparse model updates) and applying targeted error compensation specifically for these components. Model partitioning is promising to meet the requirements of inference accuracy and latency <cit.>. Authors in <cit.> provide an automated partitioning approach, which stores lightweight models on edge devices and returns results as long as a minimum accuracy threshold is met. Otherwise, the data is transferred to the cloud for accurate inference results. The goal of approaches in <cit.> is to identify segmentation and bit-width assignments of weights and activations that reduce the overall latency without sacrificing accuracy. Authors in <cit.> develop a device-edge collaborative framework for on-demand DNN inference. By considering both static and dynamic network environments, it jointly optimizes DNN partitioning and right-sizing to maximize inference accuracy while ensuring latency requirements. Efficient resource allocation plays a crucial role in balancing model accuracy and learning latency. Authors in <cit.> aim to enhance model accuracy within a given training time budget. They first establish a benchmark for the problem by deriving a lower bound on the performance loss. Next, they devise an optimal bandwidth allocation strategy for devices. Finally, they introduce a greedy scheduling algorithm to select devices with the shortest update time. Differently, authors in <cit.> consider Communication-and-Computation (C^2) resource constraints. They provide a closed-form solution for joint batch size selection and communication resource allocation, aiming to achieve the best learning performance while considering resource budgets. Similarly, researchers in <cit.> aim to address the challenge of joint allocation of C^2 resources. Instead of training, the work focuses on edge inference in a multi-user system and considers batching and early exiting techniques to improve inference accuracy. According to subsection III-B1, the AirComp technology is able to compute a weighted average of local updates based on the wireless multiple access channel, thereby reducing the computational load on the edge server and minimizing training latency. However, channel quality and noise can impact the Mean Square Error (MSE) of AirComp. To address this challenge, some researchers focus on power control. 
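As a rough, self-contained illustration of why power control matters for AirComp (a toy real-valued simulation, not a reproduction of any cited scheme), channel inversion with a common scaling factor and a server-side denoising step can be sketched as follows; all numbers are made-up example values.

```python
import numpy as np

rng = np.random.default_rng(0)
K, P_max, noise_std = 20, 1.0, 0.1
w = rng.normal(size=K)                 # local updates (scalars for simplicity)
h = rng.uniform(0.2, 1.0, size=K)      # real-valued channel gains

# Channel-inversion power control: p_i = sqrt(eta)/h_i, with eta chosen so
# that the weakest device still respects the power budget p_i**2 <= P_max.
eta = P_max * (h.min() ** 2)
p = np.sqrt(eta) / h

# Signals add up over the air; the server divides by K*sqrt(eta) to denoise.
y = np.sum(h * p * w) + rng.normal(scale=noise_std)
estimate = y / (K * np.sqrt(eta))
mse = (estimate - w.mean()) ** 2
print(f"aggregation MSE: {mse:.2e}")
```

Because eta is limited by the weakest channel, deep fades force either heavy noise amplification or the exclusion of devices, which is the trade-off the power-control and device-selection studies below address.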
For example, in <cit.>, the joint optimization of transmission power of edge devices and denoising factors at the server is performed. Specifically, authors in <cit.> assume that local gradients are IID with zero mean and unit variance. In subsequent research <cit.>, they extend the optimization approach to include learning hyperparameters and transmission power control. In contrast, authors in <cit.> consider statistical features of gradients are non-IID. They aim to minimize the aggregation error by optimizing the transmission power of each device under an average power constraint.Beside power control, device selection strategies are commonly used to reduce the MSE of AirComp. For instance, authors in <cit.> aim to combine device selection and receive beamforming to improve learning accuracy and parameter aggregation speeds. They use sparse and low-rank methods to solve the hybrid combinatorial optimization problem with non-convex quadratic constraints. Authors in <cit.> propose an algorithm that minimizes Lyapunov drift for device scheduling. They utilize local updates from unselected devices to enhance accuracy and efficiency of global model aggregation. In contrast, authors in <cit.> employ Reconfigurable Intelligent Surface (RIS) to assist AirComp model aggregation. They intend to jointly optimize receive beamforming, RIS phase shift, and device selection to improve learning accuracy.Despite differences in approaches regarding wireless resource optimization, device selection, and RIS-assisted channel reconfiguration, they all share a common objective of minimizing local update aggregation errors by aligning received signals <cit.>. Differently, authors in <cit.> propose a dynamic learning scheme, which dynamically adjusts local learning rates to adapt to fading channels, thus reducing the impact of wireless distortion on learning accuracy.§.§.§ Trade-offs Between Energy Consumption and Accuracy In practical scenarios, excessive demands for high learning accuracy can lead to high energy consumption. Striking a balance is essential to maintain learning accuracy while meeting energy limitations. We summarize some representative studies addressing the trade-offs between accuracy and energy consumption in Tab. <ref>.Model compression methods such as pruning and quantization are commonly used to address this balance. Some studies focus on either pruning <cit.> or quantization <cit.>, while authors in <cit.> incorporate the both. An energy-aware model compression method for various data streams is proposed in <cit.>, aiming to reduce energy consumption of edge devices. Specifically, the energy-aware model compression is formulated as a multi-step optimization problem, in which the model is partially quantized or pruned at each step. Given the nature of the multi-step problem, RL is used to find the optimal model compression strategy. In addition, some studies address the balance between learning accuracy and energy consumption by scheduling edge devices and managing communication resources. For instance, authors in <cit.> jointly optimize the selection of IoT devices and transmission power to address the trade-offs between learning accuracy and energy consumption. 
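Returning to the compression side of this trade-off, magnitude pruning and uniform quantization, the two operations combined in the multi-step schemes above, can each be sketched in a few lines; the sparsity level and bit-width below are arbitrary examples.

```python
import numpy as np

def prune_by_magnitude(w, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of the weights."""
    thr = np.percentile(np.abs(w), sparsity * 100)
    return np.where(np.abs(w) < thr, 0.0, w)

def quantize_uniform(w, bits=8):
    """Symmetric uniform quantization to `bits` bits (returns dequantized values)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return w
    return np.round(w / scale) * scale

w = np.random.randn(1000)
w_compressed = quantize_uniform(prune_by_magnitude(w, 0.7), bits=4)
```

An RL agent, as in the energy-aware scheme above, would choose the sparsity and bit-width at each step instead of fixing them in advance.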
Authors in <cit.> leverage complex interactions among environmental contextual information (e.g., workload, amount of available computing resources, and data quality) to select available User Equipments (UEs) and propose an approximation algorithm to find the suitable aggregator location.Authors in <cit.> investigate the joint adaptive configuration and bandwidth allocation in edge-assisted real-time video analysis systems. They consider factors such as energy consumption, accuracy, and system latency to select suitable configurations for multiple video streams. Authors in <cit.> improve learning accuracy and energy efficiency by optimizing data partitioning and rate control. They formulate a multi-objective optimization problem, by considering continuously varying communication rates. The authors simplify the problem by assuming that the server buffer capacity is infinite and the one-shot data arrival exists at the sensor side, and then utilize hierarchical ordering, objective merging, and variable reduction to obtain an optimal solution.Lesson 1: With reasonable resource allocation and device scheduling approaches based on technologies such as two-dimensional search, ADMM and SCA, the performance of EML systems can be improved. However, collaboration among devices is crucial in dynamic edge environments to handle variations in resource availability. Lightweight techniques, such as model partitioning, early exit, and model compression, should be integrated based on specific requirements to achieve a balance among model accuracy, task processing latency, and energy consumption.Decentralized and semi-decentralized learning can be used to enhance the robustness and reduce communication costs of EML as discussed in subsection III-A. However, decentralized learning architectures may require good connectivity for communications among all edge devices, which is difficult to scale to large-scale systems, especially when devices in different regions are required to participate in model training. In addition, the semi-decentralized architecture still relies on a central server to handle the aggregation and updating of model parameters. As a result, efficient approaches are still required to balance different performance metrics.§.§ Solutions for Reliability In this subsection, we discuss three aspects: First, security solutions for EC networks defend against DDoS and eavesdropping attacks, ensuring availability of the edge network and security of data transmission; Second, security solutions for learning models address threats such as model poisoning and adversarial inputs, ensuring the trustworthiness and robustness of learning models; Finally, privacy-preserving solutions focus on handling sensitive data involved in model training and inference. In the following, we discuss them in detail. §.§.§ Security Solutions for EC Networks Despite the existence of multiple security threats, the unique characteristics of the network edge lead researchers to primarily focus on specific types of attacks. In this context, DDoS attacks and eavesdropping attacks are the main focus of current researchers due to their generalization and potential hazards. Fig. <ref> illustrates the defense of DDoS and eavesdropping attacks at the network edge, and relevant solutions are summarized in Tab. <ref>.Defense strategies for edge DDoS attacks: Anomalous traffic detection and resource allocation are two common strategies for Edge DDoS Mitigation (EDM). 
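Before surveying concrete systems, the reconstruction-error idea behind many anomaly-based detectors (for example, the autoencoder variants discussed below) can be sketched with a linear stand-in; the feature extraction, component count, and threshold are placeholders.

```python
import numpy as np

def fit_normal_profile(X_normal, n_components=5):
    """Learn a low-rank profile of normal traffic features (PCA via SVD)."""
    mean = X_normal.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_normal - mean, full_matrices=False)
    return mean, Vt[:n_components]

def anomaly_scores(X, mean, components):
    """Reconstruction error: large when a flow does not fit the normal profile."""
    Xc = X - mean
    recon = Xc @ components.T @ components
    return np.sum((Xc - recon) ** 2, axis=1)

# Usage sketch: calibrate a threshold on held-out normal traffic, e.g.
# thr = np.percentile(anomaly_scores(X_val, mean, comps), 99).
```

Flows whose scores exceed the calibrated threshold are flagged for further handling.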
During DDoS attacks, attackers typically send large amounts of traffic to flood the target system, rendering the service unavailable. As a result, researchers typically use anomaly traffic detection techniques to analyze traffic in real time and detect unusual requests.Authors in <cit.> propose FlowGuard, an edge server-centric DDoS anomaly detection scheme for IoT. It consists of two main processes: 1) flow filtering, and 2) flow handling. The former is responsible for detecting DDoS attacks and maintaining flow filtering rules. The latter analyzes suspicious streams, identifies malicious streams based on Long Short-Term Memory (LSTM) models, and classifies them by CNN. Differently, authors in<cit.> propose outlier-aware Auto-Encoder (oAE), a semi-supervised anomaly detection model. It detects anomaly flows based on a limited number of labeled outliers and an oAE-based loss function. In another study <cit.>, authors design a semi-supervised Dynamic line Graph Neural Network (DGNN) for intrusion detection. This method transforms network traffic into spatio-temporal graphs, applies DGNN to extract spatial information, and captures the contextual evolution of communication between IP pairs over time. Compared to FlowGuard <cit.>, semi-supervised-based approaches <cit.> avoid the reliance on high-quality labeled data. From the perspective of resource allocation, authors in <cit.> discuss four key constraints for EDM: capacity constraints, proximity constraints, latency constraints, and limited service range constraints. Meanwhile, they present two EDM approaches, EDMOpti and EDMGame, to address these constraints and find optimal or near-optimal solutions for DDoS mitigation. In contrast, authors in <cit.> concentrate on the defensive resource scheduling problem for EC nodes. They utilize virtualized Intrusion Protection Systems (vIPSs), which are container-carrying intrusion protection systems with self-defense capabilities. The approach involves pooling and coordinating idle vIPSs from local and neighboring EC nodes to achieve a balanced defense workload. Moreover, authors in <cit.> combine MTD with cyber deception technique to mislead attackers and manipulate their perceptions. They propose a lightweight defense framework based on Software-Defined Networking (SDN) that can be easily deployed in IoT environments, without requiring significant modifications to the existing network architecture. Defense strategies for eavesdropping attacks: Cooperative jamming based on PLS can enhance the confidentiality of the communication process <cit.>. Authors in <cit.> propose a secure FL scheme, which utilizes devices that do not participate in FL, such as Sensor Nodes (SNs), to send jamming signals to defend against eavesdropping attacks. They optimize local training time, model uploading time, and transmission power of Federated Clients (FCs) to obtain the optimal pairing of FCs and SNs. Another defense strategy is jamming power allocation, which focuses on concealing transmission behavior. Authors in <cit.> propose a game theory-based cooperative jamming power allocation strategy, considering the eavesdropper as a strategic player.RIS technology, unlike artificial jamming, enhances the security of wireless communication networks by adjusting signal amplitudes and phases <cit.>. Authors in <cit.> propose a DRL-based approach to optimize beamforming strategies between the base station and RIS, aiming to counter eavesdroppers in dynamic environments. 
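The quantity these physical-layer defenses ultimately shape is the secrecy rate; a toy computation under cooperative jamming is sketched below, where all gains, powers, and the noise level are made-up example values rather than parameters from any cited system.

```python
import numpy as np

def secrecy_rate(P_tx, P_jam, h_rx, h_eve, g_rx, g_eve, noise=1.0):
    """Secrecy rate = max(0, legitimate-link rate - eavesdropper rate)."""
    rate_rx = np.log2(1 + P_tx * h_rx / (noise + P_jam * g_rx))
    rate_eve = np.log2(1 + P_tx * h_eve / (noise + P_jam * g_eve))
    return max(0.0, rate_rx - rate_eve)

# Sweep the jamming power: the jammer hurts the eavesdropper (g_eve large)
# much more than the intended receiver (g_rx small), so some jamming helps.
powers = np.linspace(0, 10, 101)
rates = [secrecy_rate(5.0, pj, h_rx=1.0, h_eve=0.8, g_rx=0.05, g_eve=0.6)
         for pj in powers]
best_jamming_power = powers[int(np.argmax(rates))]
```

Game-theoretic and RL-based schemes automate exactly this kind of power and beamforming choice against a reactive adversary.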
Instead, authors in <cit.> consider a non-cooperative game between the base station and an intelligent attacker. They jointly optimize power allocation and beamforming to improve the secrecy rate. Meanwhile, RL is utilized to predict attack methods and allocate power accordingly. Differently, authors in <cit.> utilize cyber deception to send true information to the intended receiver while injecting fake information to confuse eavesdroppers. Traps are strategically deployed to attract eavesdroppers and provide them with increasingly clear fake messages, thus establishing a secure communication channel between senders and receivers. This method ensures the security of exchanged information, even if eavesdroppers gain access to the secret channel information.

§.§.§ Security Solutions for Learning Models

In recent years, security research on EML has focused on defending against poisoning attacks, and a few studies consider adversarial attacks in EML. Distributed learning makes poisoning attacks insidious, and malicious nodes can easily tamper with local data to affect model training. In addition, adversarial attacks deceive model inference through a small perturbation, leading to erroneous outputs. Given the proximity of edge intelligent devices to users, attackers can easily feed adversarial samples to EML. Fig. <ref> provides an illustrative example of model attack resistance at the network edge, and relevant security solutions for learning models are summarized in Tab. <ref>.

Defense strategies for poisoning attacks: Based on the difference between anomalous and normal updates, some researchers use anomaly detection techniques to weed out malicious updates. Authors in <cit.> propose a scoring model that employs kernel density estimation to evaluate updates from remote clients. They statistically approximate the optimal threshold to distinguish malicious updates from clean ones. Authors in <cit.> develop an unsupervised anomaly detection method based on Support Vector Machines (SVM). They introduce a separate validation operation for each potentially malicious local model to improve anomaly detection accuracy. However, this approach comes at the cost of increased time for the anomaly detection process. Differently, authors in <cit.> propose a weight-based detection scheme. It provides edge nodes with small validation datasets to detect and filter anomalous parameters uploaded by malicious end devices. Based on the detection results, edge nodes set appropriate parameter weights to eliminate the effect of pseudo-parameters on the model. In addition, they use DP techniques to provide privacy measures for sensitive data.

Moreover, some studies use Byzantine fault-tolerant aggregation algorithms to ensure the robustness and accuracy of the global model in the presence of malicious users. Authors in <cit.> consider different scenarios with varying proportions of malicious entities. For the scenario with a majority of honest participants (Byzantine < 50%), they design a specialized truth discovery aggregation scheme to eliminate malicious model updates. In the scenario with a majority of Byzantine participants (Byzantine ≥ 50%), they employ a filter based on maximum cliques to ensure the overall model quality.

Since adversaries may learn private information from local model updates, some researchers consider privacy-preserving Byzantine-robust learning schemes.
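Setting the privacy layer aside for a moment, the similarity-based filtering at the core of several robust-aggregation schemes can be sketched as follows; the reference point, the kept fraction, and all names are illustrative choices, not the exact rules of the cited works.

```python
import numpy as np

def filtered_mean(updates, keep_frac=0.7):
    """Drop the client updates least aligned with a robust reference, then average.

    updates: (n_clients, dim) array of local model updates.
    """
    reference = np.median(updates, axis=0)          # robust reference direction
    norms = np.linalg.norm(updates, axis=1) * np.linalg.norm(reference) + 1e-12
    cos_sim = updates @ reference / norms
    n_keep = max(1, int(keep_frac * len(updates)))
    kept = np.argsort(cos_sim)[-n_keep:]            # most similar updates survive
    return updates[kept].mean(axis=0)
```

The schemes below perform this comparison under homomorphic encryption or multi-party computation, so that plaintext updates are never exposed to the aggregator.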
For example, the studies in <cit.> and <cit.> both use cosine similarity to identify suspicious local updates as well as HE-based mechanisms to protect privacy. The difference is that the former uses a two-trapdoor HE-based mechanism to prevent key disclosure and data leakage, while the latter uses FHE to provide secure aggregation. However, HE-based schemes rely on complex ciphers, which lead to high computational overhead. In order to construct a lightweight model while maintaining training accuracy, authors in <cit.> design a lightweight secure aggregation protocol that utilizes two servers for model aggregation. In the presence of poisoning attacks, authors in <cit.> utilize three-Party Computation (3PC) to maintain the robustness and privacy of local models simultaneously during global model aggregation. Instead of relying on a central parameter server, authors in <cit.> propose a fast and computationally efficient Byzantine-robust algorithm for fully decentralized training systems. Their algorithm utilizes a new sequential, memory-assisted, and performance-based criterion for training on logical rings while filtering out Byzantine users. Similarly, the research in <cit.> also focuses on robust aggregation for fully decentralized training systems. The difference from <cit.> is that it uses blockchain to provide a transparent process for data aggregation, with the purpose of improving the security of the training process.

Defense strategies for adversarial attacks: Authors in <cit.> propose a dynamic defense mechanism that combines techniques such as KD, MTD, and Bayesian Stackelberg games to improve model robustness, especially the classification accuracy of EML in an adversarial setting. Authors in <cit.> focus on adversarial attacks in Industrial AI systems (IAISs). They propose a new defense decision-making function for fast detection of adversarial samples in IAISs with a large number of data inputs, which tries to satisfy the latency and privacy-preservation requirements of industrial environments.

§.§.§ Privacy-preserving Solutions for Learning Models

EML involves training and inference on local devices, which may contain private information such as identity information, location data, and health records. Membership Inference Attacks (MIAs) can reveal whether individual data are in the training set or not, and model inversion can infer internal information about the model, such as the input and training data. In the following, we discuss existing strategies for the above-mentioned privacy attacks, which are also summarized in Tab. <ref>.

Defense strategies for MIAs: CSM techniques can reduce the effectiveness of MIAs by hiding the true confidence scores returned by the target classifier. Authors in <cit.> propose MemGuard, a defense method against black-box MIAs. It adds carefully crafted noise to the confidence score vectors predicted by the target classifier, creating adversarial examples that mislead attacker classifiers. MemGuard operates in two phases: finding the noise vector under utility-loss constraints and adding the noise with a certain probability to satisfy a given utility-loss budget. In contrast, DP techniques process training data and the query outputs of models to protect privacy and reduce the success of MIAs. Authors in <cit.> present a DP framework based on output perturbations. In order to strike a balance between privacy preservation and model accuracy, they propose a tighter upper bound on the global sensitivity of model parameters.
Based on the upper bound, the overall degree of noise injection is controlled by injecting DP noise into randomly selected neurons in the output layer of the baseline neural network.Since ML models can be overfitted in training data, an attacker can determine whether an input belongs to the training data by analyzing the output of ML models. Therefore, regularization facilitates to reduce the difference in model behavior on its training and test data, thus helping to mitigate MIAs. For example, authors in <cit.> propose adversarial regularization, which ensures that the model is indistinguishable between its training data and other datasets (with the same distribution). They use the gain of the inference attack as a regularization term for the classifier to minimize the prediction loss of the model and the maximum gain of the inference attack. Differently, authors in <cit.> add distance between output distributions of training and non-training data, computed by Maximum Mean Difference (MMD), as a new regularization term to the objective function of the target classifier. The new regularization term forces the classifier to generate similar output distributions for its training data and non-training data.Additionally, authors in<cit.> propose a defense method, where the model is trained on a different but related dataset, avoiding direct access to original sensitive dataset. Authors in <cit.> utilize an unprotected model trained on private data and transfer its knowledge to a student model trained on labeled reference data. Authors in <cit.> employ GANs to generate new data that required for training. MIAs may fail because the target model does not directly learn from the source dataset. To enhance quality of the generated data by a GAN, authors utilize truncation techniques and clustering algorithms during the generation process for different types of data.Defense strategies for model inversion attacks: DP can also be utilized to mitigate model inversion attacks, as demonstrated in <cit.>. The authors concentrate on developing privacy-preserving strategies for transparent model, and introduce an α-violation privacy risk estimator combined with DP, to defend against inversion attacks. In another research <cit.>, DP and CSM are combined to reshape the probability distribution and confuse attacker classifiers. This method preserves the order of confidence scores in vectors, ensures the minimum loss of classification accuracy, and eliminates the need to train new models, thus reducing the calculation cost. Differently, authors in <cit.> utilize secret sharing to resist against model inversion attacks.In a departure from traditional methods, authors in <cit.> employ transfer learning to resist against model inversion attacks while maintaining training accuracy. Additionally, authors in <cit.> notice that channel randomness interferes with model updates from each worker, and the updates from multiple workers create significant interference within a limited bandwidth. This interference hides the exact model update trajectory of each local node, thus preventing model inversion attacks and protecting data privacy. In addition, through analog transmission and ADMM approach, worker nodes upload disturbed model updates to the parameter server. 
This approach not only reduces the communication pressure through the aggregation of over-the-air perturbations, but also has a protective effect on data privacy for scenarios where curious parameter servers exist.Lesson 2: Existing attacks mainly exploit the vulnerabilities of the decentralized nature of EML, such as communication restrictions, data heterogeneity, and resource constraints, thus significantly affecting the security of EML systems. As a result, Ensuring security and privacy of EML systems is important for their reliability improvement. However, with the application and development of AI, attacks and defenses have entered the era of intelligent countermeasures. In order to cope with increasingly intelligent attacks, further research should consider the development of robust defense techniques, such as zero-trust architectures <cit.>. In addition, implementing security measures introduces additional energy, computation, and communication overheads. Thus, how to keep security without violating resource constraints is challenging. §.§ Solutions for Interpretability To ensure the trustworthiness of EML systems, it is essential for ML models to be interpretable, transparent, and understandable. In this subsection, we discuss solutions for achieving interpretability in EML from both post-hoc and ante-hoc perspectives, which are summarized in Tab. <ref>. §.§.§ Post-hoc Interpretability It generally interprets ML models after they have been trained. It can be roughly categorized into three types: 1) Visual-based explanations <cit.>; 2) Feature-based explanations <cit.>; and 3) Surrogate model-based explanations <cit.>. Visual-based explanations offer an intuitive and visual representation of decision-making processes and underlying patterns of models. For instance, authors in <cit.> refine the CAM technique to highlight relevant portions of EEG signals associated with mental states in the vehicle drowsiness monitoring system. Authors in <cit.> utilize the guided-Grad-CAM method to provide real-time explanations with high-resolution activation maps for multimodal DL models. Different from CAM, authors in <cit.> utilize attention modules for fine-grained spatial localization, to generate accurate heatmaps. Furthermore, research in <cit.> combines micro and macro interpretation modules to explain the failure cases of object detection models in autonomous driving systems. By extracting and visualizing features of CNNs and providing spatiotemporal information, this method aims to assist model developers in understanding, fine-tuning, and developing models.Feature-based explanations employ techniques like LIME and SHAP to analyze the importance of features in ML models. In <cit.>, DeepSHAP method is used to identify features that impact the likelihood of an attack. Authors in <cit.> combine LIME and SHAP to explain the importance of clinical features and genotype in warfarin daily dose prediction models. This combination performs global and local interpretations to help healthcare practitioners understand and trust model predictions. In the global interpretation, the ranked importance and SHAP interpreter produce a ranking of feature importance across the entire dataset. In the local interpretation, LIME and SHAP interpreters show the effect of features on the output of models run on specific samples.Surrogate model-based explanations approximate the behavior of black-box models and provide explanations. 
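A minimal local-surrogate sketch in the spirit of LIME is shown below; the black-box predictor, kernel width, and regularization strength are placeholders, and real toolkits add sampling and feature-selection details omitted here.

```python
import numpy as np

def local_linear_surrogate(predict_fn, x0, n_samples=500,
                           noise=0.3, kernel_width=1.0, ridge=1e-3):
    """Fit a weighted linear model around instance x0 to explain predict_fn.

    predict_fn: callable mapping an (n, d) array to (n,) scores or probabilities.
    Returns per-feature coefficients of the local surrogate.
    """
    rng = np.random.default_rng(0)
    Z = x0 + noise * rng.normal(size=(n_samples, x0.size))    # perturbed neighbors
    y = predict_fn(Z)                                          # black-box outputs
    d2 = np.sum((Z - x0) ** 2, axis=1)
    w = np.exp(-d2 / kernel_width ** 2)                        # proximity weights
    X = np.hstack([np.ones((n_samples, 1)), Z])                # intercept column
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X + ridge * np.eye(X.shape[1]),
                           X.T @ W @ y)
    return beta[1:]                                            # local feature weights
```

The coefficients only claim fidelity around x0; global surrogates, such as the regularized trees discussed next, trade this locality for a single, simpler approximation of the whole model.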
Authors in <cit.> propose a tree regularization technique to approximate the complex decision boundaries of deep models. By optimizing the surrogate models, domain experts can gain a good understanding of the black-box models' behavior. These approaches contribute to the interpretability of EML models, making them transparent and understandable, thereby enhancing system trustworthiness.§.§.§ Ante-hoc Interpretability Unlike post-hoc interpretability methods, ante-hoc approaches focus on incorporating interpretability directly into the model during its construction. This ensures that the model inherently possesses interpretability and can provide explanatory outputs during the prediction phase. One example of ante-hoc interpretability is rule-based models, where interpretable rules are explicitly incorporated into the model construction process <cit.>.Authors in <cit.> propose a rule mining strategy for interpretable abnormality detection of ECG signals. They utilize a tree-based search algorithm to generate interpretable rules, enabling real-time detection based on low-power edge sensors at the network edge. They also introduce a mechanism where wireless transmission is gated by a control unit, allowing transmission only when abnormal heartbeats are detected. This reduces energy consumption and enables further analysis in the cloud. In contrast, authors in <cit.> employ a linear model with nonlinear relationships encoded in learning weights to enhance interpretability and capture uncertainty using random gates and additional branches. Lesson 3: Interpretability plays a crucial role in the field of EML applications. Model developers benefit from interpretability by identifying and addressing issues in model predictions, leading to improved accuracy and reliability. Users, such as autonomous vehicle drivers, are likely to accept and adopt these technologies, when they can understand and trust model behaviors. Using various interpretability methods and combining approaches can generate comprehensive explanations. However, attackers can exploit interpretability techniques to detect vulnerabilities, especially in risk-sensitive scenarios <cit.>. In addition, there is a lack of a scientific assessment system for evaluating interpretation methods. In the case of post-hoc interpretability, it is crucial to consider the accuracy of interpretation results. For ante-hoc interpretability, quantifying the inherent explanatory capability of model becomes essential.§.§ Solutions for Fairness Fairness is essential in collaborative learning to ensure equal opportunities and contributions for edge devices, without discrimination on their attributes or locations. Fairness constraints are always considered in the algorithm design process to ensure fair and non-discriminatory decisions for different individuals and groups. In addition, some studies also consider fairness assurance schemes in the presence of malicious users. Tab. <ref> provides a brief description for existing solutions.Client selection is a common method to meet fairness constraints, with the purpose of ensuring that global models do not heavily rely on certain clients. For instance, authors in <cit.> introduce fairness constraints to address the potential deprivation of low-priority clients from training opportunities. These constraints reserve a certain probability of being selected for each client, but come at the expense of reduced training efficiency and model accuracy. 
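One simple way to implement such a probability floor is an epsilon-mixture between random and utility-driven selection; the utility scores and the value of epsilon below are placeholders rather than parameters from the cited scheme.

```python
import numpy as np

def select_clients(utilities, m, eps=0.2, rng=None):
    """Pick m clients: with probability eps sample uniformly, otherwise greedily.

    Every client is selected with probability at least eps * m / len(utilities)
    per round, regardless of its utility score.
    """
    rng = rng or np.random.default_rng()
    n = len(utilities)
    if rng.random() < eps:
        return rng.choice(n, size=m, replace=False)
    return np.argsort(utilities)[-m:]          # highest-utility clients
```

Heavier machinery (DRL schedulers, bandit formulations, Lyapunov virtual queues) mainly refines how the utility scores themselves are estimated under non-IID data, as the following works illustrate.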
Furthermore, EML data is typically non-IID, and participants may have varying numbers of local iterations. Using a simple, identical selection probability to choose participants may not ensure model fairness. Accordingly, authors in <cit.> propose a heuristic scheduling method to ensure fairness in the collaborative learning process. This method combines DRL, an estimation algorithm, and a breadth-first search method for local device selection with non-IID datasets, determining the number of local iterations and the radio resource allocation. Differently, authors in <cit.> propose an online client selection algorithm to minimize training latency and maintain long-term fairness. The algorithm formulates the client selection as an MAB problem, which is solved by the virtual queueing technique in Lyapunov optimization.

Moreover, some researchers adjust the aggregation weights of different representative clients to improve fairness. Authors in <cit.> propose the Agnostic FL (AFL) approach to prevent the model from overfitting to any particular client at the expense of others. However, assigning large weights only to clients with the highest training losses may degrade global model accuracy. Therefore, authors in <cit.> propose a multi-objective optimization framework, focusing on the outcome of the current model across all participants. A Pareto-stationary solution is developed to find a common direction of descent for all selected clients to avoid sacrificing the performance of any client.

Furthermore, to prevent dishonest participants from sending malicious updates and to resist free-rider attacks, approaches such as TrustFed <cit.> and FPPDL <cit.> focus on both robustness and fairness. Specifically, TrustFed <cit.> utilizes the blockchain to detect and remove anomalous devices, preventing malicious clients from contaminating the model. Authors in <cit.> address the issue of how to treat FL participants fairly based on their contributions. They try to obtain FL models with corresponding accuracy and collaborative fairness based on individual participants' contributions. In addition, blockchain technology is utilized to record all operations to ensure transaction security. Differently, authors in <cit.> propose a personalized multi-task FL algorithm, called Ditto, which aims to improve both fairness and robustness. After optimizing a global objective function, Ditto adds a regularization term to make the personalized model close to the optimal global model.

Lesson 4: In summary, achieving fairness in collaborative learning requires considering client honesty, balancing robustness and fairness, and implementing appropriate strategies. It is essential to be aware that increasing fairness may lead to a decrease in learning accuracy. Moreover, prioritizing fairness without considering data or model parameter sharing can raise privacy concerns. Therefore, adopting technical measures that strike a balance between fairness and privacy preservation is crucial.

§.§ Solutions for Incentivization

In practical edge application scenarios, nodes are typically autonomous and can freely join and leave the training process. They may be unwilling to contribute their resources for free. Therefore, it is important to design effective mechanisms that incentivize nodes to actively participate in training and maximize the benefits for both service requesters and training nodes.
In this subsection, we introduce incentive mechanisms based on node contributions and reputations, which are summarized in Tab. <ref>.§.§.§ Incentive Mechanism Based on Node Contributions The quality of model updates can vary greatly due to a variety of factors such as training data volume, data quality, and data distribution. As a result, some studies utilize them to measure customer contributions. For example, authors in <cit.> use the amount of data provided by edge nodes as a measure. They formulate the problem of minimizing the overall cost of the parameter server and maximizing the benefit of edge nodes as a Stackelberg game. In addition, they use DRL to address the challenges of non-shared decisions (e.g., the amount of data used for model training) and contribution evaluation, so that parameter servers and edge nodes can dynamically adjust their strategies. To cope with potential dropouts of high-quality training nodes, authors in <cit.> employ a reverse auction model to incentivize high-quality and low-cost computing nodes to participate in the training process.In contrast to approaches in <cit.>, which depend on data volume and quality as a reference for contributions, authors in <cit.> utilize the enhancement in global model accuracy achieved by local model updates as the evaluation criterion. Based on historical contributions, the probability of being selected for the next round of model training can be automatically determined. However, a single-layer incentive mechanism may be not efficient in motivating all parties. To address this challenge, researchers in <cit.> and <cit.> propose a two-layer incentive mechanism. In the lower layer, the size of reward pool and data allocation of training nodes are determined based on data volume, data quality, and privacy budget provided by each data owner. In the upper layer, profits are allocated to training nodes based on their marginal contributions to the model publisher, i.e., the impact on the global model performance. Instead of focusing solely on a single dimension, such as node data quality and quantity, which may not guarantee long-term system efficiency and stability, some studies address situations where nodes possess multi-dimensional attributes, such as heterogeneity communication resources and computational capabilities of nodes <cit.>. Authors in <cit.> consider the balance among rewards, costs, edge communication and computing capabilities, allowing nodes to determine their contributions of local data and computational resources in each training round.In the context of blockchain based decentralized training, authors in <cit.> implement incentives in smart contracts to create a trusted, fast, and transparent environment. They propose a game-based incentive mechanism that models the rationality of clients, edge servers, and clouds in the presence of multi-dimensional attributes. Authors in <cit.>, similarly, model the resource allocation problem in blockchain-based learning as a two-stage Stackelberg game. They assist the model owner in allocating rewards and help clients determine their computational resources for model training and task mining.§.§.§ Incentive Mechanism Based on Node Reputations Despite previous studies, such as <cit.> and <cit.>, consider data privacy in their incentive mechanism, there are still concerns about potentially malicious behaviors from local training nodes. These malicious nodes may deliberately provide incorrect inputs to undermine the integrity of the global model. 
To address this issue, reputation-based mechanisms are proposed as solutions to identify and exclude malicious participants. Authors in <cit.> combine reputation-based and payment-based incentives, using "credibility coins" as an encrypted cryptocurrency for data transactions. At the same time, they introduce a dynamic incentive model based on evolutionary game theory to analyze user interactions and the stability of strategies. Authors in <cit.> integrate reputation and contract theory to ensure fair rewards. They utilize blockchain for secure reputation management of training nodes, providing non-repudiation and anti-tampering properties in a decentralized manner. Apart from integrating with blockchain, authors in <cit.> apply the hidden social effects between edge devices and their users to establish a social graph model based on a Stackelberg game for identifying trustworthy co-learners who share mutual trust and learning interests. These reputation-based approaches assess the trustworthiness and contributions of participating nodes, either based on their impacts on model performance or through social interactions, to ensure the integrity and reliability of learning.

Lesson 5: In conclusion, contribution-based incentives reward active and resource-contributing nodes, promoting system collaboration and optimization. Reputation-based incentives rely on node credibility, encouraging honest behavior for enhanced stability and security. These mechanisms collectively encourage active participation, trust-building, and reasonable resource management. However, incentive mechanisms should be tailored and optimized for specific application scenarios, considering their unique characteristics and requirements.

§ RESEARCH CHALLENGES AND OPEN ISSUES

Research on trustworthy EML is in its infancy. In order to achieve the vision of secure, reliable, interpretable, fair, accurate, and low-cost trustworthy EML, there are many issues and research directions that need to be further explored. Inspired by existing solutions, in this section, we discuss various research challenges and open issues in the field of trustworthy EML.

§.§ The Impact of Communication Costs on Secure EML

Currently, most studies focus on data privacy and algorithm security, while communication costs are often overlooked. With the expansion and proliferation of computing devices, edge nodes are expected to have higher storage and computational capabilities in the future. As a result, EML is trending towards fully decentralized learning, relying solely on end devices to execute learning tasks, thereby overcoming the dependency on central nodes. However, in this scenario, there are several security and privacy risks, such as: 1) Decentralized entity control: Different nodes in a decentralized system may be controlled by different administrative entities, leading to potential inconsistencies in the security levels of nodes. A vulnerable node may become a target of attackers, posing a threat to the overall system security; 2) Widespread semi-honest nodes: They can introduce instability into the system, and attempt to steal data or send malicious data during collaborative learning to disrupt system availability; and 3) Topology diversity: The topology of a distributed system can be highly complex, with diverse relationships among nodes, which complicates security analysis and management.

To ensure communication security and privacy, additional defense costs can be incurred, such as communication costs for ubiquitous smart devices.
This may result in increased training and inference latency, limiting the real-time task processing capability. For certain applications like intelligent transportation systems and industrial automation, this may be unacceptable. Extensive data transmission can consume a lot of bandwidth resources, potentially resulting in additional expenses. Additionally, high communication costs are often accompanied by high energy consumption, especially for mobile devices and sensor nodes. This can reduce battery life, and decrease device availability. Hence,the impact of communication costs is crucial for security and privacy issues in EML.§.§ Autonomous Collaboration for Edge Co-inference Autonomous collaboration among edge nodes is essential for dynamic edge co-inference, but its implementation poses challenges in distributed training and execution. In distributed control scenarios, edge nodes operate autonomously and perceive a shared environment, allowing for learning and deployment based on feedback. They can make independent decisions and execute tasks in a dynamic environment while collaborating to achieve common goals. To enable such autonomous collaboration, distributed RL methods are employed.Distributed RL encompasses two main aspects: distributed training and distributed execution. Full distributed RL is particularly challenging, because it needs to consider not only the interaction between individual agents and the environment, but also the interplay among multiple agents. Correspondingly, various challenges are introduced, including: 1) Changes in agent strategies: It can lead to environmental instability, since the behavior of one agent affects that of others; 2) Distributed training and reward feedback: Distributed training requires individual agents to receive separate reward feedback. Decomposing feedback from the environment into rewards for each agent and quantifying contributions of each agent to teamwork can be complex; and 3) Curse of dimensionality: When the number of agents increases, the learning process faces challenges such as the curse of dimensionality, resulting in a significant increase in computational complexity. §.§ Blockchain for Trustworthy EML Blockchain technology has been widely applied to enhance data privacy and security in EML and establish trust among devices <cit.>. It has unique advantages in trustworthy EML, such as: 1) Providing a decentralized security mechanism to avoid the risks of data theft or tampering; 2) Ensuring data integrity and immutability to guarantee data credibility and reliability; and 3) Providing a distributed consensus mechanism that enables edge devices to reach consensus and collaborate to improve data collaboration efficiency and accuracy.However, blockchain technology faces some challenges: 1) The limited computing resources and storage space of edge devices may hinder the widespread application of blockchain technology, since it requires substantial data transmission and processing capabilities; 2) Unstable network connections of edge devices may lead to low data transmission and processing efficiency; 3) Ensuring security of blockchain technology demands significant computing resources and time, potentially impacting the performance and energy consumption of edge devices. 
Additionally, connecting a large number of edge devices to the blockchain network can cause network congestion and performance degradation, highlighting the need to address scalability issues in blockchain systems; and 4) The resource-intensive nature of blockchain technology, including computing and storage requirements, may lead to high costs, particularly for devices with limited resources. Therefore, applying blockchain technology into trustworthy EML requires comprehensive consideration of the actual situation and specific needs, as well as striking a balance among cost, efficiency, and other factors. §.§ Trade-off between Interpretability and Accuracy at the Network Edge In EML, balancing accuracy and interpretability is an important issue <cit.>. Some models can achieve high accuracy, but are difficult to explain their internal mechanisms, which limits their applications in certain fields. Other models with high interpretability may sacrifice accuracy, which also affects their effectiveness in practical applications. Fig. <ref> shows the relationship between accuracy and interpretability of ML models.This trade-off issue involves several challenging factors: 1) "Interpretability" is difficult to define and measure, because different application fields and scenarios have different requirements; 2) In EML application fields like healthcare, protecting data privacy is crucial, which may limit the use of models with strong interpretability; 3) Some models with strong interpretability may be too simplistic to handle large-scale, high-dimensional, and complex data, potentially sacrificing accuracy; 4) Some models may require many parameters or complex structures to achieve high accuracy, which are hard to interpret; and 5) In scenarios involving high-dimensional and complex data, even models with strong interpretability may struggle to explain their internal mechanisms and decision-making processes, necessitating the development of novel interpretativemethods. In conclusion, achieving a balance between accuracy and interpretability in trustworthy EML is a complex task, requiring consideration of multiple factors and practical application requirements. Additionally, continuous development of novelty techniques is essential to address these challenges. §.§ Highly Integrated Edge Computing Chips The research and development of highly integrated edge computing chips is a very promising research direction. Recently, a research team from MIT <cit.> has utilized integrated silicon photonic chips to encode components of learning models on central servers into optical waves. These optical waves are then transmitted via optical fibers to connected devices, enabling the transfer of substantial data at optical bandwidths of 2.4 TB/s. Subsequently, a straightforward optical device is employed as a receiver to rapidly perform computations on the model components carried by these optical waves.However, it is still challenging to develop such highly integrated edge computing chips, because: 1) Edge devices typically need to operate for extended periods, necessitating low energy consumption to extend device lifespan; 2) Since edge devices often operate in energy-constrained environments such as sensor networks and IoT systems, leveraging environmental energy and employing advanced energy-saving batteries become necessary; 3) Edge devices frequently work in harsh environments with factors like extreme temperatures, humidity, and vibration. 
Consequently, the chip must exhibit stability and strong resistance to interference; 4) High integration requires small-sized chips that can incorporate multiple hardware modules such as processors, memory, and sensors. Ensuring effective collaboration among these modules without conflicts poses significant challenges for the design flow, technical complexity, and manufacturing costs; and 5) Security issues arising from sensitive data processing cannot be ignored.

In summary, addressing these challenges is crucial for the successful development of highly integrated edge computing chips, since they play a vital role in enabling efficient and reliable edge computing systems.

§ CONCLUSION

We summarize and discuss the developments, solutions, and challenges presented in a large body of related literature on trustworthy EML. First, we introduce the important attributes of trustworthy EML. Then, we discuss and summarize the basic learning architectures and the technical support needed to achieve trustworthiness. Subsequently, we provide a comprehensive and in-depth review of recent studies from different aspects: optimality, reliability, interpretability, fairness, and incentive mechanisms for trustworthy EML. Finally, we present the relevant research challenges and open issues for achieving trustworthy EML. We hope this survey provides an effective guideline that can inspire readers to advance trustworthy EML.
"authors": [
"Xiaojie Wang",
"Beibei Wang",
"Yu Wu",
"Zhaolong Ning",
"Song Guo",
"Fei Richard Yu"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20231027073954",
"title": "Trustworthy Edge Machine Learning: A Survey"
} |
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Totimorphic structures for space application
Amy Thomas, Jai Grover, Dario Izzo, Dominik Dold
Advanced Concepts Team, European Space Agency, European Space Research and Technology Centre, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands
January 14, 2024
==================================================================================================================

In this paper, techniques for improving multichannel lossless coding are examined. A method is proposed for the simultaneous coding of two or more different renderings (mixes) of the same content. The signal model uses both past samples of the upmix and the current time samples of the downmix to predict the upmix. Model parameters are optimized via a general linear solver, and the prediction residual is Rice coded. Additionally, the use of an SVD projection prior to residual coding is proposed. A comparison is made against various baselines, including FLAC. The proposed methods show improved compression ratios for the storage and transmission of immersive audio.

Immersive Audio, Multichannel Audio, Lossless Audio Coding

§ INTRODUCTION

As immersive audio gains popularity and content creators utilize the capabilities of these modern formats, the same content is increasingly available for different multichannel listening setups, such as 5.1 and 7.1.4<cit.>. Legacy content is also being transferred to these new formats. To facilitate this, the different versions have to be created manually, or via an automatic process of re-mixing or rendering. While downmixing and rendering are mainly simple linear processes, upmixers have to reconstruct unknown signal components with more complicated algorithms. In the case of blind upmixing especially, the original artistic intent is not guaranteed to be preserved despite the sophistication of such systems, making them less than ideal. Non-blind upmixing, on the other hand, can be viewed as synonymous with audio coding.

Multichannel audio transmission and storage typically utilize parametric coding, e.g. preserving the channel covariance structure is effective <cit.>. Unfortunately, lossy multichannel coding is difficult to optimize perceptually. Perceptual differences are difficult to judge due to their multidimensional nature <cit.>. While many codecs make good arguments that they achieve transparency above some bitrate, this cannot be fully guaranteed for all possible content due to the limitations of subjective testing.

Lossless coding is a viable option to address concerns related to both blind upmixing and parametric coding. As transmission capacities have constantly improved, the need for extremely low bitrates is no longer as major a concern as before. Furthermore, prejudices against lossy coding and transmission have increased as both consumers and content creators become more educated. There is a need for exact control of the immersive audio reproduction process in all situations.

Compared to traditional lossless coding, it is not as clear how to most effectively deal with immersive audio.
More sophisticated prediction models have been proposed in <cit.>. However, in the case of multichannel audio, these models have not resulted in major benefits, but rather in small improvements. As we see in Sec. <ref>, a system using a very simple baseline model of coding channels separately is able to get very close to real codec performance.

This paper proposes methods to move toward more comprehensive handling of immersive lossless audio. Our main contribution is to propose a hypothetical audio system, where several (two or more) different mixes are stored simultaneously for the same content. Such a system would be possible to implement, e.g., by packing the differently coded bitstreams in the same file container. Alternatively, the downmix(es) can be assumed to be available a priori at the decoder. However, having mixes in the same container can very effectively control the artistic intent for the content as well, regardless of compression. We construct a controlled experiment showing the attainable benefits of using hierarchical reconstruction of the different formats. The method exploits correlations between the different content versions and reconstructs more elaborate presentations based on the lower-level multichannel formats and a non-trivial signal model. This would then result in decreased storage requirements for the audio format described above. In the use case of streaming a single format at a time, we additionally propose a method combining short-term prediction, SVD, and Rice coding, which performs considerably better than the realistic baselines for 5.0 audio, at a computational encoding cost. Details of the methods are presented in Sec. <ref>. The experimental results and a discussion of their implications follow in Sec. <ref> and Sec. <ref>.

§ METHODS

§.§ Core lossless coding engine

Current real-life lossless codecs share many compression principles and techniques, as well as overall performance. FLAC (Free Lossless Audio Codec) <cit.> is an open-source, widely used codec implementation whose fundamental algorithms are based on the earlier Shorten <cit.>. In this paper, we apply the official FLAC implementation as a reference, and replicate its performance with a simple baseline implementation.

The general principle applied in FLAC and our method can be described with the following simplified signal model (time and channel indices omitted): s = f(s') + e, where the original signal s is represented by a predictor function f() operating on a predictor source signal s', which is often related to s. The prediction residual is denoted by e.

In standard lossless coding, f() is often a linear predictor (LPC) operated on short frames, giving the signal model: s(t) = ∑_k=1^p β_k s(t-k) + e(t). For each time sample of the frame, p past samples (p being the prediction order) are combined linearly to model it. The coefficients β = [β_1 ... β_p] are typically solved for minimizing the frame residual MSE ||e||^2_2. In real codecs, search procedures and frame signaling are often used to find the best p, as well as sometimes the type of predictor solution (e.g. LPC or a standard template <cit.>). In this paper, we omit this optimization step, and rather aim to isolate the effect of the predictor source s'.
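A minimal sketch of this frame-based predictor and its residual computation is given below; the frame length, prediction order, and the use of NumPy's least-squares routine are illustrative choices, and the coefficient quantization discussed later is reduced to a single cast.

```python
import numpy as np

def lpc_residual(frame, p=8):
    """Fit the order-p short-term predictor on one integer frame and
    return the quantized coefficients and the integer prediction residual."""
    s = frame.astype(np.float64)
    # Design matrix: row t holds the p past samples [s(t-1), ..., s(t-p)].
    X = np.column_stack([s[p - k:len(s) - k] for k in range(1, p + 1)])
    y = s[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)      # minimize the frame MSE
    beta = beta.astype(np.float16)                     # coarse coefficient quantization
    pred = np.rint(X @ beta.astype(np.float64)).astype(np.int64)
    e = frame[p:].astype(np.int64) - pred              # integer residual to entropy-code
    return beta, e
```

In a sketch like this, the first p samples of each frame would have to be handled separately; the integer residual e is what the Golomb/Rice entropy coder described next consumes.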
Rice coding is a subset of such codes where the Golomb parameter is a power of 2 for computational efficiency. Without loss of data type generality, the length of the Rice code for an integer number n can be obtained by Algorithm <ref>. It calculates the codeword length in bits as a function of the Rice parameter r. Bitwise operations are utilized to perform sign-folding to non-negative integers, so that larger absolute values get longer codewords. Overflow checks are omitted here but may be implemented with min operations. The optimal Rice parameter r can be estimated from the signal <cit.>, but we used a brute-force search (e.g. r ∈ [0...20]) and selected the code that minimizes the total number of bits in the analysis frame of each channel.

Despite the simplicity of this baseline signal model, it already accounts for much of the performance of current real-life lossless codecs (see <ref>). Some further tools, such as efficient handling of silent frames and signal runs, are not considered here, but can certainly improve results for sparse material. Replacing Rice coding with arithmetic coding <cit.> or hybrid entropy coding <cit.> typically gains a few percent in compression efficiency. We also experimented with an additional long-term predictor that tries to find the best matching segment to the current frame from the full history of the signal <cit.>, but did not include it in our models.

§.§ Multichannel modeling

With multichannel, or immersive, audio, the question becomes whether correlations between channels can be exploited. In the case that s has more than one channel, one can use model <ref> for each channel c separately; each sample s_c(t) is then predicted only from the past samples of that same channel. The number of model parameters is then pC, where C is the number of channels. In contrast, we also construct a vanilla baseline for a multichannel predictor in order to test the hypothesis that there are easily exploitable correlations between the channels:

s_c(t) = ∑_{c'=1}^{C}∑_{k=1}^{p} β_{c',k} s_{c'}(t-k) + e_c(t).

In effect, the current sample of each channel is predicted using all channels' samples looking p timesteps into the past. This increases the number of model parameters to pC^2.

Another possibility for exploiting the correlations between the channels is to utilize a transform with desirable properties. For example, a common technique is to use PCA or SVD to find a linear projection with maximal energy compaction and orthogonality of the transformed components <cit.>. To our knowledge, this technique has not been well investigated for lossless multichannel audio coding, and it is only approximated with heuristic mid-side channel pairs etc. The SVD projection method of <cit.> was utilized here. To avoid large values, we found that applying the projection to the prediction residual e, and not to the original signal, is preferable. It should be noted that, while computationally complex at the encoder, the more crucial decoding cost of such a transform is only increased by a single matrix multiplication.
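Before turning to the hierarchical setting, the Rice codeword-length computation described earlier in this section is short enough to be written out. The sketch below is consistent with that description, but the precise bit-level sign-folding and overflow conventions of the actual implementation are assumptions here.

def rice_code_length(n, r):
    # Zig-zag sign folding: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    # so that larger magnitudes receive longer codewords.
    u = (n << 1) if n >= 0 else ((-n << 1) - 1)
    q = u >> r                       # unary (quotient) part
    return q + 1 + r                 # q zeros, one stop bit, r remainder bits

def best_rice_parameter(residual, r_max=20):
    # Brute-force search over r, as done per analysis frame and channel.
    costs = {r: sum(rice_code_length(int(e), r) for e in residual)
             for r in range(r_max + 1)}
    r_opt = min(costs, key=costs.get)
    return r_opt, costs[r_opt]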
§.§ Hierarchical reconstruction from downmix

The main contribution of this paper is to suggest a multichannel audio signal model which, when optimized, can be used for efficient prediction in the context of hierarchical reconstruction. Assume that the decoder has available many mixes of the same content, either from the same container or otherwise. Decoding is traditionally done first on the lowest mix in the hierarchy (aka "downmix", typically the mix with the fewest channels). It is then used to predict the next mix (aka "upmix") with the signal model. The whole process can be repeated by using the decoded upmix as the new downmix for the next iteration. We test adding simple additional predictors that utilize the downmix to the previous single- and multichannel models of (<ref>) and (<ref>):

s(t) = ∑_{k=1}^{p} β_k s(t-k) + ∑_{d=1}^{D} γ_d s_d(t) + e(t),

s_c(t) = ∑_{c'=1}^{C}∑_{k=1}^{p} β_{c',k} s_{c'}(t-k) + ∑_{d=1}^{D} γ_d s_d(t) + e_c(t),

where s_d indicates channel d of the downmix and γ_d the corresponding prediction parameter. It can be seen that these models only utilize the most current sample of the downmix, in addition to predicting from the past of the upmix as in traditional models. We found this worked best in our tests, compared to more elaborate utilization of the downmix. Also, the addition of such downmix prediction only introduces D·C more model parameters. Also important for the hierarchical models is the optimizer selection, as discussed in Sec. <ref>.

§.§ Model optimization

Traditionally, lossless coding predictors have been optimized with Levinson recursion <cit.>. These methods achieve computational efficiency by assuming Toeplitz systems. The Toeplitz assumption, however, limits the type of predictors that are possible: all solved predictor parameters must originate from a time series of consecutive samples. Another alternative, used by e.g. <cit.>, is to use several cascaded Toeplitz models whose parameters are not optimized globally. In matrix form, the minimization becomes:

α = argmin_α ||s - S'α||_2^2,

where the prediction source S' is a Toeplitz matrix with different lags of the source signal s' as columns. In the case of the standard single-channel model <ref> of order p: α = [β_1, …, β_p]. In contrast, we use well-established solvers for linear systems that are not limited to be Toeplitz, namely the GELSD algorithm available in LAPACK <cit.>. Despite being computationally slower, it allows optimizing all model parameters globally when using the more complicated models of <ref> and <ref>, and allows including arbitrary columns in the source signal matrix S'. When the predictor is based on the model of <ref>, we have:

α = [β_{1,1}, …, β_{C,p}, γ_1, …, γ_D],

for prediction order p, C upmix channels, and D downmix channels, respectively. To enable comparison, GELSD is used for all prediction models. Computational efficiency refinements are largely left for future work. We do, however, utilize Tikhonov regularization <cit.> in all solvers except the single-channel baseline <ref>, by solving for smaller (covariance) matrices and adding a diagonal component δI:

α = argmin_α ||S'^Ts - (S'^TS' + δI)α||_2^2.

As is typical, the 16-bit integer input signals are transformed into double-precision floats in the range [-1, 1] for the optimization computations. For simplicity, all solved model parameters are quantized as 16-bit floats prior to the residual calculation and Rice coding. Mirroring the datatype changes and rounding operations in the decoder ensures lossless reconstruction.
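The optimization just described can be emulated with standard numpy routines. The sketch below builds the per-channel design matrix for the downmix-augmented multichannel model and solves the regularized system; the function names and defaults are illustrative choices, and the datatype conversion and 16-bit parameter quantization are omitted.

import numpy as np

def design_matrix(upmix, downmix, p=8):
    # One row per predictable sample: the p past samples of every upmix
    # channel, followed by the current downmix samples.
    C, T = upmix.shape
    rows = []
    for t in range(p, T):
        past = np.concatenate([upmix[c, t - p:t][::-1] for c in range(C)])
        rows.append(np.concatenate([past, downmix[:, t]]))
    return np.asarray(rows)                    # shape (T - p, C*p + D)

def solve_regularized(S, target, delta=1e-4):
    # Tikhonov-regularized normal equations: (S^T S + delta*I) alpha = S^T target.
    G = S.T @ S + delta * np.eye(S.shape[1])
    return np.linalg.solve(G, S.T @ target)

def predict_channel(upmix, downmix, c, p=8, delta=1e-4):
    S = design_matrix(upmix, downmix, p)
    target = upmix[c, p:]
    alpha = solve_regularized(S, target, delta)
    return alpha, target - S @ alpha           # parameters and residual

When no regularization is desired, np.linalg.lstsq (typically backed by a GELSD-type LAPACK driver) can be used directly in place of the explicit normal equations.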
§ EXPERIMENTS

§.§ Dataset

We tested the methods on 100 songs that had been mixed and mastered specifically for the 5.1 format. The content included mainly pop/rock and classical genres from various performers. In our view, such an ad-hoc dataset represents a general, realistic situation, and the exact content or music style is not a determining factor for the overall performance. All material was utilized with 16-bit depth and 44.1 kHz sample rate in the tests. The LFE channel is omitted in the dataset; we only use the 5.0 channels (L, R, C, Ls, Rs) of the mixes. Preliminary experiments indicated that including the LFE in the prediction would not help, and thus sending it with a single-channel predictor like (<ref>) would just add the same amount of bitrate for all the methods in the comparisons. Furthermore, LFE channel coding may benefit from advanced silence handling, which was not the focus of this paper.

We utilize the ITU standard downmix <cit.> from 5.0 to 2.0 stereo in order to show the benefit of hierarchical reconstruction in the typical situation where the downmix is correlated with the upmix. Of course, this is an artificial situation; in real life this downmix could be obtained at the decoder without sending it, by applying the known linear operation of <cit.>. However, we believe the results also indicate that there is a benefit when using an artistic downmix, especially if the processing in mixing consists of linear operations such as panning. This assumption may break down in rarer cases of strong nonlinearities or uncorrelated mixes. It should also be emphasized that we are not addressing object audio in this paper, but channel-based material. The former aims to be agnostic to the rendering setup by sending panning information per object, and thus can in principle account for the upmixing blindly.

§.§ Systems tested

The tested methods are listed in Table <ref>. The models used for prediction discussed in Sec. <ref> had their parameters optimized with the Tikhonov-regularized (δ = 1e-4) GELSD solver for (<ref>). For the basic single-channel model of (<ref>), GELSD was used to optimize (<ref>) in order to compare this baseline against FLAC with a similar solver criterion. We used FLAC with the default parameters, as experimenting with other options resulted in little difference. The prediction order for all implemented models (as well as the default FLAC LPC max order) was p=8. Unlike in real-life coders, we used a constant p for each frame of 4096 samples. The only hyperparameter signaled per frame was the Rice code parameter per channel. The SVD projection of <cit.> prior to residual coding was also applied selectively. In addition to quantizing the residual, the prediction and transform parameters were counted towards the bitrate of each method, as discussed in Sec. <ref>. As mentioned in Sec. <ref>, MPEG-ALS includes more involved multichannel prediction models. The reasons for not comparing against it here are the lack of availability of MPEG-ALS software and the related fact that FLAC is more widely adopted. See Sec. <ref> for further discussion.
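For reference, the 5.0-to-2.0 operation used above and the compression ratio reported below can be written compactly. The −3 dB (1/√2) centre and surround gains in this sketch are the commonly quoted ITU-style coefficients and are an assumption here, not a statement about the exact values used in the experiment.

import numpy as np

G = 1.0 / np.sqrt(2.0)      # assumed -3 dB gain on centre and surrounds
DOWNMIX = np.array([
    # L    R    C    Ls   Rs
    [1.0, 0.0,  G,   G,  0.0],   # Lo
    [0.0, 1.0,  G,  0.0,  G ],   # Ro
])

def downmix_5_to_2(upmix):
    # upmix: array of shape (5, T) with channel order (L, R, C, Ls, Rs).
    return DOWNMIX @ upmix

def compression_ratio(total_coded_bits, num_samples, channels, bit_depth=16):
    # Coded size divided by the size of the original fixed-point representation.
    return total_coded_bits / (num_samples * channels * bit_depth)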
§.§ Results

Table <ref> shows the average compression ratios for the 100 songs tested. The ratio per file was calculated as the total number of bits divided by the original number of bits of the 16-bit representation. Even though the hierarchical systems rely on sending both the 2.0 and 5.0 mixes simultaneously, the 5.0 upmix compression ratio is more interesting for evaluating the model effect. Since the 2.0 mix does not use hierarchical prediction, nor was the use of complex non-hierarchical models found to be beneficial for it, it can be sent with established stereo coding, such as FLAC in this experiment. This merely adds the same constant rate for all tested methods when sending both 2.0 and 5.0.

It can be seen that, in comparison to FLAC on the same database, the use of the baseline single-channel prediction model of (<ref>) gives close to identical performance. The vanilla multichannel model (<ref>) does not give notable gains. However, when combined with the subsequent residual SVD projection, the compression is improved. For the hierarchical methods requiring the presence of the downmix and using it in prediction, the single-channel method of (<ref>) does not seem to work well. The real benefit of using the downmix emerges when using the multichannel model of (<ref>), especially when combined with the SVD projection. This implies that global parameter optimization can be an important factor for the success of complex signal-model predictors.

Although a direct comparison against real codecs is not the priority, it should be noted that compression ratios better than ours (or the present FLAC result) were reported for MPEG-ALS with a 5.1 test set <cit.>. However, the content in <cit.> may have been sparser and more dynamic, with less surround- and center-channel utilization (the material being older), and with the LFE channel included. Most importantly, the MPEG-ALS use of cascaded multichannel prediction did not have as significant a relative benefit over the non-multichannel baseline as the utilization of the downmix prediction or the SVD projection in this paper. Rather, it was comparable to the difference between our single- and multichannel baseline models, (<ref>) and (<ref>).

§ CONCLUSION

This work presents improved methods for the lossless compression of multichannel audio, both with an upmix alone and when the upmix is packed with a downmix, at the cost of computational complexity. Results show approximately a 30% improvement in the compression ratio over FLAC when both the downmix and upmix are to be jointly encoded. A 10% gain in compression ratio is achieved over FLAC by utilizing a combination of multichannel prediction, SVD, and Rice coding when sending 5.0 content alone. The proposed approaches could yield significant gains for data server storage and transmission of multichannel audio data. Further implementing frame-based method switching, silence handling, and other typical codec features may improve results for specific content. | http://arxiv.org/abs/2310.18461v1 | {
"authors": [
"Toni Hirvonen",
"Mahmoud Namazi"
],
"categories": [
"eess.AS",
"cs.MM"
],
"primary_category": "eess.AS",
"published": "20231027201400",
"title": "Improved Lossless Coding for Storage and Transmission of Multichannel Immersive Audio"
} |
[email protected] QOLS, Blackett Laboratory, Imperial College London, London SW7 2AZ, United [email protected] Quantum Information and Communication, Ecole polytechnique de Bruxelles, CP 165/59, Université libre de Bruxelles (ULB), 1050 Brussels, Belgium Physikalisches Institut, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Straße 3, D-79104 Freiburg, Germany EUCOR Centre for Quantum Science and Quantum Computing, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Straße 3, D-79104 Freiburg, [email protected] International Iberian Nanotechnology Laboratory (INL), Av. Mestre José Veiga, 4715-330 Braga, Portugal Quantum Information and Communication, Ecole polytechnique de Bruxelles, CP 165/59, Université libre de Bruxelles (ULB), 1050 Brussels, BelgiumQOLS, Blackett Laboratory, Imperial College London, London SW7 2AZ, United Kingdom Gaussian boson sampling (GBS), a computational problem conjectured to be hard to simulate on a classical machine, has been at the forefront of recent years' experimental and theoretical efforts to demonstrate quantum advantage. The classical intractability of the sampling task makes validating these experiments a challenging and essential undertaking. In this paper, we propose binned-detector probability distributions as a suitable quantity to statistically validate GBS experiments employing photon-number-resolving detectors. We show how to compute such distributions by leveraging their connection with their respective characteristic function. The latter may be efficiently and analytically computed for squeezed input states as well as for relevant classical hypothesis like squashed states. Our scheme encompasses other validation methods based on marginal distributions and correlation functions. Additionally, it can accommodate various sources of noise, such as losses and partial distinguishability, a feature that have received limited attention within the GBS framework so far. We also illustrate how binned-detector probability distributions behave when Haar-averaged over all possible interferometric networks, extending known results for Fock boson sampling.Gaussian boson sampling validation via detector binning M.S. Kim=======================================================§ INTRODUCTIONGaussian boson sampling (GBS) <cit.> is a well defined computational problemthat, under plausible complexity-theoreticassumptions, is conjectured to be hard to simulate (even approximately) by classical means <cit.>. The task consists of sampling from the output state of a passive linear optical network (LON) fed with squeezed light, using photon-number-resolving (PNR) detectors. Recent progress in the field of photonic quantum technology led to multiple independent claims of quantum advantage<cit.>.In addition to constituting a prime candidate for an experimental demonstration of quantum advantage using present day technological capabilities, GBS also finds application in solving problems of practical interest such as simulating molecular vibronic spectra <cit.>, predicting stable molecular docking configurations for drug development <cit.>, perfect matchings counting <cit.> and finding dense subgraphs <cit.>.A fundamental problem of GBS is that of verifying the correct functioning of the device. That means, we want to certify that data samples are drawn from the ideal theoretical distribution (also known as ground truth), and not from an efficiently computable distribution that only resembles it. 
For small systems, the ground truth can be analytically computed for arbitrary input Gaussian states and may therefore be directly compared with the experimental observations <cit.>. However, despite the remarkable progress of classical algorithms for the simulation of a boson sampler <cit.>, in the quantum advantage regime direct comparison with the ground truth is hindered by the very nature of the problem. In fact, computing the theoretical outcome probabilities involves the evaluation of the Hafnian of complex matrices, a problem known to be #P-hard. Additionally, even if we had access to the ground truth, the problem would persist, as an exponential number of samples would be needed to experimentally estimate the output probabilities. For these reasons, full certification is believed to be out of reach <cit.>, and one has to rely on indirect methods to probe the correct functioning of a Gaussian boson sampler.

In particular, validation protocols based on the evaluation of an efficiently computable quantity aim at identifying scalable statistical tests that any GBS experiment operating in the quantum advantage regime is expected to pass. Useful validation methods should have the following desirable properties <cit.>. First, they should be universal - i.e. applicable to any interferometric setup - an especially important requirement for GBS applications that need configurability of the LON. They should also be sensitive to high-order multi-photon interference <cit.>, an effect that can be hampered by partial distinguishability of the input states <cit.>. Finally, any practical validation protocol must make limited use of resources. In particular, the protocol must be computationally efficient, meaning that the quantity at its core requires only polynomially many calculations for its evaluation on a classical computer, and must be sample efficient, i.e. it requires only polynomially many experimental data samples to estimate the same quantity to meaningful relative precision.

Several validation methods for GBS are found in the literature. As already mentioned, for systems of modest size one is able to compute the full theoretical distribution and directly compare it statistically with data coming from the experiment. For intermediate-size GBS setups, Bayesian techniques <cit.> may be employed. These methods involve computing the probability of obtaining a specific output detection pattern under different hypotheses (i.e. different initial states and/or noise models), and with just a few dozen samples it becomes feasible to select the most likely hypothesis with a high degree of confidence. While these methods provide strengthening evidence for reduced versions of a larger experiment, they become highly unscalable as we approach the quantum advantage regime. For systems operating in the quantum advantage regime, a popular choice is to consider validation methods based on the computation of low-order correlation functions between the photon counts in each output mode <cit.>. These techniques, while practical and efficient, were shown to be insufficiently sensitive to high-order multi-photon interference, as these correlators may easily be reproduced by classical models <cit.>. Additional noteworthy validation methods include heavy outcome generation tests and other cross entropy benchmarks <cit.>, as well as that presented in Ref.
<cit.>, where the connection between graph theory and GBS is exploited to verify the correct functioning of the device.Recently, validation methods based on detector binning have been proposed and successfully applied to Fock state boson sampling (AABS) <cit.> and to GBS experiments utilizing threshold detectors <cit.>. These validation protocols consist of grouping the detectors at the LON's output into a few bins (whose number must not scale with the size of the system), their measurement readings summed into a single count for each bin. One is then interested in the probability distribution of such binned counts, a quantity which is sensitive to high-order interference. This coarse grain operation has the effect of exponentially reducing the sample space size, meaning that the binned-mode photon number probability distribution can be estimated experimentally (up to relative error) using a number of samples which scales only polynomially with the system’s size, thus ensuring sample efficiency.Then, crucially, for each validation method based on this paradigm to be practical, onemust show that the theoretical binned probability distribution can be computed efficiently, i.e. in polynomial time. This framework encompasses validation methods based on correlation functions as well as marginal distributions.Previous GBS validation protocols based on binned-count probability distributions focused on implementations of the task that employ on/off detectors and made use of numerical Monte Carlo sampling techniques <cit.>. GBS experiments that employ PNR detectors have become increasingly popular in recent years owing to the necessity of entering higher energy regimes (with detection events with large total photon number) to achieve quantum advantage. Additionally, PNR-capabilities are required for most of the real-world applications of GBS, such as the simulation of molecular vibronic spectra. In this paper, we address the problem of validating a GBS experiment employing PNR detectors by developing a framework that enables the computation of binned count probability distributions for various instances of such tasks.In particular we show that the so called characteristic function, i.e. the quantity at the core of this work, may be efficiently and analytically evaluated, making our approach both precise and easily implementable. Within our formalism, the only free parameter and sole source of error is the cutoff one has to introduce due to the Gaussian nature of the initial state.Our method can easily accommodate for losses in the LON, detection imperfections andpartial distinguishability, a source of noise well studied in AABS, but whose role has so far been given little attention in the GBS framework. This is crucial, as experimental implementations of GBS are unavoidably affected by different sources of noise that may challenge the sampling task from entering the regime where quantum advantage is achievable. Indeed, if enough noise is present then the sampling task becomes efficiently simulable using classical algorithms <cit.>.This paper is structured as follows. In Section <ref>, we introduce notation and define the binned-count probability distributions and its connection to the characteristic function, the quantity at the core of this paper.In Section <ref>, we explicitly compute the binned-count probability distribution of a GBS instance employing PNR detectors, and provide evidence that it can be done efficiently on a classical computer. 
In Section <ref> we derive the Haar-averaged asymptotic behaviour of these distributions, while in Section <ref> we show how our formalism can readily be adapted to compute the binned probability distributions for classical input states such as thermal states and squashed states. In Section <ref> we show how partial distinguishability may be introduced into the binned GBS framework. Lastly, in Sec. <ref> we draw conclusions and give some final remarks.

§ BINNED PROBABILITY AND CHARACTERISTIC FUNCTION

Let us consider a generic sampling experiment where an m-mode quantum state ρ̂ is measured via PNR detection. As previously mentioned, the validation scheme we propose relies on grouping the detectors into bins. More formally, we consider a partition of the m output modes into B bins, i.e. non-empty, mutually disjoint subsets {𝒦_j}_{j=1}^B such that 𝒦_j ⊂ {1,…,m}. We limit the number of bins to be independent of the size of the experiment, i.e. B = O(1). We point out that this requirement is necessary to ensure computational and sample efficiency, as will soon become clear; however, the formalism and the equations presented in this paper remain valid for any partition and number of bins. We are interested in computing the probability P(k) of observing a given detection pattern k = (k_1,…,k_B) of the binned detectors, where k_j denotes the number of photons measured in the j-th bin 𝒦_j. To do so, one introduces the characteristic function [Note this is not the characteristic function typically considered in the continuous variables literature, i.e. the Fourier transform of the Wigner function.], defined via the following expectation value

X(η) = Tr[ ρ̂ e^{iη·N̂} ].

Here N̂ is a vector of operators defined such that

η·N̂ = ∑_{j=1}^B η_j N̂_j = ∑_{j=1}^B η_j ∑_{ℓ∈𝒦_j} n̂_ℓ,

where n̂_j = â^†_j â_j is the bosonic number operator of mode j. We anticipate that, if ρ̂ is a Gaussian state, then Eq. (<ref>) may be computed exactly. In Appendix <ref> we show how the characteristic function and the binned probability distribution are related via a discrete Fourier transform, namely

X(η) = ∑_{k∈Ω^B} P(k) e^{iη·k},

where Ω^B = {k | k_i ∈ {0,…,n}, ∀ i∈{1,…,B}}. Here, n is a photon-number (energy) cutoff that needs to be introduced because the Gaussian nature of the input state implies that the number of photons is not fixed. This parameter must be carefully chosen in order to ensure that the probability of observing more than n photons in any given bin is negligible. From Eq. (<ref>), the probability P(k) can then be retrieved by means of an inverse discrete Fourier transform, i.e.

P(k) = 1/(n+1)^B ∑_{ν∈Ω^B} X(2πν/(n+1)) e^{-2πi ν·k/(n+1)}.

Marginal distributions are naturally encompassed by this formalism. In particular, ℓ-marginals involve considering only a fixed set of ℓ output modes while disregarding the rest (one is typically interested in single- and two-mode marginals). These can be regarded as specific instances of binned probability distributions, where we consider ℓ < m bins comprising a single detector each. As an example, focusing on the first ℓ output modes, the relevant characteristic function is

X(η) = Tr_ℓ{ Tr_{m-ℓ}{ρ̂} e^{i∑_{j=1}^{ℓ} η_j n̂_j} },

where Tr_{m-ℓ}{ρ̂} is the reduced ℓ-mode state. Analogously, if we instead consider threshold photo-detection, the formalism still stands, provided that we replace the operator η·N̂ in the characteristic function definition Eq.
(<ref>) with η·Π̂ = ∑_j=1^B η_ j∑_ℓ∈𝒦_jΠ̂_ 1,ℓ,where Π̂_1,ℓ = ℐ-|0⟩⟨$| represents the "on" element of the threshold detection's POVM acting on the Hilbert space of modeℓandℐdenotes the identity operator on the Hilbert space. Notice how, in this scenario, there is no need to introduce an energy cutoff, as the sample space size is naturally finite, regardless of the stateρ̂. However, as opposed to GBS with PNR detection, in this case we are not able to analytically compute the characteristic function. Nevertheless, this problem may be tackled using the Monte Carlo techniques developed in Ref. <cit.>.We also note that, for GBS experiments employing threshold detectors and operating in the non-collisional regime (i.e. the probability of observing two or more photons in any given output mode is negligible), we may still approximate the binned probability distribution using Eq. (<ref>).In the next section we show that it is possible to analytically compute the characteristic function Eq. (<ref>) of a GBS experiment, by exploiting the phase-space formulation of quantum optics.§ CHARACTERISTIC FUNCTION OF GBSA GBS experiment consists of injecting squeezed vacuum states into a passive LON, and sampling the output state using PNR detectors.Them-mode initial stateρ̂_inentering the interferometer thus readsρ̂_in =⊗_j=1^m Ŝ(r_j)|0⟩⟨ ̂|S^† (r_j) ,where Ŝ(r_j) = e^r_j/2(â_j^† 2-â_j^2)is the well-known single-mode squeezing operator andr_j>0is the squeezing parameter. The evolution of the input state through the LON is described by the quantum CP-mapℰ. In particular, the linear transformation of the system's modes induced by the (possibly lossy) LON is entirely characterized by a sub-unitary matrixL.In the following, we provide an overview of the techniques employed to analytically compute the characteristic functionX(η) = ℰ(ρ̂_in) e^iη·N̂,while the details of the calculation may be found in Appendix <ref>. Using the identitye^iθn̂ = :e^(e^iθ-1)n̂:, where:∙:denotes normal operator ordering, we can write the multi-mode phase-shift operatore^iη·N̂as followse^iη·N̂ = ⊗_j=1^B ⊗_ℓ∈𝒦_j :e^(e^iη_j-1)n̂_ℓ : .The expectation value of a normally-ordered operator may then be evaluated by averaging over the phase-space variables according to the positivePrepresentation of the state. In fact, anym-mode quantum stateρ̂admits a non-negative phase-space representation via a quasi-probability distributionP(α,β)such thatρ̂ = ∫_ℂ^2m d^2mα d^2mβ P(α,β) |α⟩⟨β^*|/⟨β^*||α⟩ ,where|α⟩ = |α_1⟩⊗⋯⊗|α_m⟩is anm-mode coherent state.Furthermore, a squeezed vacuum state admits a positivePrepresentation on the real space (rather then complex) <cit.>, and consequently the positivePrepresentation ofρ̂_inreads P_in(x,y) = ∏_i=1^m [ √(1+γ_i)/πγ_i e^-(x_i^2+y_i^2)(γ_i^-1+1/2)+x_i y_i] ,wherex,y∈ℝ^mandγ_i = e^2r_i - 1. After some calculations, we obtain X(η)=∫_ℝ^2md^mxd^my P_in(x,y)e^∑_j=1^B (e^iη_j-1) ∑_ℓ∈𝒦_j(L^*y)_ℓ(Lx)_ℓ.Standard, multi-dimensional Gaussian integration yields the final resultX(η) = ∏_i=1^m [ 2√(1+γ_i)/γ_i]1/√(Q),where the matrixQis defined as Q=[ 2Γ^-1 + 𝕀_m -L^⊺diag{ e^iθ_j}_j=1^m L^*; -L^†diag{ e^iθ_j}_j=1^m L 2Γ^-1 + 𝕀_m ],withΓ= diag{γ_i }_i=1^m.Let us now briefly consider the computational complexity of calculating thebinned probability distributionP(k)using the approach outlined above. As previously pointed out, Gaussian states do not have a definite photon content, hence a suitable energy cutoffnneeds to be introduced. 
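To illustrate how the above comes together in practice, a direct numpy implementation of the determinant formula for X(η) together with the inverse discrete Fourier transform could look as follows. The bin encoding, function names, and the handling of the square-root branch are illustrative choices, and the cutoff n is taken as given here (how it should be chosen is discussed next and in Appendix <ref>).

import numpy as np
from itertools import product

def char_function(eta, L, gammas, bins):
    # X(eta) from the determinant formula: theta_l equals eta_j for every
    # output mode l belonging to bin j.
    gammas = np.asarray(gammas, dtype=float)
    m = L.shape[0]
    theta = np.zeros(m)
    for j, modes in enumerate(bins):
        theta[list(modes)] = eta[j]
    D = np.diag(np.exp(1j * theta))
    diag_block = np.diag(2.0 / gammas + 1.0)
    Q = np.block([[diag_block,           -L.T @ D @ L.conj()],
                  [-L.conj().T @ D @ L,   diag_block]])
    norm = np.prod(2.0 * np.sqrt(1.0 + gammas) / gammas)
    # The principal branch of the square root is assumed; for large systems
    # the branch of sqrt(det Q) should be tracked more carefully.
    return norm / np.sqrt(np.linalg.det(Q))

def binned_probabilities(L, gammas, bins, n):
    # P(k) for all k in {0,...,n}^B via the inverse discrete Fourier transform.
    B = len(bins)
    grid = np.zeros((n + 1,) * B, dtype=complex)
    for nu in product(range(n + 1), repeat=B):
        eta = 2.0 * np.pi * np.array(nu) / (n + 1)
        grid[nu] = char_function(eta, L, gammas, bins)
    return (np.fft.fftn(grid) / (n + 1) ** B).real

# Example (scipy is assumed available for the Haar-random unitary):
# from scipy.stats import unitary_group
# L = unitary_group.rvs(4, random_state=1)
# P = binned_probabilities(L, [np.exp(2 * 0.8) - 1] * 4, [[0, 1], [2, 3]], n=12)
# print(P.sum())   # close to 1 once the cutoff n is large enough

The same inverse-DFT routine applies unchanged to the thermal- and squashed-state characteristic functions of Section <ref>, by swapping out char_function.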
In Appendix <ref> we show how this parameter can be appropriately chosen to ensure that the probability of observing more than n photons in any given bin is exponentially suppressed. With this constraint, only (n+1)^B binned photon-count patterns are taken into account, and from Eq. (<ref>) we see that we need to evaluate the characteristic function at (n+1)^B points. Lastly, Eq. (<ref>) reveals that computing X(η) amounts to evaluating the determinant of the 2m × 2m matrix Q, which can be done efficiently. From these considerations, we conclude that the calculation of binned probability distributions is computationally efficient.

§ HAAR-AVERAGED DISTRIBUTIONS

Within the usual paradigmatic setting of ideal GBS, the m × m unitary matrix U that describes the LON is drawn at random according to the Haar measure. In this section we derive the asymptotic properties of the binned probability distribution P(k|U) averaged over all possible interferometric configurations, where we have highlighted the U-dependence of the function. We also derive the corresponding distribution for distinguishable input states. We remind the reader that the Haar average of P(k|U) is defined as

⟨P(k)⟩ = ∫ P(k|U) dμ(U),

where dμ denotes the Haar measure and the integral is taken over the whole unitary group. The unitary invariance of the Haar measure implies that for every unitary matrix W it holds that

⟨P(k)⟩ = ∫ P(k|UW) dμ(U) = ∫ P(k|UW) dμ(U) dμ(W).

The second equality follows from the fact that the averaged binned probability distribution is independent of W, thus justifying a further average over the latter. In what follows, we take W as a diagonal matrix that represents an m-mode phase shift e^{iϕ·n̂} applied to the input state. Consequently, the Haar measure simply reads dμ(W) = d^mϕ/(2π)^m. It is well known that averaging over ϕ causes the off-diagonal elements of the initial state's density matrix - expressed in the Fock basis - to vanish. In fact, let us consider a generic m-mode quantum state |ψ⟩ = ∑_n c_n|n⟩, where |n⟩ = |n_1⟩⊗⋯⊗|n_m⟩ is an m-mode Fock state. One easily shows that

∫ d^mϕ/(2π)^m e^{iϕ·n̂}|ψ⟩⟨ψ|e^{-iϕ·n̂} = ∑_n |c_n|^2 |n⟩⟨n|,

where we have used the integral representation of the Kronecker delta ∫ d^mϕ/(2π)^m e^{i(n-m)·ϕ} = δ_{n,m}. Hence, the above argument implies that, when computing Haar averages of the binned probability distribution, we can equivalently substitute any initial state with its fully decohered version. The advantage of dealing with such a statistical mixture lies in the fact that one can readily exploit known results applicable to Fock-state inputs, by means of post-selecting on the total number of detected photons n. In Ref. <cit.>, Shchesnovich used combinatorial arguments to prove that, given n input photons - either perfectly distinguishable or indistinguishable - impinging on an m-mode unitary LON whose output modes are partitioned into B bins, the probability of observing a specific detection pattern k=(k_1,…,k_B), averaged over Haar-random interferometers, is given by

P^dist_Fock(k) = n!/(∏_{i=1}^B k_i!) ∏_{i=1}^B q_i^{k_i},

P^indist_Fock(k) = P^dist_Fock(k) ∏_{i=1}^B ( ∏_{ℓ=0}^{k_i-1} [1+ℓ/|𝒦_i|] ) / ∏_{ℓ=0}^{n-1} [1+ℓ/m],

where q_i = |𝒦_i|/m, |𝒦_i| being the cardinality of 𝒦_i, i.e. the number of output modes within the i-th bin.
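These two expressions are straightforward to evaluate numerically; a direct transcription (with illustrative function names) reads:

from math import factorial, prod

def p_fock_dist(k, bin_sizes, m):
    # Haar-averaged binned distribution for n = sum(k) distinguishable photons.
    n = sum(k)
    q = [size / m for size in bin_sizes]
    multinom = factorial(n) / prod(factorial(ki) for ki in k)
    return multinom * prod(qi ** ki for qi, ki in zip(q, k))

def p_fock_indist(k, bin_sizes, m):
    # Indistinguishable-photon counterpart: the distinguishable result times
    # the bunching correction factor of the second expression above.
    n = sum(k)
    num = prod(prod(1 + l / size for l in range(ki))
               for ki, size in zip(k, bin_sizes))
    den = prod(1 + l / m for l in range(n))
    return p_fock_dist(k, bin_sizes, m) * num / den

As a quick consistency check, for a single bin containing all m modes both functions return 1 for any n.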
In the asymptotic limit n ≫ 1 and B ≪ n, min_i|𝒦_i|, these expressions further reduce to a Gaussian,

P^σ_Fock(k) ≈ exp{ -n ∑_{i=1}^B (x_i-q_i)^2/[2(1+σα)q_i] } / [ (2π(1+σα)n)^{(B-1)/2} ∏_{i=1}^B √(q_i) ] × ( 1 + 𝒪(αδ_{σ,1}, 1/n) ),

where x_i = k_i/n, α = n/m is the particle density, δ_{σ,1} denotes the Kronecker delta, and we have σ = 1 for indistinguishable particles and σ = 0 for distinguishable ones. Eq. (<ref>) represents the quantum generalization of the well-known asymptotic law for a multinomial distribution (de Moivre-Lagrange-Laplace theorem <cit.>), which governs the behaviour of boson sampling instances with perfectly distinguishable particles (σ = 0). Quantum statistical effects are taken into account by the parameter α. Note how the asymptotic law in Eq. (<ref>) only depends on the total number of photons, but not on the specific Fock input state compatible with n. Additionally, Eq. (<ref>) is valid for any assignment of modes to bins, at fixed cardinality of the latter.

Coming back to GBS, let us consider, as an example, the paradigmatic instance of the task where m identical squeezed vacuum input states Ŝ(r)|0⟩ enter the LON. Upon post-selecting on detection events with n total photons, the probability of observing a specific pattern k, averaged over Haar-random unitaries representing the ideal LON, will be approximated by Eq. (<ref>), suitably renormalized according to the total photon number distribution. The latter is equal to P_m(n/2), i.e. the probability of detecting n/2 photon pairs, given by Eq. (<ref>). Putting everything together, we obtain the Haar-averaged asymptotic law for binned-detector GBS

P^σ_Gaussian(k) = P_m(n/2) P^σ_Fock(k) = \binom{m/2+n/2-1}{n/2} (sech r)^m (tanh r)^n P^σ_Fock(k).

Notice how the above equation holds for even n, while P^σ_Gaussian(k) = 0 otherwise, due to the fact that ideal squeezed vacuum states only contain an even number of photons. We display this distribution in Fig. <ref>, with a comparison to numerical averages.

§ CLASSICAL MOCK-UP DISTRIBUTIONS

Within the context of validating a Gaussian boson sampler, it is of great importance to ensure that the experimental samples are statistically more compatible with the theoretical ground truth of a (possibly lossy) GBS instance than with the output probability distribution of a sampling task that may be efficiently simulated on a classical machine. This situation arises, for example, when the input states entering the LON are P-classical states, i.e. their Glauber-Sudarshan P representation is non-negative. These classical input states can then be chosen to resemble a squeezed vacuum state. We remind the reader that any m-mode quantum state ρ̂ admits a diagonal representation on the coherent state basis by means of the Glauber-Sudarshan P function, namely

ρ̂ = ∫ d^{2m}β P(β) |β⟩⟨β|.

Despite being normalized, P(β) may diverge more severely than a delta function and, in general, is not positive semi-definite. A state ρ̂ is said to be P-classical if its P function is positive and well-behaved. The computation of the characteristic function for a GBS instance employing P-classical input states (details can be found in Appendix <ref>) proceeds similarly to what we have outlined in Section <ref>, the main difference being that we can now exploit the Glauber-Sudarshan P representation, thus eliminating the necessity of resorting to the positive P representation.
This, in turn, results inthe dimension of the phase space being halved.Once again, we start from the definition of the characteristic function X(η) = ℰ(ρ̂_in) e^iη·N̂,where now the quantum state at the output of the LON can be expressed as ℰ(ρ̂_in) =∫ d^2mβP_in(β) |L β⟩⟨ | ,whereP_inis thePfunction of them-mode input state.Using Eq. (<ref>) and after some algebra, we obtain X(η)= ∫ d^2mβP_in(β) e^β^†𝒰^⊺β,where𝒰 = L^⊺H L^*andHis defined in Eq. (<ref>). In what follows, we focus on two classes of classical input states, namely thermal states and squashed states. These constitute two of the most common choices ofP-classical input states that are tested against experimental data coming from a Gaussian boson sampler <cit.>. Let us consider anm-mode thermal input stateρ̂_in = ⊗_j=1^m ν̂_th(k_i), wherek_i = 2n_i+1andn_iis the mean photon number ofν̂_th(k_i). One can show that itsPfunction reads P_th(β) = 𝒩 e^-β^† D β,whereD=diag(2/k_1-1,…,2/k_m-1)and𝒩 = ∏_i=1^m [2/π(k_i-1)] .We can now substitute Eq. (<ref>) into Eq. (<ref>), carry out a multi-dimensional Gaussian integral, and obtain X(η) =𝒩(2π)^m/√(Q),whereQis a complex symmetric matrix defined as Q =[ 2D-𝒰-𝒰^⊺ i(𝒰-𝒰^⊺); i(𝒰^⊺-𝒰) 2D-𝒰-𝒰^⊺ ].Let us now focus on squashed states, i.e. Gaussian states that exhibit vacuum fluctuations in one quadrature (and more than vacuum fluctuation in its conjugate). They may conveniently be parametrized as squeezed thermal states, namelyρ̂_in = ⊗_i=1^m Ŝ(r_i)ν̂_th(e^2r_i)Ŝ^†(r_i) ,withr_i>0. TheP-function of the state Eq. (<ref>) reads P_sq(β) = 𝒩 e^-x^⊺ D xδ^(m)(y) ,whereβ = x + iy,D=diag(2/λ_1,…,2/λ_m)and 𝒩 = ∏_i=1^m [√(2/πλ_i)] .Substituting Eq. (<ref>) into Eq. (<ref>) and integrating over them-dimensional delta function leads to X(η) = 𝒩∫ d^mxe^-x^⊺ (D-𝒰^⊺) x.Finally, standard multi-dimensional Gaussian integration yields X(η) = 𝒩√((2π)^m/Q),withQ = 2D-𝒰-𝒰^⊺.§ PARTIAL DISTINGUISHABILITY Together with losses and detection inefficiencies, partial distinguishability constitutes another source of imperfection that might prevent the sampling task from entering a regime where quantum advantage is, in principle, attainable. In Ref. <cit.>, the authors introduced a simple toy model of GBS that aims at capturing some of the phenomenology associated with partial distinguishability of the photons, and studied how the latter-measured by the indistinguishability efficiency0≤η_i≤1-affects the computational complexity of the problem. The idea that underlies the model is that, before entering the interferometer, the initially indistinguishable light undergoes a process which turns some of the photons into distinguishable ones. These then propagate through the LON via virtual modes without interfering with other photons, before being eventually measured by the detectors. In the following, we adopt the nomenclature of Ref. <cit.> and call “port” what we have referred to as “mode” up until now.In fact, each of the LON'smports is simultaneously populated by the indistinguishable mode as well as by othermadditional distinguishable modes, that independently contribute to the photo-count. Equivalently, each mode spansmports and we reserve the superscript(j)for quantities related to thej-th distinguishable mode. The indistinguishable mode is initially populated withmsqueezed vacuum states. 
On the other hand, each virtual mode is initialized in the vacuum state, until a fictitious beam-splitter-like transformation causes the exchange of photons between thej-th port of the indistinguishable mode and thej-th port of thej-th distinguishable mode.As a result, before entering the LON, both the indistinguishable and distinguishable modes are populated by lossy squeezed vacuum states, their covariance matrices respectively given by (see Ref. <cit.> for more details)σ = ⊕_j=1^m σ̃(r_j,η_i) ,σ^(j) = 𝕀_2j-2⊕σ̃(r_j,1-η_i) ⊕𝕀_2m-2j,whereσ̃(r,η) =[η e^2r +1-η0;0 η e^-2r +1-η ]is the covariance matrix ofρ̂(r,η): a single-mode squeezed vacuum stateŜ(r)|0⟩that propagated through a loss channel with trasmissivityη. The output detection pattern is obtained simply by summing the contribution of the indistinguishable moden=(n_1,…,n_m)and those of the virtual modesn^(j)=(n^(j)_1,…,n^(j)_m). As the modes contribute independently to the photo-count, it follows that the model we have described is equivalent to simulatingm+1distinct lossy GBS instances where: the Gaussian input states have covariance matrices Eq. (<ref>) and Eq. (<ref>), eachm-port LON is described by the matrixL, and corresponding output ports across them+1modes are grouped together. Later we will absorb the fictitious losses of the input state into the LON, in order to retain a GBS implementation whose input ports are fed either with squeezed vacuum or vacuum states. Detector binning is easily incorporated into this framework by considering a partition of the output ports{𝒦_j }_j=1^Bacross all modes and further grouping together corresponding bins, as can be seen in Figure <ref> (notice how the total number of bins remainsB). This is clearly equivalent to simulating a bigger GBS instance withm(m+1)ports however, since most of them are fed with vacuum states, this is not reflected in an increase in the computational complexity of computing the binned probabilities. We illustrate this with a simple example of anm-port GBS experiment with input stateρ̂_in = |ψ_in⟩⟨$|, where |ψ_in⟩ = Ŝ(r)|0⟩⊗|0⟩^⊗ m-1. Exploiting the positive representation on the phase real space of a squeezed vacuum state we can write ρ̂_in = ∫ d^mxd^myP_in(x,y)|x⟩⟨y|/⟨y||x⟩ ,where P_in(x,y) = P(x_1,y_1) ∏_i=2^m δ(x_i) δ(y_i) and P(x,y) is defined in Eq. (<ref>). The characteristic function is given byX(η) = ∫ d^mxd^myP_in(x,y) e^x^⊺𝒰y,and after integrating over the delta functions we are left with the two-dimensional Gaussian integralX(η) = √(1+γ)/πγ∫ d^2ze^-1/2z^⊺ Q z = 2√(1+γ)/γ√(Q),where z=(x_1,y_1) and Q =[ 2γ^-1+1 -1-𝒰_11; -1-𝒰_11 2γ^-1+1 ].Hence, in this scenario, computing the characteristic function amounts to the calculation of the determinant of a 2× 2 matrix, while in Section <ref> we showed that when all input ports are fed with squeezed light the Q matrix is 2m× 2m. One then easily realizes that, in general, each vacuum input state reduces the dimension of the Q matrix by 2.Consequently, computing the characteristic function of a GBS instance with partial distinguishability amounts to computing the determinant of a 4m× 4m matrix, thus retaining the same scaling of the ideal indistinguishable case.Lastly, we point out that the fictitious losses of the input states may be absorbed into the state's evolution, as depicted in Figure <ref>. 
In particular, we consider a big LON that describes the evolution of all ports across all modes, characterized by the matrix L̃ = √(η_i) L⊕ (√(1-η_i)L)^⊕ m.Note that simulating GBS with fully distinguishable photons may be achieved by setting η_i = 0, which causes the indistinguishable mode to disappear. In this case, the big LON is described by L̃ = ⊕_i=1^m L .Hence, following the argument above, computing the characteristic function in this scenario amounts to the calculation of the determinant of a 2m× 2m matrix, same as for the ideal indistinguishable case. § CONCLUSIONS In this paper we have studied the problem of validating a Gaussian boson sampler via detector binning. In particular, we showed how to compute the binned-count probability distribution for a GBS instance employing PNR detectors, by means of discrete Fourier transform of the related characteristic function. We derived an analytical closed formula for the latter and showed that its computation only involves the evaluation of matrix determinants, thus ensuring the computational efficiency of the protocol. Our method can accommodate for multiple noise sources, including loss and partial distinguishability. This is a crucial requirement to substantiate any claims of quantum advantage, as the presence of noise and imperfections may render the task classically efficiently simulable, thus preventing it from reaching the regime where quantum speedup is achievable. Additionally, our method encompasses known validation techniques based on marginal probabilities and correlation functions, and may also be used to compute binned-count probability distributions for classical inputs such as thermal states and squashed states.These can then be used to certify that experimental samples are statistically more compatible with the ground truth of GBS, rather then with an efficiently computable probability distribution that only resembles the latter.Lastly, we computed Haar averages of the binned-count probability distribution for an ideal GBS task and showed that, at fixed number of detected photons, one obtains a Gaussian profile in the asymptotic limit. § ACKNOWLEDGMENTSG.B. is part of the AppQInfo MSCA ITN which received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 956071.B.S. is a Research Fellow of the Fonds National de la Recherche Scientifique – FNRS.MSK acknowledges the Samsung GRC programme and the UK EPSRC through EP/W032643/1 and EP/Y004752/1. B.S. thanks Ursula Cardenas Mamani for help on the figures.§ BINNED PROBABILITY AND CHARACTERISTIC FUNCTION Here, we explicitly show that the two expressions of the characteristic function X(η) introduced in the main text Eq. (<ref>) and Eq. (<ref>), indeed coincide.Expanding the state ρ̂ on the Fock state basis |s⟩≡⊗_j=1^m â^† s_j_j/√(s_j!)|0⟩yieldsX(η) = {ρ̂e^iη·N̂} = ∑_s,t{|s⟩⟨ ̂|ρ|t⟩⟨e|^iη·N̂}= ∑_s,t⟨s|ρ̂|t⟩⟨e|^iη·N̂|s⟩ = ∑_s,t⟨s|ρ̂|t⟩⟨e|^i∑_j η_j ∑_ℓ∈𝒦_jn̂_ℓ|s⟩ = ∑_s⟨s|ρ̂|s⟩e^i∑_j η_j ∑_ℓ∈𝒦_js_ℓ.Crucially, the sum over s can now be decomposed as a sum over all possible partitioned modes' detection patterns k, and a sum over the Fock states |s⟩ that are compatible with k, i.e. k_j = ∑_ℓ∈𝒦_js_ℓ for every j∈{ 1,…,B}. 
Hence, we obtainX(η) =∑_k∑_s|k⟨s|ρ̂|s⟩ e^i∑_j η_j ∑_ℓ∈𝒦_js_ℓ = ∑_k e^iη·k∑_s|k⟨s|ρ̂|s⟩ = ∑_k P(k) e^iη·k,where we have used the fact that ∑_ℓ∈𝒦_js_ℓ = k_j and that ∑_s|k⟨s|ρ|s⟩= P(k).§ CHARACTERISTIC FUNCTION FOR GBS A GBS experiment consists of injecting a LON with squeezed vacuum states and sampling from the output photon-number distribution. The m-mode initial state ρ̂_in thus readsρ̂_in =⊗_j=1^m Ŝ(r_j)|0⟩⟨ ̂|S^† (r_j) ,where Ŝ(r_j) = e^r_j/2(â_j^† 2-â_j^2)is the single-mode squeezing operator and r_j>0 is the squeezing parameter. The quantum evolution of a state via the lossy LON is described by the CP-map ℰ.We recall that the corresponding transformation of the system's modes is linear, hence it is fully characterized by a sub-unitary matrix L, i.e. L^† L ≤𝕀_m, with 𝕀_m denoting the m× m identity matrix.Hence, given a partition {𝒦_j}_j=1^Bof the output modes into B bins, the characteristic function of a GBS experiment we aim to compute readsX(η) = ℰ(ρ̂_in) e^iη·N̂,whereη·N̂ = ∑_j=1^B η_ j∑_ℓ∈𝒦_jn̂_ℓ.In Ref. <cit.> the authors computed this quantity for an ideal system, where the evolution is described by a unitary matrix. Here, we generalize the calculation by first allowing noisy evolution described by L and in Appendix <ref> we compute the characteristic function for P-classical states.Using the identity e^iθn̂ = :e^(e^iθ-1)n̂:we can write the phase-shift operator appearing in Eq. (<ref>) ase^iη·N̂ = ⊗_j=1^B ⊗_ℓ∈𝒦_je^iη_j n̂_ℓ = ⊗_j=1^B ⊗_ℓ∈𝒦_j :e^(e^iη_j-1)n̂_ℓ : .We can prove Eq. (<ref>) by explicit computation of the matrix elements of the two operators on the coherent state basis |α⟩. In particular, we obtain⟨α|e^iθn̂|α⟩ = ∑_n=0^∞⟨α|e^iθn̂|n⟩⟨n||α⟩ = ∑_n=0^∞ e^iθ n |⟨n||α⟩|^2 =e^-|α|^2∑_n=0^∞e^iθ n|α|^2n/n! = e^(e^iθ-1)|α|^2,and⟨α|:e^(e^iθ-1)n̂:|α⟩=⟨α|:e^(e^iθ-1)â^†â:|α⟩ = e^(e^iθ-1)|α|^2where we have used the representation of the coherent state on the Fock state basis, namely |α⟩=e^-|α|^2/2∑_n α^n/√(n!)|n⟩.Eq. (<ref>) and Eq. (<ref>) manifestly coincide, thus concluding our proof. The expectation value of a normally-ordered operator may then be evaluated by averaging over the phase-space variables according to any generalized P distribution of the state. A generic m-mode quantum state ρ̂ admits a non-singular phase-space representation via a quasi-probability distribution P(α,β) such thatρ̂ = ∫_ℂ^2m P(α,β) |α⟩⟨β^*|/⟨β^*||α⟩ dμ(α,β) ,where α,β∈ℂ^m and |α⟩ = |α_1⟩⊗⋯⊗|α_m⟩ is an m-mode coherent state.The specific functional form of P(α,β) depends on the choice of the integration measure, and it can be showed that dμ (α,β)=d^2mα d^2mβ leads to a non-negative representation for any state, which we call positive P representation. Hence, the output state of the LON ℰ(ρ̂_in) can be expressed as ℰ(ρ̂_in) = ∫_ℂ^2m P_in(α,β) ℰ(|α⟩⟨β^*|)/⟨β^*||α⟩ d^2mα d^2mβ,where P_in is the positive P representation of ρ̂_in and the integral spans the whole 2m-dimensional complex space, thus corresponding to a 4m-dimensional real volume integral. In order to compute the action of the CP-map ℰ on the operator |α⟩⟨β^*|, we recall that an m-mode lossy LON may be simply modelled by considering a bigger 2m-mode loss-less interferometer where the additional m environmental modes are initialized in the vacuum state, and a final trace is taken over the environmental degrees of freedom. 
This ideal LON is characterized by a 2m×2m unitary block matrix T that readsT=[ L N; P M ],and whose unitarity enforces the constraint L^† L + P^† P = 𝕀_m .We can now compute ℰ(|α⟩⟨β^*|)= _env{T̂(|α⟩⟨β^*|⊗|0⟩⟨|) T̂^†} = _env{|T(α 0)⟩⟨T(β^* 0)|} = _env{|Lα Pα⟩⟨Lβ^* Pβ^*|}= ⟨Pβ^*|| Pα⟩|Lα⟩⟨Lβ^*|,where T̂ is the unitary operator that describes the loss-less LON and the trace is taken over the m environmental modes. Substituting Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>) yieldsX(η)=∫_ℂ^2mP_in(α,β)⟨Pβ^*||Pα⟩/⟨β^*||α⟩⟨Lβ^*|⊗_j=1^B ⊗_ℓ∈𝒦_j :e^(e^iη_j-1)n̂_ℓ : |Lα⟩ d^2mα d^2mβ=∫_ℂ^2mP_in(α,β)⟨Pβ^*|| Pα⟩⟨Lβ^*|| Lα⟩/⟨β^*||α⟩ e^∑_j=1^B (e^iη_j-1) ∑_ℓ∈𝒦_j(L^*β)_ℓ(Lα)_ℓ d^2mα d^2mβ,where we have used the fact that matrix elements on the coherent state basis of normally-ordered operators satisfy the property ⟨β| : f(â^†,â): |α⟩ = ⟨β||α⟩ f(β^*,α) , where f(â^†,â) is a generic function of the bosonic operators. We can also conveniently rewrite the exponential appearing in the expression above ase^∑_j=1^B (e^iη_j-1)∑_ℓ∈𝒦_j(L^*β)_ℓ(Lα)_ℓ = e^α^⊺𝒰β,where 𝒰 = L^⊺ H L^* and H is a diagonal matrix that contains the phase information, defined as H=diag(e^iθ_1-1,…,e^iθ_M-1) , withθ_i = η_jif i∈𝒦_j .Furthermore, using the well known formula for the overlap between coherent states⟨β||α⟩ = e^-1/2(|β|^2 + |α|^2 - 2β^* α),one easily proves that⟨Pβ^*|| Pα⟩⟨Lβ^*|| Lα⟩/⟨β^*||α⟩ = e^-1/2[β^⊺ (L^† L + P^† P -𝕀)β^*+α^*⊺(L^† L + P^† P -𝕀)α-2β^⊺(L^† L + P^† P -𝕀)α] = 1 ,where we have used the unitarity constraint Eq. (<ref>). Hence, the characteristic function readsX(η)=∫_ℂ^2mP_in(α,β) e^α^⊺𝒰βd^2mα d^2mβ.A single-mode squeezed vacuum state S(r)|0⟩ admits a positive P representation on a two-dimensional real space that reads <cit.> P(x,y) = √(1+γ)/πγ e^-(x^2+y^2)(γ^-1+1/2)+xy,where 1+γ = e^2r. Note that the above expressionholds for strictly positive values of the squeezing parameter. Since ρ̂_in is a tensor product, it follows that P_in(α,β) is simply the product of the positive P distributions of the squeezed vacuum states, i.e.P_in(x,y) = ∏_i=1^m [ √(1+γ_i)/πγ_i e^-(x_i^2+y_i^2)(γ_i^-1+1/2)+x_i y_i] = 𝒩e^-x^⊺ A x - y^⊺ A y + x^⊺y,where 𝒩 = ∏_i=1^m [ √(1+γ_i)/πγ_i] ,A=diag{γ_i^-1+1/2}_i=1^m .We can now substitute Eq. (<ref>) into Eq. (<ref>) andobtainX(η) = 𝒩∫_ℝ^2m d^mx d^mye^-x^⊺ A x - y^⊺ A y - x^⊺By - y^⊺B^⊺x =𝒩∫_ℝ^2m d^2mz e^-1/2z^⊺ Q z,where B=-(𝕀+𝒰)/2, z=(x,y) and the matrix Q defined asQ = 2[ A B; B^⊺ A ] =[ 2Γ^-1 + 𝕀_m -L^⊺diag{ e^iθ_j}_j=1^m L^*; -L^†diag{ e^iθ_j}_j=1^m L 2Γ^-1 + 𝕀_m ]with Γ = diag{γ_j }_j=1^m. Notice how the positive-definiteness of the real part of the complex symmetric matrix Q ensures the convergence of the Gaussian integral in Eq. (<ref>). Straightforward multi-dimensional Gaussian integration yields the final resultX(η) = 𝒩(2π)^m/√(Q)= ∏_i=1^m [ 2√(1+γ_i)/γ_i]1/√(Q). § ENERGY CUTOFF The sample space size of a GBS experiment employing PNR detection is naturally infinite because of the Gaussian nature of the input states. The latter implies that the number of photons reaching the detectors is not fixed, therefore bringing forth the necessity to introduce an energy cutoff n such that the probability of observing more than n photons in any of the bins is negligible. This, of course, depends on the specific partition of the output modes, however it is clear that this condition is automatically satisfied if we require the probability of having more than n photons entering the passive LON to be exponentially small. 
Notice that if the GBS instance at study is operating in the non-collisional regime, i.e. the probability of observing more then one photon in any output mode is highly suppressed (also an assumption of current complexity proof of GBS), then the total number of photons is much smaller than the number of modes m, and we can safely set the cutoff to n = m.In the following, we focus on the particularly relevant case of identical squeezed vacuum states entering the interferometer. We emphasize that the LON does not contain active optical elements, meaning that the total number of photons may only decrease due to losses within the system. The expansion on the Fock basis of a single-mode squeezed vacuum stateS(r)|0⟩ = 1/√(coshr)∑_n=0^∞ (tanhr)^n √((2n)!)/2^n n!|2n⟩reveals that the latter contains even number of photons only, with the probability of observing k couples of photons readingP_1(k) = (tanhr)^2k/coshr·(2k)!/(2^k k!)^2.If we now consider m identical squeezed vacuum states, then the probability P_m (k) of observing a total of k photon pairs is obtained by subsequent convolution of Eq. (<ref>). In particular, for even m one obtains <cit.> P_m (k) = m/2+k-1k(r)^m(tanhr)^2k,i.e. a negative binomial distribution.We recall that the average photon number of S(r)|0⟩ is (sinhr)^2, hence the mean value of the total photon pairs distribution P_m is simply m/2(sinhr)^2. The extension of Eq. (<ref>) to odd values of m is achieved by employing the Gamma function to generalize the factorial, namelyP_m (k) = Γ(k+m/2)/Γ (m/2)k!(r)^m(tanhr)^2k.The negative binomial distribution with support on the set { 0,1,2,…} models the number of observed failures witnessed before n successes in consecutive Bernoulli trials. Hence, if Y_n∼NB(n,p) then ℙ[Y_n=k] = k+n-1k (1-p)^k p^n ,where p is the success probability of a single Bernoulli trial.By comparing Eq. (<ref>) with the parametrization of Eq. (<ref>), we establish the correspondences n=m/2 ,p=(r)^2 ,1-p=(tanhr)^2 .Let B_s+n be a random binomial variable with s+n and p being the number of trials and the success probability, respectively. The following identity holdsℙ[Y_n > s] = ℙ[B_s+n<n] ,i.e. the probability of observing more than s failures before having witnessed n successes is equal to the probability of observing less than n successes in s+n trials. Our aim is to derive an anti-concentration inequality for the negative binomial distribution, i.e. we want to bound ℙ[Y_n>α𝔼[Y_n]] = ℙ[B_α𝔼[Y_n]+n<n] ,where α>1. The equation above reveals that it is possible to bound the tail of the negative binomial distribution by exploiting the properties of the binomial distribution. The expectation value of B_α𝔼[Y_n]+n reads𝔼[B_α𝔼[Y_n]+n] = (α𝔼[Y_n]+n)p=n(α (1-p)+ p) ,hence we can write Eq. (<ref>) asℙ[Y_n>α𝔼[Y_n]] = ℙ[B_α𝔼[Y_n]+n<𝔼[B_α𝔼[Y_n]+n]/α(1-p)+p] .The equation above reveals that it is possible to bound the tail of the negative binomial distribution by exploiting Chernoff's bound for the binomial distribution's lower tail <cit.> ℙ[B<(1-ε)𝔼[B]]≤exp(-ε^2/2𝔼[B]) ,where B is a generic binomial random variable and 0<ε<1. In particular, using the parameter identifications in Eq. (<ref>) and 1-ε = (α(1-p)+p)^-1 we obtain the bound we were looking for, namelyℙ[k> α m sinh^2r/2] ≤exp( -m (α-1)^2sinh^2rtanh^2r/4(1+αsinh^2r)) .This bound is exponentially decreasing in the number of modes m and that any accuracy can be achieved by tuning α. In particular, as α increases, the truncation error decreases exponentially, while algorithm's complexity remains unchanged. 
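In practice, the bound can be inverted numerically to pick a cutoff. A minimal helper (with a simple grid search over α; names and defaults are illustrative) might read:

import numpy as np

def truncation_bound(m, r, alpha):
    # Right-hand side of the tail bound on the number of photon pairs.
    s2, t2 = np.sinh(r) ** 2, np.tanh(r) ** 2
    return np.exp(-m * (alpha - 1.0) ** 2 * s2 * t2 / (4.0 * (1.0 + alpha * s2)))

def photon_cutoff(m, r, eps=1e-6, alpha_step=0.1):
    # Smallest alpha on the grid whose bound is below eps, and the
    # corresponding total-photon cutoff n ~ alpha * m * sinh(r)^2.
    alpha = 1.0 + alpha_step
    while truncation_bound(m, r, alpha) > eps:
        alpha += alpha_step
    return alpha, int(np.ceil(alpha * m * np.sinh(r) ** 2))

print(photon_cutoff(m=100, r=1.0))   # roughly alpha ~ 3 and a cutoff of a few hundred photons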
Notice how, in the limit of many modes m≫ 1, the total photon number distribution distribution Eq. (<ref>) converges to a normal distribution by virtue of the central limit theorem. Consequently, in this regime, one could safely replace Eq. (<ref>) with suitable bounds for the tail of a Gaussian. § CHARACTERISTIC FUNCTION FOR P-CLASSICAL INPUT STATES In this Appendix we compute the characteristic function of a sampling experiment where thermal or squashed states are sent into a LON described by sub-unitary matrix L, before being measured by PNR detectors that are grouped into bins according to the partition {𝒦_j}_j=1^B. The calculations closely follow those presented in Appendix <ref>, the main difference being that the classical nature of these two classes of input states at study allows us to exploit their Glauber-Sudarshan P representation rather than their positive P representation, effectively halving the dimension of the phase space. Any m-mode quantum state ρ̂ admits a diagonal representation on the coherent state basisρ̂ = ∫ d^2mβ P(β) |β⟩⟨ | ,where P(β) is the state's P function. The latter typically displays negativities and severe divergencies, however it is well defined and positive-definite for states - like thermal and squashed - that lack genuine quantum properties.Following Appendix <ref>, we start from X(η) = ℰ(ρ̂_in) e^iη·N̂= ℰ(ρ̂_in)⊗_j=1^B ⊗_ℓ∈𝒦_j :e^(e^iη_j-1)n̂_ℓ : ,and exploit the P-function representation of the initial state ρ̂_in to express the output state asℰ(ρ̂_in) = ∫ d^2mβP_in(β)ℰ (|β⟩⟨)|= ∫ d^2mβP_in(β) |L β⟩⟨ | .Here, we have used the fact that, by definition, a LON described by the sub-unitary matrix L sends a multi-mode coherent state |β⟩ to |Lβ⟩.Hence, the characteristic function readsX(η)= ∫ d^2mβP_in(β) ⟨Lβ|⊗_j=1^B ⊗_ℓ∈𝒦_j :e^(e^iη_j-1)n̂_ℓ : |Lβ⟩ = ∫ d^2mβP_in(β) e^∑_j=1^B(e^iη_j-1)∑_ℓ∈𝒦_j(Lβ)^*_ℓ (Lβ)_ℓ = ∫ d^2mβP_in(β) e^β^†𝒰^⊺β,where 𝒰 = L^⊺ H L^*, and H=diag(e^iθ_1-1,…,e^iθ_M-1) withθ_i = η_j if i∈𝒦_j. We can now substitute specify the input state, substitute its P function in the previous expression and explicitly compute the characteristic function by integration. §.§ Thermal state inputThe P-function of a generic single-mode Gaussian state with zero displacement and covariance matrix σ reads <cit.>P(β) = 2/π√(σ-𝕀_2) e^-2(xy )(σ-𝕀_2)^-1(xy )^⊺,where x and y denote the real and imaginary parts of β, respectively. The conventions used are such that the covariance matrix of a thermal state ν̂_th(k) reads σ = k𝕀_2 ≡ (2n+1)𝕀_2, where n=ν̂_th(k) n̂ is the mean number of thermal photons. The P-function of a multi-mode thermal state ρ̂_in = ⊗_i=1^m ν̂_th(k_i) is simply the product of the P functions of single mode thermal states and reads P_th(β) = ∏_i=1^m 2/π(k_i-1) e^-2/k_i-1|β_i|^2= 𝒩 e^-β^† D β,where D=diag(2/k_1-1,…,2/k_m-1) and the normalizing factor 𝒩 is given by 𝒩 = ∏_i=1^m [2/π(k_i-1)] .Hence, we can write the characteristic function as the Gaussian integralX(η) =𝒩∫ d^2mβe^-β^† (D-𝒰^⊺)β.Let us now highlight the real and imaginary part of the complex vector β= x + iy, namely X(η)=𝒩∫ d^mx d^mye^-(x^⊺-iy^⊺) (D-𝒰^⊺)(x+iy)= 𝒩∫ d^2mze^-1/2z^⊺ Qz,where z=(x,y) and Q is a complex symmetric matrix defined as Q =[ 2D-𝒰-𝒰^⊺ i(𝒰-𝒰^⊺); i(𝒰^⊺-𝒰) 2D-𝒰-𝒰^⊺ ].Standard multi-dimensional Gaussian integration yields the final resultX(η) =𝒩(2π)^m/√(Q). §.§ Squashed state inputA squashed state is a P-classical Gaussian state that exhibits vacuum fluctuations in one quadrature and higher fluctuations in the conjugate one. 
§.§ Squashed state input A squashed state is a P-classical Gaussian state that exhibits vacuum fluctuations in one quadrature and higher fluctuations in the conjugate one. A single-mode squashed state can be parametrized as the following squeezed thermal state ρ̂_sq = Ŝ(r)ν̂_th(e^2r)Ŝ^†(r), its covariance matrix reading σ_sq = diag(e^4r,1) with r>0 without loss of generality. One then usually sets r such that the squashed state's mean photon number matches that of the squeezed vacuum state it approximates. Notice how the matrix σ_sq-𝕀_2 that appears in the P function definition Eq. (<ref>) is now singular, leading to a delta-like divergence in its P function, i.e. P_sq(β) = √(2/(πλ)) e^-2x^2/λδ(y), where λ = e^4r-1>0. Let us now consider a LON fed with m squashed states, i.e. ρ̂_in = ⊗_i=1^m Ŝ(r_i)ν̂_th(e^2r_i)Ŝ^†(r_i). The P-function of this state clearly reads P_sq(β) = 𝒩 e^-x^⊺ D xδ^(m)(y), where β = x + iy, D=diag(2/λ_1,…,2/λ_m) and 𝒩 = ∏_i=1^m √(2/(πλ_i)). The characteristic function then reads X(η) = 𝒩∫ d^mx d^my e^-x^⊺ D xδ^(m)(y) e^(x^⊺-iy^⊺) 𝒰^⊺(x+iy) = 𝒩∫ d^mx e^-x^⊺ (D-𝒰^⊺) x. Upon symmetrization of the matrix D-𝒰^⊺ we arrive at X(η) = 𝒩∫ d^mx e^-1/2 x^⊺ Q x = 𝒩√((2π)^m/det Q), where Q = 2D-𝒰-𝒰^⊺. | http://arxiv.org/abs/2310.18113v1 | {
"authors": [
"Gabriele Bressanini",
"Benoit Seron",
"Leonardo Novo",
"Nicolas J. Cerf",
"M. S. Kim"
],
"categories": [
"quant-ph"
],
"primary_category": "quant-ph",
"published": "20231027125552",
"title": "Gaussian boson sampling validation via detector binning"
} |
Numerical impulse controllability for parabolic equations by a penalized HUM approach. Salah-Eddine Chorfi [1], Ghita El Guermai [2], Lahcen Maniar [3], Walid Zouhair [4]. [1,2,3] LMDP, UMMISCO (IRD-UPMC), Cadi Ayyad University, Faculty of Sciences Semlalia, Marrakesh, B.P. 2390, Morocco. [4] Department of Mathematics, Ibn Zohr University, Faculty of Applied Sciences Ait Melloul, Route Nationale N10, Azrou, B.P. 6146, Morocco. This work presents a comparative study to numerically compute impulse approximate controls for parabolic equations with various boundary conditions. Theoretical controllability results have been recently investigated using a logarithmic convexity estimate at a single time based on a Carleman commutator approach. We propose a numerical algorithm for computing the impulse controls with minimal L^2-norms by adapting a penalized Hilbert Uniqueness Method (HUM) combined with a Conjugate Gradient (CG) method. We consider static boundary conditions (Dirichlet and Neumann) and dynamic boundary conditions. Some numerical experiments based on our developed algorithm are given to validate and compare the theoretical impulse controllability results. January 14, 2024 ==================== § INTRODUCTION AND MAIN RESULTS Parabolic equations, where the heat equation is the prototype, constitute a class of Partial Differential Equations (PDEs) that describe the evolution of physical quantities over time and space. The heat equation is particularly important in the study of heat transfer and diffusion processes. Impulsive systems in the context of PDEs refer to systems whose behavior changes abruptly or impulsively at certain points in space or time. These impulsive changes can be modeled mathematically using PDEs with discontinuities or Dirac delta functions, which are used to represent concentrated impulses at specific spatial or temporal locations. Impulsive systems are encountered in various fields, from biological models to fluid dynamics as well as economics, among others. They manifest as sudden boundary conditions in switched control inputs or shock waves in compressible flows. Controlling impulsive systems via impulse controls can be challenging (due to the presence of delta functions), requiring specific techniques such as logarithmic convexity, the Carleman commutator approach, and optimal impulse control theory; see e.g., <cit.>. Impulsive controllability is a concept in control theory that deals with the ability to control a dynamical system by applying control inputs at specific discrete instants or time intervals, often referred to as “impulse times” (or intervals). In impulsive controllability, the control actions are not continuously applied but occur at distinct time points. Impulsive systems can model situations where control inputs change abruptly. They make it possible to handle systems in situations where continuous control may not be feasible or practical. For instance, in switched control systems, the control input can change instantaneously at a specific instant. In this context, the impulse approximate controllability was studied for a linear heat equation with homogeneous Dirichlet and Neumann boundary conditions in <cit.>, using a new strategy combining the logarithmic convexity method and the Carleman commutator approach.
In <cit.>, the authors have established a Lebeau-Robbiano-type spectral inequality for a degenerate one-dimensional elliptic operator with application to impulse control and finite-time stabilization. It should be pointed out that this method is a new approach to steer the solution to zero using impulse control as a stabilizer in finite time. Recently, in <cit.> the authors have established new results of impulse controllability for a general type of dynamic boundary conditions, which introduces mathematical issues that require sophisticated estimates due to the boundary terms. We refer to the seminal paper <cit.> for more details on the non-impulsive control case. Boundary conditions play a crucial role in solving PDEs as they have a significant effect on the behavior of the solutions. They describe the interaction of the system with its surroundings. For example, in heat transfer problems, the temperature on the boundary may represent an insulated or constant temperature boundary, reflecting the physical properties of the system. The choice of appropriate boundary conditions is a fundamental step in the analysis of PDEs in various fields of science and engineering. From a numerical perspective, boundary conditions are fundamental for ensuring the accuracy, stability, and convergence of numerical solutions to PDEs. They guide how the spatial domain should be discretized. In this study, we propose an algorithm designed for the numerical computation of impulse controls with minimal energy. This approach involves an adaptation of the penalized HUM and a CG method to the impulsive case. For further information, we recommend the book <cit.> and the paper <cit.>. Our investigation encompasses both static boundary conditions (Dirichlet and Neumann) as well as dynamic boundary conditions. To validate and compare theoretical findings regarding impulse controllability, we conduct several numerical experiments using the algorithm we have developed. Finally, it should be emphasized that the numerical computation of impulse controls has not been considered before for static boundary conditions. We refer to the section “Future works” of the thesis <cit.>. The remainder of this paper is structured as follows: in Section <ref>, we provide a review of various results related to impulse controllability for the heat equation with different boundary conditions. Section <ref> is devoted to the algorithm for computing impulse optimal controls, accompanied by numerical simulations for illustration. Finally, we conclude with a comparative analysis of numerical outcomes across various boundary conditions. § PRELIMINARY RESULTS Let Ω⊂ℝ^n be a bounded domain with smooth boundary Γ:=∂Ω. Let T>0 be an arbitrary control time and τ∈ (0, T) be an arbitrary fixed impulse time. We consider the following impulse-controlled system [left = ]alignat=2∂_t Ψ- 𝐀 Ψ=0, in (0, T) \{τ}, ψ(·, τ)=ψ(·, τ^-)+1_ω h(·,τ), inΩ, Ψ(0) = Ψ^0, where ψ(·,τ^-) denotes the left limit of the function ψ at time τ, the control region ω⋐Ω is a nonempty open subset, 1_ω stands for the characteristic function of ω, and h(·,τ) is an impulsive control acting at the impulse instant τ. The notation 𝐀 designates a linear operator on an L^2-space with norm ‖·‖, and the state Ψ might be a couple (ψ, ψ_Γ), depending on the type of boundary conditions (see the next subsections). If the operator 𝐀 generates a C_0-semigroup, then for every initial datum Ψ^0, the system (<ref>) has a unique mild solution given by Ψ(t) = e^t𝐀Ψ^0 + 1_{t≥τ}(t) e^(t-τ)𝐀 (1_ω h(τ),0), t∈ (0,T).
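To fix ideas, the following Python sketch (an illustration we add; the grid size and the impulse profile are arbitrary choices, and the Dirichlet case is taken as the simplest instance of 𝐀) evaluates the mild-solution formula for a finite-difference discretization: the state evolves freely, receives the impulse 1_ω h at t=τ, and then evolves freely again.

```python
import numpy as np
from scipy.linalg import expm

# 1-D Dirichlet Laplacian on (0, 1) with N interior grid points
N = 50
dx = 1.0 / (N + 1)
x = np.linspace(dx, 1.0 - dx, N)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2

T, tau = 0.02, 0.01
psi0 = np.sqrt(2.0) * np.sin(np.pi * x)            # initial datum
omega = (x > 0.3) & (x < 0.7)                      # control region
h = -5.0 * np.sin(np.pi * x) * omega               # an arbitrary impulse profile supported in omega

# Mild solution: free flow, impulse at tau, free flow up to T
psi_tau = expm(tau * A) @ psi0 + h                 # psi(tau) = psi(tau^-) + 1_omega h
psi_T = expm((T - tau) * A) @ psi_tau

print("||psi(T)|| with impulse   :", np.sqrt(dx) * np.linalg.norm(psi_T))
print("||psi(T)|| without impulse:", np.sqrt(dx) * np.linalg.norm(expm(T * A) @ psi0))
```

This forward solve is the basic building block that the penalized HUM algorithm described below applies repeatedly.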
System (<ref>) is null approximate impulse controllable at time T if for any ε > 0 and any Ψ^0, there exists a control function h ∈ L^2(ω),such that the associated state at final time satisfiesΨ(·, T)≤ε‖Ψ^0‖ . This means that for every ε >0 and every initial datum Ψ^0, the setℛ_T, Ψ^0, ε :={h ∈ L^2(ω):the solution of (<ref>) satisfies ‖Ψ(·, T)‖≤ε‖Ψ^0‖},is nonempty; which leads to the definition of the cost of null approximate impulse controllability.The quantity K(T,ε):=sup_Ψ^0=1inf_h ∈ℛ_T, Ψ^0, εh_L^2(ω)is called the cost of null approximate impulse controllability at time T. §.§ Dirichlet caseIn this subsection, we recall the impulsive controllability result for the heat equation with the Dirichlet boundary condition: [left = ]alignat=2∂_t ψ-Δψ=0, inΩ×(0, T) \{τ}, ψ(·, τ)=ψ(·, τ^-)+1_ω h(·,τ), inΩ,ψ= 0,onΓ×(0, T) ,ψ(·, 0) = ψ^0, onΩ. To obtain the null approximate impulse controllability of the above equation, the key ingredient is the following logarithmic convexity estimate:For any T > 0 and any ω nonempty open subset of Ω,u(·, T)_L^2(Ω)≤e^C K/Tu(·, T)_L^2(ω)^βu(·, 0)^1-β_L^2(Ω),where β∈ (0,1), C, K >0 are constants, and u is the solution of the following homogeneous system [left = ]alignat=2 ∂_t u-Δu=0, inΩ×(0, T), u= 0,onΓ×(0, T) , u(·, 0)=u^0, inΩ.The previous lemma reflects an observability estimate at a single instant of time. This estimate has been proven using the weight functionΦ (x,t)=-| x-x_0|^2/4(T-t+ρ),(x,t) ∈Ω× (0,T),where x_0∈ω and ρ>0 is suitably chosen.Consequently, the following result on impulse controllability of the equation (<ref>) was established: [<cit.>] The heat equation (<ref>) is null approximate impulse controllable at time T. Moreover, we have the following upper bound for the control cost K_1(T, ε) ≤M_1e^M_2/T-τ/ε^δ,where M_1, M_2 and δ are positive constants depending on Ω and ω.§.§ Neumann caseHere, we recall the impulsive controllability result for the heat equation with the Neumann boundary condition: [left = ]alignat=2∂_t ψ-Δψ=0, inΩ×(0, T) \{τ}, ψ(·, τ)=ψ(·, τ^-)+1_ω h(·,τ), inΩ,∂_νψ= 0,onΓ×(0, T) ,ψ(·, 0) = ψ^0, inΩ, where ν is the unit outward normal vector to Γ, and ∂_νψ denotes the normal derivative. As before, the key ingredient is the following logarithmic convexity estimate:For any T > 0 and any ω nonempty open subset of Ω,u(·, T)_L^2(Ω)≤(e^C(1+1/T)u(·, T)_L^2(ω))^βu(·, 0)_L^2(Ω)^1-β .where β∈ (0,1), C >0 are constants only depending on Ω and ω, and u is the solution of the homogeneoussystem [left = ]alignat=2 ∂_t u-Δu=0, inΩ×(0, T), ∂_νu = 0,onΓ×(0, T) , u(·, 0)=u^0, inΩ.This result has been recently extended to a general parabolic equation with variable diffusion and drift coefficients in <cit.>.Being different from the Dirichlet case, the above lemma has been established by introducing a small parameter s∈ (0,1) in the weight functionΦ_s(x,t)=-s| x-x_0|^2/4(T-t+ρ),(x,t) ∈Ω× (0,T),where x_0∈ω and ρ>0 is suitably chosen.Then, one can prove the following impulse controllability for the equation (<ref>): [<cit.>] The heat equation (<ref>) is null approximate impulse controllable at time T. 
Moreover, we have the following upper bound for the control cost K_2(T, ε) ≤N_1e^N_2/T-τ/ε^σ,where N_1, N_2 and σ are positive constants depending on Ω and ω.§.§ Dynamic caseNow, we consider the following heat equation with dynamic boundary conditions [left = ]alignat=2∂_t ψ-Δψ=0, inΩ×(0, T) \{τ}, ψ(·, τ)=ψ(·, τ^-)+1_ω h(·,τ), inΩ, ∂_tψ_Γ - Δ_Γ ψ_Γ + ∂_νψ=0, onΓ×(0, T)\{τ},ψ_Γ(·, τ)=ψ_Γ(·, τ^-), onΓ,ψ_Γ(x,t) = ψ_|Γ(x,t),onΓ×(0, T) , (ψ(·, 0),ψ_Γ(·, 0))=(ψ^0,ψ^0_Γ), onΩ×Γ, where (ψ^0,ψ^0_Γ)∈ L^2(Ω)× L^2(Γ) denotes the initial condition. Again, the key result is the logarithmic convexity estimate.For any T > 0 and any ω nonempty open subset of Ω, the following estimate holdsU(·, T)_L^2(Ω)× L^2(Γ)≤(μe^K/Tu(·, T)_L^2(ω))^βU(·, 0)^1-β_L^2(Ω)× L^2(Γ),where μ, K >0, β∈ (0,1) are constants, and U=(u,u_Γ) is the solution of the following homogeneous system [left = ]alignat=2 ∂_t u-Δu=0, inΩ×(0, T),∂_tu_Γ - Δ_Γ u_Γ + ∂_νu =0, onΓ×(0, T), u_Γ(x,t) = u_|Γ(x,t),onΓ×(0, T) , (u(·, 0),u_Γ(·, 0))=(u^0,u^0_Γ), onΩ×Γ.In this dynamic case, several new boundary terms occur and should be absorbed. This has been done thanks to the small parameter s introduced in the weight function Φ_s inspired by the Neumman case.Consequently, we obtained the following impulse controllability result: [<cit.>] The system (<ref>) is null approximate impulse controllable at any time T > 0. Moreover, we have the following upper bound for the control cost K_3(T, ε) ≤L_1e^L_2/T-τ/ε^κ,where L_1, L_2 and κ are positive constants depending on Ω and ω. § ALGORITHM FOR CALCULATING HUM IMPULSE CONTROLSIn this section, we propose a numerical algorithm designed for calculating the HUM impulse controls. This method employs a penalized HUM approach along with a CG algorithm. We refer to <cit.> and <cit.> for more details on such a method. §.§ NotationsWe introduce the following notations to encapsulate various boundary conditions and give a general algorithm:𝕃^2:= L^2(Ω),(Dirichlet and Neumann cases),L^2(Ω)× L^2(Γ),(Dynamic case),with the inner product⟨·, ·⟩:=⟨·, ·⟩_L^2(Ω),(Dirichlet and Neumann cases), ⟨·, ·⟩_L^2(Ω)+⟨·, ·⟩_L^2(Γ),(Dynamic case)and the norm·:=·_L^2(Ω),(Dirichlet and Neumann cases), ·_L^2(Ω)× L^2(Γ),(Dynamic case).Any capital letter as ϑ will stand for the couple (υ, υ_Γ) ∈ L^2(Ω)× L^2(Γ). In particular, we will identify (υ, υ_Γ) ∈ L^2(Ω)× L^2(Γ) with υ∈ L^2(Ω) in Dirichlet and Neumann cases. We denote by 𝐁𝐂 one of the boundary conditions: Dirichlet condition, Neumann condition, or Dynamic condition. In each case, the operator 𝐀 stands for the governing linear operator, and e^t 𝐀 designates its associated C_0-semigroup on 𝕃^2. §.§ The HUM impulse controlsLet ε>0 be fixed and let Ψ^0 be an initial datum to be controlled. Without loss of generality, we may assume that Ψ^0=1. We define the cost functional J_ε: 𝕃^2 →ℝ byJ_ε(ϑ^0)=1/2υ(·, T-τ)_L^2(ω)^2+ε/2ϑ^0^2 + ⟨Ψ^0, ϑ(·,T) ⟩,where ϑ is the solution of the homogeneous heat equation with 𝐁𝐂 corresponding to ϑ^0. Note that the functional J_ε is strictly convex, of class C^1, and coercive, i.e., J_ε(ϑ^0) →∞ as ϑ^0→∞. Then the unique minimizer ϑ̃^0_ε∈𝕃^2 of J_ε is characterized by the Euler-Lagrange equation∫_ωυ̃_ε(x, T-τ) z(x, T-τ) d x+ε⟨ϑ̃^0_ε,Z^0⟩+⟨Ψ^0, Z(·,T) ⟩ = 0for all Z^0 ∈𝕃^2, where ϑ̃_ε and Z are respectively the solutions of the homogeneous heat equation with 𝐁𝐂 corresponding to ϑ̃^0_ε and Z^0. 
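Before turning to the operator form of this optimality condition, here is a small Python sketch (our own simplified illustration for the Dirichlet case, not the authors' code) of the penalized HUM construction: it assembles a discrete heat semigroup by the method of lines, forms the Gramian of the impulse-control problem explicitly, solves the penalized normal equation with a conjugate gradient routine, and reports the resulting final-time norm. The data match the experiments reported below, but the dense-matrix approach is only meant for small grids.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import cg

# Discrete Dirichlet heat operator on (0, 1) (method of lines, uniform grid)
Nx = 25
dx = 1.0 / Nx
x = np.linspace(dx, 1.0 - dx, Nx - 1)                      # interior nodes
A = (np.diag(-2.0 * np.ones(Nx - 1)) + np.diag(np.ones(Nx - 2), 1)
     + np.diag(np.ones(Nx - 2), -1)) / dx**2

T, tau, eps = 0.02, 0.01, 1e-2
psi0 = np.sqrt(2.0) * np.sin(np.pi * x)                    # initial datum
P_omega = np.diag(((x > 0.3) & (x < 0.7)).astype(float))   # multiplication by 1_omega

E_T = expm(T * A)                                          # e^{T A}
E_bt = expm((T - tau) * A)                                 # e^{(T - tau) A}
Gram = E_bt @ P_omega @ E_bt                               # Gramian Lambda_tau (symmetric PSD here)

# Penalized HUM: solve (Lambda_tau + eps I) theta = -e^{T A} psi0 by conjugate gradient
theta, info = cg(Gram + eps * np.eye(len(x)), -E_T @ psi0)
h = P_omega @ (E_bt @ theta)                               # impulse control, supported in omega

psi_T = E_T @ psi0 + E_bt @ h                              # controlled state at time T
print("free       ||psi(T)|| =", np.sqrt(dx) * np.linalg.norm(E_T @ psi0))
print("controlled ||psi(T)|| =", np.sqrt(dx) * np.linalg.norm(psi_T))
```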
We introduce the control operator ℬ: 𝕃^2 →𝕃^2 defined by ℬϑ=(1_ωυ,0), and we consider the non-negative symmetric operator (the Gramian operator) Λ_τ: 𝕃^2 →𝕃^2, given by Λ_τϱ=e^(T-τ)𝐀ℬ e^(T-τ)𝐀ϱ. Thus, the HUM impulse control is given by h=ℬ e^(T-τ)𝐀ϑ̃^0_ε, and the identity (<ref>) can be rewritten as (Λ_τ +ε𝐈_𝕃^2) ϑ̃^0_ε = -e^T𝐀Ψ^0, where 𝐈_𝕃^2 denotes the identity operator. To solve the above operator equation, we propose the following CG algorithm. §.§ Numerical experiments Now, we conduct several numerical tests to demonstrate the theoretical findings and to highlight the effectiveness of the above CG algorithm. In all main numerical experiments, we will take the following values T=0.02, τ=0.01, Ω=(0,1), ω=(0.3, 0.7) ⋐ (0,1), and the initial datum to be controlled is given by ψ_0(x)=√(2)sin(π x), x ∈ [0,1]. We employ the method of lines to numerically solve diverse parabolic equations subject to different boundary conditions in Algorithm <ref>. In this approach, we use the uniform spatial grid given by x_j=j Δ x for j=0,…, N_x, with Δ x=1/N_x. Next, we denote u_j(t):=u(t,x_j). The second-order derivative of u is approximated by u_xx(t,x_j) ≈ (u_j-1(t)-2 u_j(t) + u_j+1(t))/(Δ x)^2, j=1,…, N_x-1. The first-order derivatives on the boundary are approximated by u_x(t,0)≈ (u_1(t)-u_0(t))/Δ x and u_x(t,1)≈ (u_N_x(t)-u_N_x-1(t))/Δ x. Thus, it suffices to solve the resulting system of ordinary differential equations. For our computations, we take N_x=25 for the spatial mesh parameter. The initial guess in the algorithm is taken as 𝐟_0=0. We also choose ε=10^-2 and the stopping parameter tol=10^-3 for the plots. §.§ Dirichlet case We plot the uncontrolled and the controlled solutions. The algorithm stops at the iteration number k_*=10. §.§ Neumann case Next, we plot the uncontrolled and the controlled solutions. The algorithm stops at the iteration number k_*=29. §.§ Dynamic case Next, we plot the uncontrolled and the controlled solutions. The algorithm stops at the iteration number k_*=11. By analyzing the previous experiments, some comments and remarks are in order: * From Figures <ref>, <ref> and <ref>, we notice the impact of the impulse controls at time τ=0.01 on the state. * Tables <ref>, <ref> and <ref> show that when we fix the value of the penalization parameter ε, the Dirichlet case requires fewer iterations. The Dynamic case comes afterward with more needed iterations than the Dirichlet case. In contrast, the Neumann case requires more iterations than both previous cases. * The tables also show that the norms ‖Ψ(T)‖ decrease and the norms ‖h‖_L^2(ω) of the impulse controls increase as ε tends to zero. Moreover, for a fixed ε, we have ‖Ψ_D(T)‖<‖Ψ_Dyn(T)‖<‖Ψ_N(T)‖, and ‖h_D‖_L^2(ω)<‖h_Dyn‖_L^2(ω)<‖h_N‖_L^2(ω). These are relevant numerical observations that deserve further theoretical investigation to better understand why the above comparison holds. The numerical simulations show that the HUM algorithm yields accurate results for the numerical approximation of impulse controls at one single instant τ for the heat equation with static boundary conditions (Dirichlet and Neumann) and also with dynamic boundary conditions. The developed algorithm deserves more investigation in the context of discrete systems and their convergence analysis in terms of discrete impulse controls. This will be investigated in future research. [BCKDP] C. Bardos and K. D. Phung, Observation estimate for kinetic transport equations by diffusion approximation, Comptes Rendus Mathematique, 355 (2017), 640–664. [ABWZ] A. Ben Aissa and W.
Zouhair, Qualitative properties for the 1-D impulsive wave equation: controllability and observability, Quaestiones Mathematicae, (2021). [Bo'13] F. Boyer, On the penalised HUM approach and its applications to the numerical approximation of null-controls for parabolic problems, ESAIM: Proc., 41 (2013), 15–58. [Buffephung] R. Buffe and K. D. Phung, A spectral inequality for degenerate operators and applications, C. R. Math. Acad. Sci., 356 (2018), 1131–55. [RBKDP] R. Buffe and K. D. Phung, Observation estimate for the heat equations with Neumann boundary condition via logarithmic convexity, J. Evol. Equ., 22, 86 (2022). [CGMZ'23] S. E. Chorfi, G. El Guermai, L. Maniar and W. Zouhair, Finite-time stabilization and impulse control of heat equation with dynamic boundary conditions, Dyn. Control Syst., (2023), 1–31. [CGMZ'21] S. E. Chorfi, G. El Guermai, L. Maniar and W. Zouhair, Impulsive null approximate controllability for heat equation with dynamic boundary conditions, Math. Control Relat. Fields, 13 (2023), 1023–1046. [CGMZ'22] S. E. Chorfi, G. El Guermai, L. Maniar and W. Zouhair, Logarithmic convexity and impulsive controllability for the one-dimensional heat equation with dynamic boundary conditions, IMA J. Math. Control. Inf., 39 (2022), 861–891. [Du22] Y. Duan, L. Wang and C. Zhang, Quantitative unique continuation for parabolic equations with Neumann boundary conditions, (2022), arXiv:2202.10200. [GL'08] R. Glowinski, J.-L. Lions and J. He, Exact and Approximate Controllability for Distributed Parameter Systems: a Numerical Approach, 117, Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, UK; New York, 2008. [XLSS] X. Li and S. Song, Impulsive Systems with Delays, Springer Singapore, 2022. [MMS'17] L. Maniar, M. Meyries and R. Schnaubelt, Null controllability for parabolic equations with dynamic boundary conditions, Evol. Equ. Control Theory, 6 (2017), 381–407. [pkm] K. D. Phung, Carleman commutator approach in logarithmic convexity for parabolic equations, Math. Control Relat. Fields, 8 (2018), 899–933. [pkdgwyx] K. D. Phung, G. Wang, and Y. Xu, Impulse output rapid stabilization for heat equations, J. Differential Equations, 263 (2017), 5012–5041. [Vo'18] T. M. N. Vo, Construction of a control and reconstruction of a source for linear and nonlinear heat equations, PhD thesis, Orléans University, 2018. [Vo'17] T. M. N. Vo, The local backward heat problem, (2017), arXiv:1704.05314. | http://arxiv.org/abs/2310.18436v1 | {
"authors": [
"S. E. Chorfi",
"G. El Guermai",
"L. Maniar",
"W. Zouhair"
],
"categories": [
"math.OC",
"cs.NA",
"math.AP",
"math.NA",
"93C27, 49N25, 35R12"
],
"primary_category": "math.OC",
"published": "20231027192256",
"title": "Numerical impulse controllability for parabolic equations by a penalized HUM approach"
} |
An Advanced Fuel Efficiency Optimization Model with Fractional Programming Md Isfakul Anam, Clarkson University, Tuyen Vu, Clarkson UniversityMd Isfakul Anam, T. T. Nguyen, and T. Vu with Clarkson University, Potsdam, NY, USA; Emails: [email protected], [email protected], [email protected]. Corresponding Author: T. Vu, Email: [email protected]================================================================================================================================================================================================================================================================================================= Reducing the fuel consumption within a power network is crucial to enhance the overall system efficiency and minimize operating costs. Fuel consumption minimization can be achieved through different optimization techniques where the output power of the generators is regulated based on their individual efficiency characteristics. Existing studies primarily focus either on maximizing the efficiency function or minimizing the operating cost function of the generators to minimize fuel consumption. However, for practical implementation, it becomes imperative to incorporate a function within the optimization framework to represent the fuel consumption rate directly. This study introduces a novel approach by formulating a minimization problem with a sum-of-ratios objective function representing the fuel consumption rate. However, optimization problems with sum-of-ratios objective functions or constraints are extremely challenging to solve because of their strong nonlinearity. To efficiently solve the formulated problem, a fractional programming (FP) approach is adopted in this study. This reformulation technique significantly reduces the solution time of the optimization problem and provides a better solution than nonlinear programming (NLP). In addition, the reformulated problem can also be applied to large-scale systems where the NLP fails to converge. The proposed methodology of this study is tested on the notional MVAC ship system, modified IEEE 30-bus and IEEE 118-bus systems. The results demonstrate that the model successfully minimizes fuel consumption by effectively scheduling the generator and ESS dispatch.Fuel consumption, system efficiency, energy management system, sum-of-ratio problem, fractional programming. [01]P_i,g, Q_i,gReal and reactive power output of i-th generator [02]P_i,l, Q_i,lReal and reactive load at i-th bus [03]P_i,inj, Q_i,injReal and reactive power injection at i-th bus [04]V_i, θ_ikvoltage and voltage angle difference [05]G_ik, B_ikConductance and susceptance of line ik [06]P_ikReal power capacity of line ik [07]DR_i, UR_iDown and up ramp rate of i-th generator [08]P_i^C,max, P_i^D,maxMaximum charging and discharging rate of i-th ESS [09]E_i,bEnergy stored at i-th ESS [10]P_i,b^rOutput power of i-th ESS [11]SOC_iState of charge of i-th ESS [12]αfuel energy density [13]N_gNumber of generator [14]TTotal planning horizon [15]tTime step [16]BNumber of buses§ INTRODUCTION Continuous increments in load demand on power systems due to the growing number of consumers impose a great challenge for the engineers and operators in the field. It is essential to supply the increased load requirements of a power system territory by introducing additional energy sources and/or expanding the capacity of the existing sources <cit.>. Both solutions result in higher fuel costs incurred for operating a large number of distributed generators. 
Furthermore, the survival period of stand-alone power networks, such as islanded microgrids or ship electrical power systems, largely depends on the fuel consumption of the distributed generators. As a result, a state-of-the-art system efficiency model is required to optimize the fuel consumption of a system. System efficiency optimization techniques refer to minimizing the fuel consumption rate by regulating the output power of the generators within the system. Over the past decades, a significant number of research has been conducted to improve system efficiency using various optimization methods.One notable contribution is presented in <cit.>, where the authors propose a novel approach to analyze the characteristics of the efficiency function, leading to the determination of the maximum total power supply and overall efficiency for such systems. In <cit.>, a genetic algorithm modeling framework is presented where the optimization problem involves minimizing the thermal cost function. The cost function is constructed based on the power generation characteristics of the hydropower plant. A similar approach can be found in <cit.>, where the daily optimal generation scheduling problem (DOHGSB) is solved by implementing a unique differential evolution algorithm. The objective function is formulated by analyzing the hydropower plant characteristics or input-output curve.In <cit.>, the authors introduce a novel distributed algorithm to maximize system efficiency. A fourth-order and a third-order efficiency function for the main and auxiliary power generation module (PGM), respectively, are optimized using a distributed crow search algorithm (DCSA).The approaches in <cit.>-<cit.> focus on optimizing the efficiency function or the generation characteristics of the generators, which can be complex to apply to systems consisting of generators with different ratings. Also, since the generator efficiency function is not a straightforward representation of the system's fuel consumption, it is impractical to optimize the efficiency function with an objective to improve fuel efficiency. Some literature can be found where the optimization problem is formulated to minimize the fuel consumption rate or fuel cost directly. In <cit.>, four power-sharing schemes are presented to establish a unit commitment strategy to minimize fuel costs. However, instead of considering the generator efficiency function, the authors utilized a typical fuel consumption characteristic curve, which largely depends on the capacity and model of a generator.The authors in <cit.> implement a recursive method to estimate a second-order polynomial model of specific fuel consumption. The model is later used to determine the optimal load distribution between the different generators. A similar approach is found in <cit.>, where dynamic programming is used to solve the formulated problem. However, both optimization models are suitable for simple power networks since they don't include AC power flow or energy storage models.A minimum hourly fuel consumption curve interpolated by a quadratic equation is used in <cit.> to minimize the fuel consumption of hybrid electric vehicles. 
This method can not be extended for transmission or distribution systems since most power system constraints are not included in the model.In <cit.>, the authors utilize a 3D map of brake-specific fuel consumption (BSFC) in terms of the rotating speed of the drive and generated mechanical torque to determine the minimum point of diesel engine (DE) fuel consumption. Authors in <cit.> also apply a similar method where a speed vs. power curve of the diesel engine (DE) is exploited to achieve minimum fuel consumption conditions. Nonetheless, these methods have two major drawbacks: they are implemented particularly for the DC ship systems, and the strong nonlinearity of the BSFC curve increases the complexity of determining the optimal operation.Reduction in fuel cost can also be achieved indirectly through economic dispatch (ED) optimization problems <cit.>, <cit.>. In the ED problem, a polynomial objective function (generally in quadratic form) representing the cost of the generator dispatch is optimized to supply the demand most economically. However, since the objective function of the ED problem does not represent the fuel consumption rate, these formulations are unable to accomplish the highest system efficiency. They are also inconvenient for long-term planning of fuel usage and impractical to apply to systems where achieving optimal fuel consumption rate is the main goal. Although the strategies discussed above for improving overall system performance and reducing fuel cost have their own merits, the key to minimizing fuel consumption lies in introducing a function that accurately represents the fuel consumption rate. By directly representing fuel consumption in the optimization process, researchers can effectively tackle the core challenge of reducing fuel usage and achieving greater energy efficiency in the system. Therefore, future studies may benefit from exploring methodologies considering this crucial aspect when addressing system efficiency optimization.This paper presents a novel approach by introducing a unique sum-of-ratios objective function, which directly represents the fuel consumption rate. Unlike conventional methods that optimize the polynomial generator efficiency function or cost of operation, this sum-of-ratios formulation offers a practical and more efficient solution. However, solving multiple ratio optimization problems, known as Fractional Programming (FP), has been proven to be NP-hard <cit.>. The convergence of these problems with established nonlinear optimization methods can take an extensive amount of time. In addition, when the sum-of-ratios problem involves more than 20 ratios, the current approaches struggle to find a solution within a reasonable timeframe <cit.>,<cit.>. Due to these challenges, directly solving the sum-of-ratios optimization problem for energy management systems (EMS) is not feasible. Especially this method will be inapplicable for large-scale systems with a high number of variables. It necessitates the development of an efficient reformulation and solution technique to address multiple ratio problems effectively. Finding innovative approaches to tackle these difficulties will be crucial in making the proposed sum-of-ratios method a practical and scalable solution for optimizing fuel consumption in real-world energy systems. A substantial body of literature exists on Fractional Programming (FP), but the emphasis has primarily been on single-ratio problems. 
Among the renowned reformulation techniques, the Charnes-Cooper Transform, <cit.> <cit.> is notable for proposing an algorithm to solve single ratio linear FP problems by introducing two new variables and converting the fractional problem into a linear problem. Another classical technique, Dinkelbach's Transform <cit.>, reformulates the single ratio problem using a new auxiliary variable updated iteratively until convergence is achieved. <cit.> presents a formulation of a linear problem equivalent to a single ratio linear FP problem where some duality properties are used to prove the equivalence. For quadratic FP problems, where both the numerator and denominator are quadratic functions, a new method called the decomposition fractional separable method is proposed in <cit.> using linear programming techniques.An alternative approach to solving single-ratio quadratic FP is outlined in <cit.>, employing Taylor series expansion for effective reformulation.The literature discussed in references <cit.>-<cit.> primarily focused on solving single-ratio FP problems and cannot be directly extended to handle multi-ratio problems. Addressing multiple ratio problems, as encountered in the sum-of-ratios function, remains a challenge that requires innovative and efficient reformulation and solution techniques. However, in <cit.>, the authors proposed an extension of Dinkelbach's Transform specifically tailored to address multi-ratio FP problems. Nonetheless, this method was later refuted by Falk and Palocsay <cit.>, who demonstrated its limitations through a numerical example.To find the globally optimal solution for the sum-of-ratios problem, <cit.> introduced a practical method that involves solving a sequence of convex programming problems. In <cit.>, a convexification strategy was employed to decompose fractional terms into convex and concave components. Then, a piecewise linearization technique was applied to approximate the concave terms effectively. Additionally, <cit.> proposed a quadratic transform to tackle concave-convex multiple ratio minimization problems. In the case of generalized convex multiplicative functions, a reformulation technique was presented in <cit.>, where the main problem was reformulated as a concave minimization problem with 2p variables. This reformulation technique could also be applied to sum-of-ratios FP problems if the multiplicative terms were replaced with a convex over a concave function <cit.>. In our study, the objective function is defined as a sum-of-ratios minimization problem with non-negative-convex numerator and positive-concave denominator terms. Due to this specific form of the problem, an appropriate algorithm should be selected to solve the formulated multiple ratios FP problem effectively. As a result, the reformulation technique presented in <cit.> is adopted in this paper. By leveraging this reformulation technique, the complexities of the sum-of-ratios problem can be effectively addressed, and an optimized solution can be found with a feasible convergence time. The contributions of this paper are the following:* A novel fractional objective function is introduced in this literature, which directly represents the fuel consumption rate of the generators. Unlike typical system efficiency optimization problems that use the efficiency function or the operating cost function as the objective, this unique formulation directly accounts for fuel consumption. 
This approach proves to be more efficient and practical compared to previous studies since it directly targets the core issue of minimizing fuel usage and improving overall system efficiency.* To address the optimization problem in this study, the sum-of-ratios fractional programming (FP) algorithm is employed. to the best of our knowledge, this literature represents the first application of the FP method to solve the optimization problem for EMS efficiently. The reformulation technique with FP can also be applied to different power or communication system research where sum-of-ratios functions are used. * The successful application of the FP algorithm, combined with the convex relaxation of nonlinear constraints, demonstrates that the proposed model is suitable for handling large-scale systems. This capability is exemplified through the model's effective implementation on the IEEE 118-bus system. By demonstrating its applicability to such a complex and extensive system, the paper establishes the scalability of the proposed approach for real-world energy management scenarios. The remainder of the paper is organized as follows: the fuel efficiency problem formulation with sum-of-ratios objective function and its convex reformulation are presented in Section <ref>. In Section <ref>, the solution algorithm for the reformulated problem is described. The results for the notional MVAC ship system, IEEE 30-bus, and IEEE 118-bus system, accompanied with the performance comparisons, are demonstrated in Section <ref>. Finally, Section <ref> represents the conclusion and future work.§ PROBLEM FORMULATION §.§ Optimization Model for System Efficiency This section presents a unique sum-of-ratios objective function for the optimization problem that directly represents the fuel consumption of the generators. The objective of the minimization problem is to minimize the fuel consumption rate of the generators over the planning horizon, which will maximize the system efficiency. The fuel consumed by a generator can be expressed by taking into account the generator's efficiency and the output power it produces. The objective function is the following:minimizef = 1/α∑_t=0^T∑_i∈ N_gP_i,g^t/η_i,gΔ t.where, generator efficiency, η = a_i p_i,g^2 + b_i p_i,g + c_i with a, b, and c are generator specific constants, α is the fuel energy density (MWh/L),p_i,g is the per-unit output of the i-th generator: p_i,g = P_i,g/P_i,b;P_i,g^t is the generator output power at time t, and P_i,b is the base power of i-th generator. N_g = total number of generators, T = planning horizon, t = each time period. The following active and reactive power balance constraints are associated with the system: P_i,inj = P_i,g - P_i,l Q_i,inj = Q_i,g - Q_i,l where P_i,inj = ∑_k∈ B V_i V_k (G_ik cosθ_ik + B_ik sinθ_ik)Q_i,inj = ∑_k∈ B V_i V_k (G_ik sinθ_ik + B_ik cosθ_ik) where (<ref>) and (<ref>) are the AC power flow constraints for the system.The following constraints should be included in the problem formulation to maintain the operational limits of the system:P_i,g^min≤ P_i,g^t ≤ P_i,g^max,Q_i,g^min≤ Q_i,g^t ≤ Q_i,g^max,V_i^min≤ V_i^t ≤ V_i^max,θ_i^min≤θ_i^t ≤θ_i^max,-P_ik≤ P_ik^t ≤ P_ik,-Q_ik≤ Q_ik^t ≤ Q_ik,-DR_i≤ P_i,g^t+1 - P_i,g^t≤ UR_i. 
where (<ref>) and (<ref>) represent the generators' real and reactive power generation limits, (<ref>) and (<ref>) are the voltage and voltage angle limits, (<ref>) and (<ref>) are the line limits for real and reactive power, and (<ref>) is the ramp rate limit.The energy storage system (ESS) plays a vital role in minimizing the fuel consumption by the generators. The following Energy Storage System (ESS) constraints are included in the optimization problem:E_i,b^t = E_i,b^t-1 - η_b P_i,b^r,t t-P_i^C,max≤ P_i,b^r,t≤ P_i^D,max,∑_t=0^t P_i,b^r,t = 0,SOC_i^min≤ SOC_i^t≤ SOC_i^max, where (<ref>) indicates the energy conservation constraint, (<ref>) is the limit for charging or discharging rate, and (<ref>) is the state of charge (SOC) limit of the ESS. For the ESS, although the SOC can vary from 0 to 1 (0% to 100%), fully discharging can damage the battery permanently and shorten the life cycle of the battery <cit.>. In this paper, the minimum SOC is selected as 0.2 (20%). Eq.(<ref>) ensures that the sum of the total charging and discharging power over a planning period will be zero, which helps the system to recharge the battery before the next planning cycle. §.§ Reformulation of the Objective Function The objective function (<ref>) for the formulated optimization problem is a highly nonlinear sum-of-ratios case. In this section, the objective function is reformulated to a convex function using a technique used for solving generalized convex multiplicative problems. Later, the convex minimization method is utilized to solve the reformulated problem iteratively.The general convex multiplicative minimization problem has the following structure:minimizeh(x) + ∑_i=1^p f_i(x)g_i(x) subject tox ∈ Xwhere h, f_i(x) and g_i(x) for all i are convex functions and X⊂ R^n is a convex set. If h(x)=0, f_i(x)=A_i(x), and g_i(x)=1/B_i(x), (<ref>) will be in the following form:minimizeH(x) = ∑_i=1^mA_i(x)/B_i(x) subject tox ∈ X which is a sum-of-ratios problem, where A_i(x) are non-negative, convex and B_i(x) are positive, concave functions for all i.The authors in <cit.> defined the following problem by introducing 2m auxiliary variables ζ_i and β_i, where i=1,2,3,....m: minimizeF(x, ζ, β) = 1/2∑_i=1^m [ζ_i (A_i(x))^2 + β_i (B_i(x))^2] subject tox ∈ X ζ_i β_i ≥ 1 (ζ, β) > 0where ζ = (ζ_1, ζ_2, ....ζ_m) and β = (β_1, β_2, ..... β_m). It can be proved that, if (x^*, ζ ^*, β ^*) is an optimal solution of (<ref>), then x^* will be an optimal solution of (<ref>) and H(x^*) = F(x^*, ζ ^*, β ^*) <cit.>.As a result, the optimization problem in section <ref>(A) can be written as the following problem:minimizef(P, ζ, β) = 1/2 [∑_i=1^m (ζ_i (P_i,g^t)^2 + β_i η_i,g^2)] Δ tsubject to(<ref>)-(<ref>), ζ_i β_i ≥ 1,(ζ, β) > 0,§.§ Convex Relaxation TechniqueThe convex optimization methods can only be applied to problems where the objective function and all constraints are finite and convex. Although the reformulated problem in section <ref>(A) has a convex objective function for a fixed set of η_i and β_i, several constraints still have nonlinearity. In this section, the nonlinear power flow constraints (<ref>), (<ref>), and line flow constraints (<ref>),(<ref>) are replaced with the following linear and quadratic constraints:P_i,inj = √(2)u_i G_ii + ∑_k ∈ B(G_ikW_R_ik + B_ikW_I_ik),Q_i,inj = -√(2)u_i B_ii + ∑_k ∈ B(G_ikW_I_ik - B_ikW_R_ik),P_ik = √(2)u_i G_ik - (G_ikW_R_ik + B_ikW_I_ik),Q_ik = -√(2)u_i B_ik + (B_ikW_R_ik - G_ikW_I_ik). W_R_ik^2 + W_I_ik^2 ≤ 2u_i u_k.θ_ik = tan^-1( W_I_ik/W_R_ik). 
Here, (<ref>), (<ref>) are the linear real and reactive power flow equations and (<ref>), (<ref>) are the linear line flow equations. The relationship between the convex variables u_i, W_I_ik, and W_R_ik is defined by the equations (<ref>) and (<ref>). Since (<ref>) is still nonlinear, a Taylor series expansion can be used to linearize the equation: tan^-1(W_I_ik^(q)/W_R_ik^(q)) = θ_ik + W_I_ik^(q)/((W_R_ik^(q))^2+(W_I_ik^(q))^2) W_R_ik - W_R_ik^(q)/((W_R_ik^(q))^2+(W_I_ik^(q))^2) W_I_ik, where the higher-order terms are neglected and (W_R_ik^(q), W_I_ik^(q)) is the initial estimate. A detailed description of this technique can be found in <cit.>. The final problem formulation will be as follows: minimize f(x, ζ, β) subject to (<ref>), (<ref>), (<ref>)-(<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), where P_i,inj, Q_i,inj, P_ik, and Q_ik are defined by (<ref>), (<ref>), (<ref>), and (<ref>), respectively. § SOLUTION TECHNIQUE In this section, an iterative method is described to solve the reformulated problem. For a fixed set of (ζ, β), let us consider the following sub-problem of (<ref>): minimize F(x; ζ, β) = 1/2∑_i=1^m [ζ_i (A_i(x))^2 + β_i (B_i(x))^2]. Equation (<ref>) can be solved using any standard convex optimization technique. If the optimal solution of (<ref>) is x^*(ζ, β), then for fixed x^*, (<ref>) reduces to the following problem in the 2m variables (ζ, β): minimize F_aux(ζ, β) = 1/2∑_i=1^m [ζ_i (A_i(x^*))^2 + β_i (B_i(x^*))^2] subject to ζ_i β_i ≥ 1, (ζ, β) > 0, where F_aux denotes the auxiliary problem of F. Equations (<ref>) and (<ref>) are solved iteratively until convergence is achieved. The following algorithm is used to solve the fractional optimization problem. In this paper, the MOSEK optimization toolbox is used to solve the formulated problem <cit.>. The optimization problem is transformed into the conic quadratic format to fit into MOSEK. MOSEK supports two types of quadratic cones: * General quadratic cones: Q^n = {x ∈ R^n: x_0≥√(∑_j=1^n-1 x_j^2)} * Rotated quadratic cones: Q_r^n = {x ∈ R^n: 2x_0 x_1≥∑_j=2^n-1 x_j^2, x_0 ≥ 0, x_1 ≥ 0} All the quadratic parts of the reformulated minimization problem in section <ref>(C) are replaced with rotated quadratic cones and corresponding linear equations. As a result, the transformed problem can be solved using the MOSEK solver. § CASE STUDIES In this section, the proposed system efficiency model is tested with a notional 12-bus MVAC ship system, a modified IEEE 30-bus, and an IEEE 118-bus system. Each system consists of multiple generators, distribution lines, ESS, and loads at different buses. The load data is generated using the demand pattern of the Real-time Dashboard, NYISO <cit.>. The load profile for different systems can be observed in Fig. <ref>. This paper considers a 24-hour time horizon while solving the optimization problem, where each time step is 1 hour. However, any time horizon and length of time step can be selected based on the system requirement. §.§ Notional 12-Bus MVAC Ship System The notional four-zone 12-bus MVAC ship system <cit.> (shown in Fig. <ref>) consists of two main gas turbine generators (MTG) and two auxiliary gas turbine generators (ATG). The generator parameters can be observed from TABLE <ref>.
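As an aside to the solution technique described above, the following self-contained Python sketch (added by us; it is a toy one-dimensional example, not the EMS model, and a grid search stands in for the MOSEK conic solver) illustrates the alternating scheme on a small sum of ratios ∑_i A_i(x)/B_i(x). Following the general equivalence stated earlier with f_i = A_i and g_i = 1/B_i, the x sub-problem minimizes ½∑_i[ζ_i A_i(x)^2 + β_i g_i(x)^2], and for fixed x the auxiliary variables have the analytic minimizer ζ_i = g_i/A_i, β_i = A_i/g_i on the boundary ζ_iβ_i = 1.

```python
import numpy as np

# Toy sum-of-ratios problem on [0, 1]: A_i convex and non-negative, B concave and positive
a = np.array([0.2, 0.5, 0.8])
A = lambda x: (x - a[:, None]) ** 2 + 0.1      # A_i(x); broadcasts to shape (3, len(x))
B = lambda x: 1.0 + x * (1.0 - x)              # common concave denominator B(x) > 0

grid = np.linspace(0.0, 1.0, 2001)
zeta, beta = np.ones(3), np.ones(3)

for it in range(20):
    # x sub-problem for fixed (zeta, beta); grid search replaces the conic solver here
    F = 0.5 * (zeta[:, None] * A(grid) ** 2 + beta[:, None] / B(grid) ** 2).sum(axis=0)
    x_star = grid[np.argmin(F)]
    # Auxiliary update for fixed x_star (analytic solution of the toy auxiliary problem)
    Ai = A(np.array([x_star]))[:, 0]
    gi = 1.0 / B(np.array([x_star]))           # g_i = 1 / B_i
    zeta, beta = gi / Ai, Ai / gi

print("x* =", round(float(x_star), 4), " sum of ratios =", round(float((Ai * gi).sum()), 4))
```

In the paper's setting, the sub-problems are instead handled through MOSEK's conic quadratic format over the full set of network variables, but the alternation between the convex sub-problem and the auxiliary update is the same.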
The ESS data are shown in TABLE <ref>.The proposed optimization model was run for the notional MVAC ship system; the simulation time was 34.414s, taking 178 iterations to converge. The output power generation can be observed in Fig. <ref>. The system is likely to dispatch the ATGs more than the MTGs to improve the overall efficiency since the capacities of the ATGs are lower than the MTGs. The ESS charging and discharging schedule and the SOC of the ESS are shown in Figs. <ref> and <ref>, respectively.§.§ IEEE 30-bus SystemThe IEEE 30-bus system <cit.> has 6 generator buses, 16 load buses, and 42 transmission lines. In addition, the system is modified by including six energy storage systems (ESS) at buses 5, 11, 15, 19, 23 and 27. The proposed model was then tested for the IEEE 30-bus system, and the observed convergence time was 91.728s for 476 iterations. The generator output schedule is shown in Fig. <ref>. Although the system has six generators, only three produced output during the simulation. Generator graphs with zero output are not included in the figure.The state of charge (SOC) of the ESS is shown in Fig. <ref>.§.§ IEEE 118-bus SystemThe IEEE 118-bus system <cit.> has 21 generator buses, 113 load buses, and 179 transmission lines. In addition, the system is modified by including 16 energy storage systems (ESS). In this study, the IEEE 118-bus system is the largest system where the system efficiency model is applied. The model was run successfully with a reasonable convergence time of 316.22s. Fig. <ref> represents the output power of the generators. For the IEEE 118-bus system, only 9 generators generated power. Only the generators with output are shown in the figure.Convergence curves of both objective and tolerance values are shown inFig. <ref>. The number of iterations required to converge depends on the number of variables in the system. Since the IEEE 118-bus system has the highest number of variables among the tested systems, it takes the most iteration to converge.The summary of the results for all systems is shown in TABLE <ref>. The simulations were run on Intel Core i7-10700 CPU, 2.90 GHz processor with 32.0 GB RAM.§.§ Performance Comparison In this section, a comparative analysis is conducted of the performance of the proposed fuel efficiency model in two distinct domains:* Comparison in terms of convergence time: This comparison indicates that the model proposed in this paper can be solved efficiently within a reasonable convergence time. As a result, the model can be applied to large-scale systems where the nonlinear programming model takes excessive time to converge.* Comparison in terms of fuel consumption: The proposed model consumes significantly less amount of fuel compared to the other models with different generator dispatches. The results indicate that the proposed model is the most efficient and optimal.§.§.§ Comparison in terms of convergence timeThe convergence time of NLP and FP models is compared in this subsection. The nonlinear optimization model from section <ref>(A) is solved with MATLAB NLP (fmincon function) for the notional MVAC ship system. The solution took an extensive time (more than 8 hours) to converge for the notional MVAC ship system with 24-time steps where the convergence time of the FP model was only 34.414s for the same system. As a result, the NLP makes the solution procedure impractical to apply to extensive systems. 
Moreover, the fuel consumed during the operation was 6.53×10^4L, higher than the fuel consumed with the proposed FP model. The FP model is clearly more advantageous than the nonlinear programming optimization, even with a higher number of variables.The performance comparison between FP and NLP is shown in TABLE <ref>. §.§.§ Comparison in terms of fuel consumptionThe proposed system efficiency model is compared with three other models (as listed in TABLE <ref>) to demonstrate the fuel efficiency. The difference between the models is in the generator dispatch allowed during the simulation. The number and type of generators allowed for each model during the operation are indicated by the 'Dispatch' column. The comparison results are shown in Fig. <ref>. It can be observed that the proposed model has the lowest fuel consumption among all models. Initially, all models consume almost similar amounts of fuel. However, the difference in fuel consumption increases with the number of time steps. This observation highlights the superior efficiency of the proposed model in comparison to the models examined in this section. § CONCLUSION This study has addressed the challenge of the fuel consumption minimization problem to enhance the system efficiency and reduce the operating cost of the power generation units. The traditional approaches typically focus on maximizing the efficiency function or minimizing the generator cost function to achieve optimal fuel consumption for the system. However, these approaches do not account for the fuel consumption rate directly and are impractical to implement in real-world systems where optimizing fuel use is the objective. In addition, existing studies that have used the fuel consumption curve to formulate the optimization problem have numerous limitations, including incompatibility to apply to large AC systems.As a result, it is crucial to incorporate a function that directly represents fuel consumption to enhance the system's fuel efficiency. This study introduced a novel objective function based on a sum-of-ratios approach, providing a straightforward representation of the fuel consumption rate. The sum-of-ratios problem was effectively solved by leveraging a fractional programming (FP) reformulation technique, resulting in successful fuel consumption minimization. Moreover, the low convergence time of the solution makes the model suitable for large-scale systems. While the model stands out in its uniqueness and effectiveness compared to other approaches, future research will concentrate on implementing a distributed algorithm to enhance scalability for larger and more complex systems. § ACKNOWLEDGEMENT The information, data, or work presented herein was partly funded by the U.S. Office of Naval Research under the award numbers N000142212239 and N000142112124. IEEEtran | http://arxiv.org/abs/2310.17913v1 | {
"authors": [
"Md Isfakul Anam",
"Tuyen Vu"
],
"categories": [
"math.OC"
],
"primary_category": "math.OC",
"published": "20231027061518",
"title": "An Advanced Fuel Efficiency Optimization Model with Fractional Programming"
} |
Machine Learning Infused Distributed Optimization for Coordinating Virtual Power Plant Assets Meiyi Li, Student Member, IEEE, Javad Mohammadi, Senior Member, IEEE2023-10-25 =================================================================================================== Multiple Instance Learning (MIL) is a sub-domain of classification problems with positive and negative labels and a “bag” of inputs, where the label is positive if and only if a positive element is contained within the bag, and otherwise is negative. Training in this context requires associating the bag-wide label to instance-level information, and implicitly contains a causal assumption and asymmetry to the task (i.e., you can't swap the labels without changing the semantics). MIL problems occur in healthcare (one malignant cell indicates cancer), cyber security (one malicious executable makes an infected computer), and many other tasks. In this work, we examine five of the most prominent deep-MIL models and find that none of them respects the standard MIL assumption. They are able to learn anti-correlated instances, i.e., defaulting to “positive” labels until seeing a negative counter-example, which should not be possible for a correct MIL model. We suspect that enhancements and other works derived from these models will share the same issue. In any context in which these models are being used, this creates the potential for learning incorrect models, which creates risk of operational failure.We identify and demonstrate this problem via a proposed “algorithmic unit test”, where we create synthetic datasets that can be solved by a MIL respecting model, and which clearly reveal learning that violates MIL assumptions. The five evaluated methods each fail one or more of these tests. This provides a model-agnostic way to identify violations of modeling assumptions, which we hope will be useful for future development and evaluation of MIL models. § INTRODUCTION In Multiple Instance Learning (MIL) we have a dataset of N labeled points, which we will represent as 𝒳 with associated labels y ∈{-1, 1} for the negative and positive labels respectively. As originally described, the MIL problem involves each datum X_i ∈𝒳 being a bag of multiple instances, where X_i = {𝐱_1, 𝐱_2, …, 𝐱_n_i} is a bag of n_i instances. Each instance 𝐱_j ∈ X_i is a D-dimensional vector, and every bag X_i may have a different total number of items n_i. Given instant level classifier h(·), most MIL algorithms work by predicting ŷ_i = max_∀ x_j ∈ X_i h(x_j). As originally described, the positive/negative label of each bag X_i has a special meaning. By default, a bag's label is negative (y=-1). The label of a bag will become positive (y=1) if and only if a positive instance 𝐱_𝐣 is present inside the bag, at which point the entire bag's label becomes positive. Because instance-level labels mapping each 𝐱_𝐣→ y ∈{-1, 1} are not given, the MIL problem is to infer the instance-level labels from the whole-bag level labeling. This implies a critical asymmetric nature to the given labels and how they must be handled. A value of y=-1 tells us that all instances are negative in the given bag, whereas a label of y=1 tells us that one or more instances have a positive label. For this reason, swapping the positive and negative labels in a MIL problem is not semantically meaningful or correct, whereas, in a standard classification problem, the labels can be interchanged without altering the semantics of the learning task. 
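To make the bag-level semantics concrete, the following minimal Python sketch (our own illustration, not the paper's benchmark construction) generates a tiny synthetic dataset satisfying the standard MIL assumption and labels bags with the max-over-instances rule ŷ_i = max_{x_j ∈ X_i} h(x_j); a hand-fixed linear score stands in for a learned instance-level classifier h.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bag(positive, n_items=10, dim=2):
    """Standard MIL generator: negative instances come from N(0, I); a positive
    bag additionally contains at least one instance from the positive region N(3, I)."""
    bag = rng.normal(0.0, 1.0, size=(n_items, dim))
    if positive:
        n_pos = int(rng.integers(1, 3))                # one or two "witness" instances
        bag[:n_pos] = rng.normal(3.0, 1.0, size=(n_pos, dim))
    return bag

def h(x):
    """Stand-in instance-level classifier: positive score only near the positive region."""
    return float(np.tanh(x.sum() - 3.0))

def bag_prediction(bag):
    """MIL decision rule: a bag is positive iff at least one instance scores positive."""
    return 1 if max(h(x) for x in bag) > 0 else -1

bags = [make_bag(positive=(i % 2 == 0)) for i in range(10)]
labels = [1 if i % 2 == 0 else -1 for i in range(10)]
print("true labels :", labels)
print("predictions :", [bag_prediction(b) for b in bags])
```

Under the max rule, removing instances from a bag can never turn a negative prediction into a positive one, which is the kind of invariant the algorithmic unit tests described later are designed to probe.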
The MIL problem occurs with frequency in many real-world applications, in particular in the medical community where the presence of any abnormal cell type (i.e., instance) is the confirmative indicator for a larger organism's disease (i.e, bag and label). As the MIL problem implies, and the medical example makes explicit, the MIL model has an implicit casual assumption: the right combination of positive indicators dictate the output label, and so the MIL model is both a valuable inductive bias toward the solution and a guard against physically implausible solutions. Algorithms that fail, or intentionally forgo, the MIL constraints may appear to obtain better accuracy "in situ" (i.e., the lab environment). But if it is known that the MIL assumption is true, ignoring it creates a significant risk of failure to generalize "in vivo" (i.e., in real production environments). In the clinical context, this is important as many ML algorithms are often proposed with superior in situ performance relative to physicians <cit.>, but fail to maintain that performance when applied to new clinical populations <cit.>. In this case, respecting underlying MIL properties eliminates one major axis of bias between situ and vivo settings and higher confidence in potential utility. In the cyber security space, respecting the MIL nature eliminates a class of "good word" style attacks <cit.> where inconsequential content is added to evade detection, an attack that has worked on production anti-virus software <cit.>. These reasons are precisely why MIL has become increasingly popular, and the importance of ensuring the constraints are satisfied. Notably, this creates a dearth of options when more complex MIL hypotheses are required, as CausalMIL and mi-Net succeed by restricting themselves to the Standard MIL assumption. The creation of MIL models that satisfy this, and other more complex hypotheses, are thus an open line of research that would have potentially significant clinical relevance. Similarly, users with more niche MIL needs may desire to more thoroughly test their models respect the constraints critical to their deployment. Our work has demonstrated that many articles have not properly vetted the more basic MIL setting, and so we suspect other more complex MIL problems are equally at risk.Our work contributes the identification of this issue, as well as a strategy to avoid repeat occurrences, by developing algorithmic unit tests where a synthetic dataset is created that captures specific properties about the desired solution. The test will fail if an invariant of the algorithm's solution is not maintained, as summarized in <ref>. Such failures indicate that a MIL method is not properly constrained, and the learning goal is not being achieved. We construct three such datasets for the MIL problem, which can be reused by any subsequent MIL research to mitigate this problem. Based on these results, we would suggest practitioners/researchers begin with CausalMIL and mi-Net as a solid foundation to ensure they are actually satisfying the MIL hypothesis, and thus avoiding excess risk in deployment.This paper is organized as follows. In <ref> we will review broadly related works, including prior work in non-deep-MIL and deep-MIL literature. It is in this related work we will denote the baseline algorithms we test, in particular the five deep-MIL models that form the foundation of most current deep-MIL research, and a sixth deep-MIL method that is little-known but does pass our tests. 
Next in <ref> we will define three algorithmic unit tests for MIL models. The first tests the fundamental MIL assumption that all models must respect, and the second and third tests extend to a generalized version of the MIL problem known as “threshold” MIL. Prior deep-MIL works might tacitly assume they can tackle the generalized MIL, but make no formal specification of the types of MIL models they tackle.Then we apply our tests to six deep-MIL and seven older Support Vector Machine based MIL models in <ref>, demonstrating how different algorithms pass and fail different unit tests. In doing so we provide hard evidence that the foundations of most current deep-MIL works are invalid, and thus dangerous to use in any case where the MIL assumption is used for casual or clinically relevant constraints. For example, although a cancer diagnosis should occur only because cancer was detected, a non-MIL model could learn the absence of something unrelated as a false signal, causing it to overfit. In addition, we discuss cases where a known non-MIL algorithm still passes the unit test, prompting a discussion on how unit tests should be used for invalidation, not certification. Finally, we conclude in <ref>. § RELATED WORKConcerns about reproducibility within the fields of machine and deep learning have increased in recent years. Prior works have studied reproducibility issues with respect to dataset labels/integrity <cit.>, comparison methodology, and making conclusions on improvement <cit.>,discrepancies between math and floating-point precision <cit.>, discrepancies between code and paper, <cit.>, and false conclusions in repeatability <cit.>. We note that none of these prior efforts on reproducibility would have identified the MIL assumption violation that we identify in this work. Our situation is a different aspect of reproducibility in that the methods under test can be reproduced/replicated, but the methods themselves fundamentally are not designed to enforce the modeling assumptions, and no testing was done to ensure that they do. By developing tests meant to elicit certain and specific behaviors of a MIL model, we show how unit tests may be developed for an algorithm in the form of synthetic datasets. Within the reproducibility literature, we believe the work by <cit.> is the most similar to ours, where they develop a mathematical framework for characterizing the reproducibility of an optimization procedure when initialization and gradient computations are not exact. This is motivated by the fact that proofs of optimization do not account for floating point precision in the majority of cases, so specialized domain tests can be useful. A key point is that having source code is helpful, but does not confer correctness of the property of interest <cit.>. In contrast, our work is more empirical in that we actually implement our proposed tests, and our tests are born not from a mismatch between math and implementation but from the observation that prior works have neglected the mathematical work to ensure their methods follow the MIL assumptions. Other relevant works in reproducibility have looked at methodological errors <cit.>.The MIL problem bears resemblance to a niche set of defenses used within the malware detection literature. To defend against adversarial attacks, “non-negative” <cit.> or “monotonic” <cit.> models were developed where features can only be positive (read, malicious) indicators, and by default, all files would be marked negative (benign) absent any features. 
This is similar to the MIL model assumption that there is no positive response unless a specific instance is present, and indeed, MIL approaches have been used to build malware detectors that are interpretable and not susceptible to attack <cit.>.§.§ Relevant Multi-Instance Learning Work An explosion of interest in MIL literature has occurred due to its relevance in medical imaging and other tasks where the MIL assumption aligns with clinically relevant or important physical/causal constraints of the underlying system. Shockingly, we find much of the literature does not test or ultimately respect this core MIL assumption, resulting in models that are at risk of over-fitting their training data and learning clinically/physically invalid solutions. Simulated benchmarks are common in MIL literature, but focus primarily on the Standard formulation and accuracy <cit.>. <cit.> built synthetic benchmarks of MIL tasks, but did not formalize what kinds of MIL tasks or attempt to check if a model was violating the underlying generative MIL hypothesis[Two tasks are Standard MIL, one is Threshold MIL, and a fourth is indeterminate but closest to the Generalized MIL of <cit.>]. The key difference of our work is to create synthetic datasets to test that a model respects the MIL assumptions, rather than benchmark accuracy.We will first review the older, predominantly Support Vector Machine (SVM) history of MIL models that we will test in this work. Then we consider the more recent deep learning counterparts. §.§.§ Historical Non-Deep MILIssues with under-specification of the MIL problem had been previously identified in the seminal survey of <cit.>, who synthesized many implicit MIL extensions into a set of generalized and well-specified MIL types. As noted by this prior work, many MIL papers from this time period do not provide proof or attempt to enforce the MIL assumption.Thus while they may be exploring a broader scope of the MIL hypothesis space, the developed solutions may still fundamentally not satisfy the definition of any MIL model. We will briefly review some significant non-deep MIL models that we include as comparison points, and their status with respect to the MIL assumption. Most notably the mi-SVM and MI-SVM algorithms <cit.> are correct by construction to the standard MIL model that we will discuss further in <ref>. The MI-SVM in particular introduces the idea of a “witness”, where the bag label is inferred from a singular maximum-responding instance, thus incorporating the standard MIL assumption.SIL is intentionally MIL violating by construction <cit.>. NSK and STK algorithms <cit.> were previously recognized to not abide by the MIL hypothesis <cit.>, even though the paper includes formal proofs on the learning theory, the MIL constraints were neglected. Not analyzed previously, we also include two additional models. Firstly, the MissSVM <cit.>, which uses a semi-supervised SVM approach that uses the “single witness” approach to guarantee the standard MIL model. Secondly, the MICA model <cit.>, which is invalid under the standard MIL model because it uses a convex combination of points in the positive bag, and thus does not preclude the possibility of a negative sample. §.§.§ Deep MIL ModelsThe first MIL neural network by <cit.> was later re-invented as the “”[Notably this was a mischaracterization and should have been named “MI-Net” going by the original naming scheme, but the names mi-Net and MI-Net with incorrect designation have stuck, and so we repeat them.] 
model, and directly translates the “witness” strategy <cit.> to a neural network, using weight sharing to process each bag independently, produce a maximal score, and then takes the max over those scores to reach a final decision. This re-invention was done by <cit.> who added “” as a “better” alternative by concatenating the results across bags, allowing a final fully-connected layer to make the prediction by looking at all instances without any constraints. This error allows the MI-Net to learn to use the absence of an instance as a positive indicator, thus violating the MIL assumption. This is true of thelayer by <cit.> (which forms the basis of their Attention MIL), the Graph Neural Network basedof <cit.>, the Transformer based<cit.>, and theMIL model of <cit.>. These latter five deep-MIL models have formed the foundation of many extensions that have the same fundamental designs/prediction mechanisms, with various tweaks to improve training speed or handle large medical images <cit.>. For this reason, we will test these five deep-MIL models as exemplars of the broader deep-MIL ecosystem, and show that all five models fail a simple test. Two additional deep tests are included, which we note as distinct (because they respect MIL but are rarely used) from the preceding five highly popular methods. The mi-Net, which is the older and not widely used model of <cit.> respects the standard MIL assumptions. Second is CausalMIL <cit.>, the only recent line of MIL research of which we are aware that properly considers the standard MIL assumption, producing an enhanced version of the “witness” strategy. It does so by representing the problem as a graphical model to infer per-instance labels. While <cit.> note the causal nature of MIL modeling to inform their design, they did not document that other deep-MIL approaches fail to respect the MIL assumptions. § MIL UNIT TESTS The prior works in deep-MIL research have all cited the seminal <cit.> for the MIL problem without elaborating further on the assumptions of the MIL model. As denoted by <cit.>, there are many different generalizations of the MIL hypothesis to more complex hypothesis spaces, all of which require respecting that it is the presence of some item(s) that induce a positive label. We will focus on Weidmann’s Concept Hierarchy <cit.> that includes <cit.> as the most basic MIL hypothesis space, and test it along with a generalization of the MIL problem. We note that an algorithm passing a test is not a certificate of correctness. Thus, if an algorithm passes the generalized Weidmann MIL tests (specified below), but fails the basic Dietterich test (specified below), it means the model fails all possible MIL models because it has failed the most foundational MIL test. Our code for these tests can be found at <github.com/NeuromorphicComputationResearchProgram/AlgorithmicUnitTestsMIL>.We will now formalize the general MIL problem in a notation that can capture both the standard and Weidmann versions of the MIL problem. We leverage this formalization to make it clear what properties our unit tests are attempting to capture, and to discuss how a non-MIL model learns invalid solutions.For all of the tests we consider, let h(𝐱) be a function that maps a instance vector 𝐱 to one of K concept-classes ∈𝒞 = {∅, 1, 2, …, K} (i.e., h(𝐱) ∈𝒞), which includes the null-class ∅. This null class has the role of identifying “other” items that are unrelated to the positive output decision of the MIL problem. 
The null-class is the fundamental informative prior and useful constraint of the MIL problem space, where any item belonging to ∅ does not contribute to a negative class label prediction. That is to say, only the occurrence of the concept classes c_1, …, c_K can be used to indicate a positive label in a valid MIL model <cit.>. For all k ∈ [1, …, K] where c_k ∈ℤ_≥ 0 let g({c_1, c_2, …, c_K}) be a function that takes in the set of the number of times concept c_k occurred in a bag, and outputs a class label y ∈{-1, 1} for a negative or positive bag respectively.Given a MIL bag X = {𝐱_1, …, 𝐱_n }, let 1[predicate] be the indicator function that returns 1 if and only if the predicate is true. Then we can express the generalized MIL decision hypothesis space by <ref>.g( ⋃_k=1^K {∑_∀𝐱'∈ X1[h(𝐱') = k ] })This generalized form can cover multiple different versions of the MIL problem by changing the constraints on the size of the concept class 𝒞 and the decision function g(·). In the remaining sub-sections, we will use this framework to specify the MIL model being tested, how the test works, and how an invalid MIL-model can “solve” the problem by violating the constraints.This is done by specifying constraints on 𝒞 and g(·) that define the class of MIL models, and a unit test that checks that these constraints are being respected by the algorithm. We will do so by specifying aandfunction that returns bags X that should have negative and positive labels respectively. Each function will have an argument called , as a boolean variable indicating if the bag is meant to be used at training or testing time. This is because we will alter the training and testing distributions in a manner that should be invariant to a valid MIL model, but have a detectable impact on non-MIL models. For this reason, we will refer to data obtained whenas the training distribution andas the testing distribution. In each unit test, our training bags will have a signal that is easy to learn but violates the MIL assumption being tested. There will be a second signal corresponding to the true MIL decision process, that is intentionally (mildly) harder to detect. At test time (i.e., ), the easy-but-incorrect signal will be altered in a way that does not interfere with the true MIL classification rule.If a model receives a training distribution AUC > 0.5, but a testing distribution AUC of < 0.5, then the model is considered to have failed a test. This is because a normally degenerate model should receive an AUC of 0.5, indicating random-guessing performance. To obtain an AUC < 0.5 means the model has learned a function anti-correlated with the target function. If this occurs simultaneously with an AUC of > 0.5, it means the model has learned the invalid non-MIL bait concept, which is designed to be anti-correlated in the testing distribution. To simplify the reading of each algorithmic unit test, we will use ∼𝒩(a, I_d · b) to indicate a vector is sampled from the multivariate normal distribution with d dimensions that has a mean of μ = 1⃗· a and a covariance Σ = I_d · b. In all cases, we use d=16 dimensions, but the test is valid for any dimensionality. In many of our tests, the number of items will be varied, and we denote an integer z sampled from the range [a,b] as z ∼𝒰(a,b) when an integer is randomly sampled from a range. 
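The pass/fail criterion above can be wrapped in a small, model-agnostic harness. The following sketch assumes only a model exposing fit and per-bag scoring methods plus the two bag generators with a train flag; the method names are illustrative and do not correspond to the interface of the released test code:

from sklearn.metrics import roc_auc_score

def run_unit_test(model, make_negative_bag, make_positive_bag,
                  n_train=100_000, n_test=10_000):
    """Train on the training distribution, score both distributions, and
    flag a failure if the model learned the anti-correlated non-MIL rule."""
    def make_split(n, train):
        bags = ([make_negative_bag(train=train) for _ in range(n // 2)] +
                [make_positive_bag(train=train) for _ in range(n // 2)])
        labels = [-1] * (n // 2) + [1] * (n // 2)
        return bags, labels

    train_bags, train_y = make_split(n_train, train=True)
    test_bags, test_y = make_split(n_test, train=False)

    model.fit(train_bags, train_y)
    train_auc = roc_auc_score(train_y, model.score_bags(train_bags))
    test_auc = roc_auc_score(test_y, model.score_bags(test_bags))

    # Failure = the model learned something (train AUC > 0.5), but that
    # something is anti-correlated with the true MIL concept at test time.
    failed = train_auc > 0.5 and test_auc < 0.5
    return failed, train_auc, test_auc

As emphasised above, passing such a harness is not a certificate of correctness, but failing it certifies that a modelling assumption has been violated.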
When this value is not critical to the function of our tests, the sampling range will be noted as a comment in the pseudo-code provided.§.§ Presence MI Assumption and TestWe begin with the simplest form of the MIL decision model as expressed by <cit.>. In this case, the concept class is unitary with K=1, 𝒞 = {∅, 1}, giving the positive classes as the only option, and the non-contributing null-class ∅. The decision function g({c_1}) = c_1 ≥ 1, that is, the label is positive if and only if the positive concept c_1 has occurred at least once within the bag. Given these constraints, we design a simple dataset test to check that an algorithm respects these learning constraints on the solution.We will abuse notation with h(𝒩(0,I_d ·1)) ∅ to indicate that the space of samples from a normal distribution as specified is defined as corresponding to the null-class ∅. This first value will be the general “background class” that is not supposed to indicate anything of importance. [24]r0.45 0.45To make a learnable but not trivial class signal, we will have two positive class indicators that never co-occur in the training data. Half will have h(𝒩(0, I_d · 3))c_1 and the other half will have h(𝒩(1, I_d · 1))c_1. We remind the reader that this is a normal distribution in a d-dimensional space, so it is not challenging to distinguish these two classes from the background class 𝒩(0, I_d · 1), as any one dimension with a value ≥ 3 becomes a strong indicator of the c_1 class. Finally will have a poison class h(𝒩(-10, I_d · 0.1)) ∅ that is easy to distinguish from all other items, and at training time always occurs in the negative classes only. If we let g̃(·) and h̃(·)represent the MIL-violating class-concept and decision function that should not be learned. This creates an easier-to-learn signal, where h̃(𝒩(-10, I_d · 0.1)) ∅ and the remaining spaces h̃(𝒩(0, I_d · 1)) = h̃(𝒩(0, I_d · 3)) =h̃(𝒩(1, I_d · 1))c_1 , with a decision function ofg({∅, c_1}) ∅≤ 0. This g is easier to learn, but violates the MIL assumptions by looking for the absence of an item (the ∅ class) to make a prediction. It is again critical to remind the reader that the MIL learning problem is asymmetric — we can not arbitrarily re-assign the roles of ∅ and c_1, and so g(·) ≠ g(·) because we can not use ∅ in place of c_1.The entire algorithmic unit test is summarized in Alg. <ref>, that we term the Single-Concept Standard-MIL Test. We choose this name because there is a single concept-class c_1 to be learned, and this test checks obedience to the most basic MIL formulation. Because this test is a subset of all other MIL generalizations, any algorithm that fails this test is not respecting the MIL hypothesis. Given an algorithm 𝒜(·):𝒳→ℝ, if trained on Alg. <ref> and tested on the corresponding distribution. 𝒜 fails to respect the MIL hypothesis if the training AUC is above 0.5, and the test AUC is below 0.5. g({∅, c_1}) = c_1 ≥ 1 is the target function. Using ∅_p to represent the background poison signal and ∅_B to represent the indiscriminate background noise. Let ĉ_1 denote the 𝒩(0,I_d · 3) samples andĉ_2 the 𝒩(1,I_d · 1) samples. The training distribution contains negative samples (y=-1) of the form {∅_p=1, ∅_B}, and positive samples (y=1) of the form {∅_B ≥ 1, ĉ_1=1} and {∅_B ≥ 1, ĉ_2=1}. By exhaustive enumeration, only two possible logic rules can distinguish the positive and negative bags. Either the (MIL) rule ĉ_1 ≥ 1 ĉ_2 ≥ 1 ≡ c_1 ≥ 1 (where c_1 ĉ_1 ĉ_2, which is allowed under <cit.>), or the non-MIL rule ∅_p = 0. 
However, a MIL model cannot legally learn to use ∅_p because it occurs only in negative bags. Thus if the training distribution has an AUC >0.5 but test distribution ACU <0.5, it has learned the non-MIL rule and failed the test. §.§ Threshold-based MI Assumption and Tests [20]r0.45 0.45 We now turn to the threshold-based MIL assumption, of which the presence-based assumption is a sub-set. In this case, we now have a variable number of concept classes K, and we have a minimum threshold t_k for the number of times a concept-class c_k is observed. Then ∀ k ∈ [1, K], it must be the case that c_k ≥ t_k for the rule to be positive. More formally, we have 𝒞 = {∅, 1, 2, …, K} and we define the decision function g(·) as: g({c_1, c_2, …, c_k }) = ⋀_k=1^K c_k ≥ t_k where ⋀ is the logical “and” operator indicating that all K predicates must be true. It is easy to see that the Presence-based MIL is a subset by setting t_1 = 1 and t_k = 0, ∀ k > 1. Thus any case that fails Alg. <ref> is not a valid Threshold MIL model, even if it passes the test we devise. We will implement two different tests that check the ability to learn a threshold-MIL model.§.§.§ Poisoned Test For our first test, we use a similar “poison” signal h(𝒩(-10, I_d · 0.1)) ∅ that is easier to classify but would require violating the threshold-MIL decision function in <ref>. This poison occurs perfectly in all negative bags at training time, and switches to positive bags at test time. For the threshold part of the assumption under test, we use a simple K=2 test, giving 𝒞 = {∅, 1, 2}. The two exemplars of the classes will have no overlap this time, given by h(𝒩(2, I_d · 0.1))c_1 and h(𝒩(3, I_d · 0.1))c_2, with one item selected at random occurring in every negative bag, and both items occurring between 1 and 4 times in the positive labels. This tests that the model learns that t_1 = t_2 = 1. Last, generic background instances h(𝒩(0, I_d · 1)) ∅ occur in both the positive and negative bags. The overall procedure is detailed in Alg. <ref>.As with the presence test, the MIL-violating decision function g({∅, c_1, c_2}) = c_∅≤ 0 to indicate a positive label, which is looking for the absence of a class to make a positive label, fundamentally violating the MIL hypothesis. Though this test is fundamentally a similar strategy to the presented unit test, the results are significantly different, as we will show in <ref>. This test will help us highlight the need to produce algorithmic unit tests that capture each property we want to ensure our algorithms maintain. Given an algorithm 𝒜(·):𝒳→ℝ, if trained on Alg. <ref> and tested on the corresponding distribution. 𝒜 fails to respect the threshold MIL hypothesis if the training AUC is above 0.5, and the test AUC is below 0.5. See appendix, structurally similar to proof of <ref>. §.§.§ False-Frequency Reliance Our last test checks for a different kind of failure. Rather than a violation of the MIL hypothesis entirely, we check that the model isn't learning a degenerate solution to the threshold-MIL model.To do so, we will again use K=2 classes as before, so the decision function g(·) does not change with the same t_1 = t_2 = 1 thresholds, with the same positive instances h(𝒩(2, I_d · 0.1))c_1 and h(𝒩(-2, I_d · 0.1))c_2. The negative training bags X will include one or two samples of either c_1 or c_2, not both. The positive training will contain one or two samples of each c_1 and c_2. 
This gives a direct example with no extraneous distractors of the target threshold-MIL model, g({c_1, c_2}) = (c_1 > t_1 ) ∧ (c_2 > t_2). [22]r0.45 0.45 However, it is possible for a model that is not well aligned with the MIL model to learn a degenerate solution h that maps h(𝒩(2, I_d · 0.1))c_1 and h(𝒩(-2, I_d · 0.1))c_1, and thus learns an erroneous g({c_1, c_2})c_1 ≥t_1. While this solution does respect the overall MIL hypothesis, it indicates a failure of the model to recognize two distinct concept classes c_1 and c_2, and thus does not fully satisfy the space of threshold-MIL solutions. Given an algorithm 𝒜(·):𝒳→ℝ, if trained on Alg. <ref> and tested on the corresponding distribution. 𝒜 fails to respect the threshold MIL hypothesis if the training AUC is above 0.5, and the test AUC is below 0.5. See appendix, structurally similar to the proof of <ref>. § RESULTSWe will now review the results of our three unit tests across both deep-MIL models and prior SVM based MIL algorithms. In every deep learning case, we generate 100,000 training bags with 10,000 test bags. Each model was trained for 20 epochs. Each network was trained using the Adam optimizer using three layers of the given deep model type. We found this was sufficient for each model type to nearly perfectly learn the training set, with the exception of the Hopfield network that struggled to learn under all tests even in extended testing with varying layers and model sizes.For the SVM models the (N^3) training complexity limited the training size. MissSVM and MICA were trained on only 200 samples because larger sizes took over a day. All others were trained on 1,000 samples. A test set of 10,000 was still used. For each SVM model, we use a Radial Basis Function (RBF) kernel K(𝐱, 𝐱^')=exp(-γ𝐱-𝐱^'^2), where γ was set to be 0.100 in all tests. This value was found by running each algorithm on a sample of N=50 training bags across each of the training sets, to find a single value of γ from 10^-4 to 10^3 that worked across all SVM models and tests. This was done because the SVM results took hours to run, and obtaining the best possible accuracy is not a goal. The point of our tests is to identify algorithms that appear to learn (high training numbers) but learn the wrong solution (< 0.5 test AUC). For this reason, a simple and fast way to run the algorithms was more important and equally informative. In our experiments, the only models known and designed to conform to the standard Presence MIL assumption are mi-Net, mi-SVM, MI-SVM, and MissSVM. For this reason, we expect these models to pass the first test of Alg. <ref>.We note that none of the models being tested was designed for the Threshold MI assumptions that comprise the second two tests. Still, we will show how the results on the Threshold tests are informative to the nature of the model being investigated. We remind the reader that each unit test can be solved perfectly by a model respecting the appropriate MIL assumptions. §.§ Presence Test Results Our initial results are in <ref>, showing the training and testing accuracy and AUC for each algorithm against the unit test described by Alg. <ref>. All deep-MIL models introduced after mi-Net <cit.> and tested here have failed the test, with the exception of <cit.>. This makes the increased accuracy/improvement[24]r0.45 Results for the standard MIL assumption test Alg. <ref>. 
Any algorithm that fails this test (testing AUC < 0.5) is fundamentally invalid as a MIL algorithm under all circumstances, and should not be used in cases where the MIL assumptions are important. Failing algorithms are shown in italics. max width=0.451c2cTraining 2cTesting(lr)2-3 (lr)4-5 1cAlgorithm 1cAcc. 1cAUC 1cAcc. 1cAUC mi-Net0.9910.998 0.9931.000 MI-Net 1.000 1.0000.000 0.000 MIL-Pooling1.000 1.0000.000 0.000 Tran-MIL 1.000 1.0000.000 0.000 GNN-MIL1.000 1.0000.000 0.000 CausalMIL 0.9990.999 0.9961.000 Hopfield 0.624 0.4950.500 0.488 (l)2-5mi-SVM0.9991.000 0.9351.000 MI-SVM1.0001.000 0.9861.000 SIL 0.9921.000 0.7660.998 NSK1.000 1.0000.000 0.000 STK1.000 1.0000.466 0.000 MICA0.5001.000 0.5001.000 MissSVM 0.9951.000 0.4490.551 on MIL problems of many prior work suspect. This is because any test could be learning to check for the absence of a feature, a violation of the MIL assumption that Alg. <ref> tests, and thus learning the kinds of relationships that are explicitly forbidden by the hypothesis space.The results of the older SVM literature are interesting. As noted by <cit.>, the NSK and STK models are not actually MIL-respecting, and thus fail the test. However, the SIL model was explicitly designed to ignore the MIL assumption, yet still passes this test. The MICA algorithm, while not designed to ignore MIL explicitly is not designed to enforce it either, so it also passes the test. While the MIL respecting MissSVM passes but only marginally. [21]r0.45Results for the Threshold MIL assumption test Alg. <ref>. Any algorithm that fails this test (testing AUC < 0.5) learns the invalid relationship that the absence of an instance indicates a positive label.Failing algorithms are shown in italics. max width=0.452cTraining 2cTesting(lr)2-3 (lr)4-5 1cAlgorithm 1cAcc. 1cAUC 1cAcc. 1cAUC mi-Net 0.735 0.9990.500 0.000 MI-Net0.9910.807 0.0000.827 MIL-Pooling 0.9991.000 1.0001.000 Tran-MIL 0.955 0.9490.500 0.000 GNN-MIL 0.9780.997 0.6240.678 CausalMIL 0.7170.745 0.5000.500 Hopfield 0.624 0.5400.500 0.503 (l)2-5mi-SVM0.5000.857 0.5000.818MI-SVM0.7590.887 0.7270.828SIL 0.5000.861 0.5000.732 NSK 1.0000.889 0.8890.966 STK0.947 0.9910.000 0.000 MICA 0.500 0.9980.500 0.490 MissSVM 0.6400.943 0.4990.763We find these results informative and instructive. They demonstrate that algorithmic unit tests are not certificates of correctness. Rather, failure of these tests is a certificate of an errant algorithm, but may produce false positives. While the design of a more powerful test is beyond the scope of this article, the work presented here provides practical caveats for the use of such tests in future studies. Any future MIL paper can use the tests and provide results to the reader to help boost confidence, but the test should not itself be used as a means of proving the correctness. Of note, CausalMIL is the only recent deep-MIL model we evaluated which is designed to respect the standard MIL assumption, and passes the test accordingly. While CausalMIL was not designed for the threshold MIL, it still passes the next two tests - but with a marginal AUC, near 0.5. This is reasonable since it is testing a scenario beyond CausalMIL's design. Indeed it would be acceptable even if CausalMIL failed the next tests, because they are beyond its scope (which happens to mi-Net). 
The goal is that models are tested to the properties they purport to have.§.§ Threshold Results Our next two unit tests cover two different aspects of the Threshold MIL assumption: 1) that they can learn to require two concepts to denote a positive class, and 2) that they do not degrade to relying on frequency (i.e., perform the desired counting behavior of each class). Any algorithm that passes either of these tests, but fails the Presence test, is still an invalid MIL algorithm by both the Presence and Threshold models because the Presence MIL model is a subset of the Threshold model. The results of our first test of Alg. <ref> on learning two concepts are shown in <ref>, where only the MIL-Pooling model learns the completely correct solution. This test is most valuable in showing how mi-Net, which is a valid Presence MIL model, is not a valid Threshold MIL model, reaching an AUC of 0. One may wonder why the mi-Net performs poorly, while the mi-SVM and MI-SVM pass the test with peculiar results. In the case of the mi-SVM, its label propagation step means that instance ∼𝒩(2,I_d · 0.1) and instance ∼𝒩(3,I_d · 0.1) will receive inferred negative labels (from negative bags), and positive labels (from positive bags). There are proportionally more ∼𝒩(3,I_d · 0.1) samples with positive labels, though, and each positive bag, by having more samples, can select the most-extreme data point (largest positive values in each coordinate) to infer that the positive bags are “more positive” than a negative bag. This results in a non-trivial AUC of 82%. In the mi-SVM case, the 50% accuracy remains because the overlapping and conflicting labels cause the optimization of the slack terms ξ to become degenerate. Because the MI-SVM does not result in conflicted labels by using the “witness” strategy, it instead can respond to the most maximal item in a bag learning to key off of the most right-tail extreme values of ∼𝒩(3,I_d · 0.1) to indicate a positive label, because the positive bags are more likely to have such extreme values by having more samples, and avoiding the conflicting label problem of mi-SVM. By contrast, the mi-Net model fails due to the increased flexibility of the neural network to learn a more complex decision surface, “slicing” the different maximal values to over-fit onto the training data, resulting in degenerate performance. Note that mi-Net's results do not change with the removal of the poisoned item at test time, as otherwise, its accuracy would degrade to zero. The MI-Net instead suffers from this problem, and by using the poison token ironically learns a less over-fit solution, allowing it to obtain a non-trivial AUC. [23]r0.45Results for the Treshold MIL assumption test Alg. <ref>. Any algorithm that fails this test (testing AUC < 0.5) is not able to learn that two concepts are required to make a positive bag.Failing algorithms are shown in italics.max width=0.451c2cTraining 2cTesting(lr)2-3 (lr)4-5 1cAlgorithm 1cAcc. 1cAUC 1cAcc. 1cAUC mi-Net 0.689 0.7440.740 0.496MI-Net0.9570.992 0.5001.000 MIL-Pooling0.997 0.9990.500 0.477Tran-MIL0.9890.998 0.9941.000GNN-MIL0.965 0.9950.475 0.000 CausalMIL 0.6880.752 0.4960.602Hopfield0.6250.493 0.5000.515 (l)2-5mi-SVM 0.500 0.7380.500 0.054 MI-SVM0.7700.875 0.5110.518 SIL0.500 0.7780.500 0.180 NSK1.000 1.0000.500 0.000 STK 1.0001.000 0.9961.000 MICA 0.985 0.9990.482 0.481 MissSVM0.785 0.9350.327 0.093The discussion on why mi-SVM and MI-SVM are able to pass the Alg. <ref> test is similarly instructive as to why they perform worse on the Alg. 
<ref> test as shown in <ref>. This test checks that the models do not learn to “cheat” by responding to the magnitude of the values or the frequency of a specific concept class occurrence. Because the frequency of concept classes changes from train-to-test, {mi, MI}-SVMs learn to over-focus on the magnitude of coordinate features to indicate a positive direction, which inverts at test time. Thus the performance of both methods drops significantly, and the mi-SVM ends up failing the test. We also note that between the two Threshold tests, we see different algorithms pass/fail each test. MIL-Pooling, Tran-MIL and STK, and NSK have dramatic changes in behavior from test to test. By developing unit tests that exercise specific desired properties, we are able to immediately elucidate how these algorithms fail to satisfy the Threshold-MIL assumption. Because Tran-MIL and STK pass <ref> but fail <ref>, we can infer that both Tran-MIL and STK are able to successfully learn the concept that “two concepts are required to occur” property, but are also able to learn to detect the absence of an instance as a positive indicator, and so fail the test. § CONCLUSIONOur article has proposed the development of algorithmic unit tests, which are synthetic training and testing sets that exercise a specific learning criterion/property of an algorithm being tested. By developing three such unit tests for the Multiple Instance Learning problem, we have demonstrated that only one post-2016 deep-MIL algorithm that we tested, CausalMIL, appears to actually qualify as a MIL model. We conclude that this is because the algorithms were designed without verifying that the MIL assumptions were respected. § ACKNOWLEDGEMENTS We would like to thank the reviewers of this work for their valuable feedback that has improved this paper. We note that some formatting changes were attempted but we could not make them a readable font and within the page limits, and so we apologize that the formatting requests could not be fully satisfied. IEEEtranN § PROOFS Proof of <ref>: g({∅, c_1, c_2}) = c_1 ≥ 1c_2 ≥ 1 is the target function. Using ∅_p to represent the background poison signal and ∅_B to represent the indiscriminate background noise, The training distribution contains negative samples (y=-1) of the form {∅_p=1, ∅_B ≥ 1, c_1=1} and {∅_p=1,∅_B ≥ 1, c_2 =1}, and positive samples (y=1) of the form {∅_B ≥ 1, c_1=1, c_2 =1}. By exhaustive enumeration, only two possible logic rules can distinguish the positive and negative bags. Either the (MIL) rule c_1 ≥ 1c_2 ≥ 1, and the non-MIL rule ∅_p = 0. However, a MIL model cannot respect the MIL hypothesis and learn to use ∅_p simultaneously, because ∅_p occurs only in negative bags.By changing the test distribution to evaluate the sample ∅_B=1, c_1=1, c_2=1 and observing the model produce the negative label y=-1, the only possible conclusion is it has learned the non-MIL hypothesis.Proof of <ref>: g({∅, c_1, c_2}) = c_1 ≥ 1c_2 ≥ 1 is the target function. Using ∅_B to represent the indiscriminate background noise, The training distribution contains negative samples (y=-1) of the form{∅_B ∈[1, 10], c_1∈[1,2]} and{∅_B ∈[1, 10], c_2 ∈ [1, 2] },and positive samples (y=1) of the form{∅_B ∈ [1, 10], c_1∈ [1, 2], c_2∈ [1, 2]}. By exhaustive enumeration, only two possible logic rules can distinguish the positive and negative bags: c_1 ≥ 1c_2 ≥ 1. 
However, there is a naive MIL rule that can obtain non-random, but not perfect accuracy, c_1+c_2 ≥ 3.By changing the test distribution to evaluate the samples ∅_B=1, c_1 ≥ 35 and ∅_B=1, c_2 ≥ 35 and observing the model produce the positive label y=1, the only possible conclusion is it has learned the non-threshold MIL hypothesis. | http://arxiv.org/abs/2310.17867v1 | {
"authors": [
"Edward Raff",
"James Holt"
],
"categories": [
"stat.ML",
"cs.AI",
"cs.LG"
],
"primary_category": "stat.ML",
"published": "20231027030511",
"title": "Reproducibility in Multiple Instance Learning: A Case For Algorithmic Unit Tests"
} |
Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA As demonstrated by Planck, SPT, and ACT, the abundance of Sunyaev-Zeldovich-detected galaxy clusters across mass and redshift is a powerful cosmological probe. Upcoming experiments such as the Simons Observatory (SO) will detect over an order of magnitude more objects than what previous experiments have found, thereby providing an unprecedented constraining potential. However, in order for this potential to be realised, the cluster detection and analysis pipelines will have to be built and understood to a much higher level of accuracy than has been demonstrated to date.Here we discuss ongoing efforts towards the accurate modelling of tSZ cluster counts, focusing on the improvements regarding optimisation bias, covariance estimation, and foreground deprojection of <cit.>, which are implemented in the publicly-availablepackage. Next, we briefly discuss the application of these improved cluster detection methods to Planck data. Finally, we introduce , a new cluster number count likelihood code that will be publicly available soon. Towards precision SZ cluster cosmology: from Planck to the Simons ObservatoryÍ. Zubeldia1,2e-mail: [email protected] January 14, 2024 ============================================================================== § INTRODUCTIONThe abundance of galaxy clusters as a function of mass and redshift has long been recognised as a powerful cosmological probe, sensitive to cosmological parameters such as Ω_m, σ_8 and the equation of state of dark energy (see, e.g., <cit.>). The thermal Sunyaev-Zeldovich (tSZ) effect <cit.> offers a unique window into the cluster population, allowing for cluster detection to high redshift. These tSZ-selected catalogues can be, in turn, used for cosmological inference, as demonstrated by Planck, SPT, and ACT (see, e.g., <cit.>). Upcoming mm experiments such as the Simons Observatory and CMB-S4 are set to revolutionise cluster cosmology, with them expected to find about 20 000 and 10^5 clusters, respectively <cit.>. These numbers will come with an unprecedented constraining potential. However, in order for this potential to be realised, the analysis pipeline, from cluster detection to cosmological parameter inference, will have to be constructed and understood to a much higher level of accuracy than what has been demonstrated in previous analyses.In this contribution, we outline several several analysis improvements made towards the maximal exploitation of upcoming tSZ cluster cosmology data. In particular, in Section <ref> we describe two significant improvements to the multi-frequency matched filter (MMF) cluster detection method, namely iterative noise covariance estimation and foreground spectral deprojection, which have been implemented in thepackage. Next, in Section <ref> we briefly discuss the application of these enhanced cluster detections methods to Planck data, in what is a currently ongoing project. Finally, in Section <ref> we introduce , a soon-to-be-released cluster-number-count theory package designed for fast likelihood computation.§ : A NEW MMF CLUSTER FINDER WITH ITERATIVE NOISE COVARIANCE ESTIMATION AND FOREGROUND DEPROJECTIONMulti-frequency matched filters (MMFs) <cit.> have become the standard tool with which clusters are detected at mm wavelengths. 
They rely on our knowledge of the cluster tSZ signal, both spectrally, as given by the tSZ spectral energy distribution (SED), and spatially, as given by the cluster pressure profile. Here we introduce , the Sunyaev-Zeldovich iterative Finder, a new implementation of the MMF cluster finding method that has been developed in order to study systematics in cluster detection and, ultimately, to apply it to upcoming mm data.is fully written in Python and publicly available[]. It incorporates a number of novel features, most notably iterative noise covariance estimation and foreground spectral deprojection, which we describe next. §.§ Iterative noise covariance estimation The motivation for iterative noise covariance estimation is the following. In order for it to be applied, a MMF needs as input the covariance matrix of the noise in the data, where by noise we mean all of the components in the data other than the one that is being targeted, i.e., the cluster tSZ signal. The noise covariance is typically estimated from the data and taken to be equal to the covariance of the data (e.g., <cit.>). However, as we discuss in detail in <cit.>, doing this creates two problems. First, the noise covariance is overestimated, which leads to a loss of signal-to-noise. Second, as the covariance is estimated from the data, the response of the MMF to the data becomes nonlinear. As we show in <cit.>, if the tSZ signal is present in the data-estimated covariance, a bias is induced in the cluster observables (cluster Compton-y estimate and signal-to-noise). This bias, which is analogous to the ILC bias encountered in map-based component separation (see, e.g., <cit.>), can be as large as 0.5 σ per cluster, and therefore potentially significant in a cosmological analysis.As demonstrated in <cit.>, iterative noise covariance estimation is a highly effective solution for these two problems. In this approach, the noise covariance is first estimated by taking it to be equal to the data covariance, which leads to a first, non-iterative cluster catalogue. Significant detections are then masked from the data and the noise covariance is re-estimated and used in a second run of the MMF algorithm, which delivers an updated, iterative cluster catalogue. If the masking signal-to-noise threshold is low enough, one iteration suffices to completely remove the ILC-like bias and to boost the signal-to-noise of the detections to the expected level (see Fig. <ref>). §.§ Foreground spectral deprojection As we argue in <cit.>, any signal that is spatially correlated with the tSZ field will induce a bias in the MMF cluster observables (cluster Compton-y estimate and signal-to-noise). In particular, simulations suggest that the Cosmic Infrared Background (CIB) can cause significant biases <cit.>. In order to address this problem, we have developed a spectrally constrained MMF that is able to completely null, or `deproject', the contribution from one or several foregrounds with given SEDs at the expense of a certain signal-to-noise penalty.In the particular case of the CIB, its SED is not perfectly well constrained and, even if it were, it is not fixed but varies as a function of redshift. Due to this, deprojecting some fiducial CIB SED may not be enough to effectively remove the CIB-induced bias from the cluster observables. 
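As a rough linear-algebra illustration of the spectral constraint just described (a schematic, single-covariance version in the spirit of a constrained ILC, not the actual per-mode sziFi matched filter), the channel weights can be required to have unit response to the tSZ SED and zero response to each deprojected SED, at the cost of a larger variance and hence a signal-to-noise penalty:

import numpy as np

def constrained_weights(cov, seds):
    """Schematic spectrally constrained weights for n_freq channels.

    cov:  (n_freq, n_freq) noise covariance (a single matrix here; a real
          matched filter applies this per harmonic/Fourier mode).
    seds: (n_comp, n_freq) array whose first row is the tSZ SED and whose
          remaining rows are the SEDs (or SED moments) to be deprojected.

    Returns weights w with seds @ w = e, where e = (1, 0, ..., 0): unit
    response to tSZ, zero response to every deprojected component, while
    minimising the variance w^T cov w.
    """
    cov_inv = np.linalg.inv(cov)
    A = seds.T                      # (n_freq, n_comp) mixing matrix
    e = np.zeros(seds.shape[0])
    e[0] = 1.0                      # unit response to the tSZ component
    M = A.T @ cov_inv @ A           # (n_comp, n_comp)
    w = cov_inv @ A @ np.linalg.solve(M, e)
    return w                        # per-channel weights

With seds containing only a single fiducial CIB SED, this is exactly the situation described above in which residual CIB-induced bias can remain.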
Following the moment expansion approach of <cit.>, however, one can also deproject the first-order moments of the CIB SED with respect to some of the parameters describing it (e.g., the spectral emissivity index β or the inverse dust temperature at z=0, β_T). As we show in <cit.> for simulated data, our spectrally constrained MMF together with the moment expansion approach can be highly effective at suppressing the CIB-induced bias in the cluster observables (see Fig. <ref>). §.§ Optimisation bias The signal-to-noise is the preferred cluster observable in tSZ cluster analyses, and indeed is the observable through which the cluster sample is typically selected, with cluster candidates being identified as the peaks in the MMF signal-to-noise maps. This means that the signal-to-noise for each cluster is obtained by maximising the MMF signal-to-noise over a number of parameters, typically three: two sky coordinates and the cluster angular size. In modelling these `optimal' signal-to-noise measurements, the optimisation procedure must be taken into account. As first proposed in <cit.> and extensively studied in <cit.>, this can be done through a very simple analytical prescription accounting for the number of degrees of freedom over which the signal-to-noise is maximised. As we show in <cit.>, this prescription works to a high level of accuracy for the cluster catalogues delivered by .§ APPLICATION TO PLANCK DATAAs a demonstration of our improved cluster detections methods, we are currently applying them, as implemented in , to data from the Planck satellite, with the goals of (1) producing new cluster catalogues and (2) obtaining cosmological constraints from them. The catalogues and derived constraints will be published in the coming months.§ : A FAST AND FLEXIBLE CLUSTER NUMBER COUNT LIKELIHOOD PACKAGEWe have developed a new cluster number count likelihood code, , which will be made public soon along with an accompanying paper (Zubeldia & Bolliet, in prep.). Written in Python,is fast and very flexible, having been designed with the hope that it can be used to in order to perform a cosmological analysis with any cluster sample with little modification. Its main features are the following: * It supports three types of likelihoods: an unbinned likelihood, a binned likelihood, and an extreme value likelihood.* It also supports stacked data (e.g., stacked lensing profiles), which is modelled in a consistent way with the cluster catalogue.* It links the cluster mass observables (e.g., tSZ signal-to-noise, lensing mass estimate, or X-ray luminosity) to the cluster mass and redshift through a hierarchical model with an arbitrary number of layers, allowing for correlated scatter between the different mass observables. In each layer, the mass–observable scaling relations and the scatter covariance matrix can be defined in a custom way and can depend on sky location.* It incorporates several widely-used halo mass functions.* The unbinned likelihood supports an arbitrary number of cluster mass observables for each cluster in the sample, and it allows for the set of mass observables to vary from cluster to cluster. 
It also allows for redshift measurement uncertainties.* It allows for the presence of non-validated (i.e., potentially false) detections in the catalogue, modelling them in a consistent way.* It also allows for the generation of synthetic cluster catalogues for a given observational set-up, which can be used for, e.g., accuracy tests.* It can parallelise several of its computations using Python'smodule, boosting its performance.* It is interfaced with the Markov chain Monte Carlo (MCMC) code<cit.>, allowing for easy-to-run MCMC parameter estimation.has been benchmarked against [], which also allows for cluster number count likelihood calculation, although in more restrictive scenarios, finding excellent agreement between the two codes. Allen Allen S. W., Evrard A. E., Mantz A. B., 2011, ARA&A, 49, 409Sunyaev Sunyaev R. A., Zeldovich Y. B., 1972, Comments on Astrophysics and Space Physics, 4, 173Planck Planck 2015 Results XXIV 2016, A&A, 594, A24Bleem Bleem L. E., Stalder B., de Haan T., et al., 2015, ApJS, 216, 27Hilton Hilton M., et al., 2021, ApJS, 253, 3Bocquet Bocquet S., et al., 2019, ApJ, 878, 55Zubeldia2019 ZubeldiaI., Challinor A., 2019, MNRAS, 489, 401Simons Simons Observatory Collaboration 2019, J. Cosmology Astropart. Phys., 2019, 056S4 Abazajian K. N., et al., 2016, preprint,(arXiv:1610.02743)Melin Melin J. B., Bartlett J. G., Delabrouille J., 2006, A&A, 459, 341Zubeldia2022a ZubeldiaI., Rotti A., Chluba J., Battye R., MNRAS,522, 3, pp.4766-4780Coulton Coulton W. et al., arXiv:2307.01258Zubeldia2022b ZubeldiaI.,./ Chluba J., Battye R., MNRAS, 522, 4, pp.5123-5141 Vanderlinde Vanderlinde K., Crawford T. M., de Haan T., Dudley J. P., Shaw L., et al., 2010, ApJ, 722, 1180 Zubeldia2021 ZubeldiaI., Rotti A., Chluba J., Battye R., 2021, MNRAS, 507, 4852 Chluba Chluba J., Hill J. C., Abitbol M. H., 2017, MNRAS, 472, 1195Torrado Torrado J., Lewis A., JCAP, 2021, 05, id.057, 29 pp. | http://arxiv.org/abs/2310.18082v1 | {
"authors": [
"Íñigo Zubeldia"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20231027120602",
"title": "Towards precision SZ cluster cosmology: from Planck to the Simons Observatory"
} |
FormalGeo: Human-like IMO-level Geometric Automated Reasoning X. Zhang et al. School of Computer Engineering and Science, Shanghai University, Shanghai, China Institute of Artificial Intelligence, Shanghai University, Shanghai, China College of Sciences, Shanghai University, Shanghai, ChinaFormalGeo: The First Step Toward Human-like IMO-level Geometric Automated Reasoning Xiaokai Zhang1 Na Zhu1,2 Yiming He1,2 Jia Zou1,2 Qike Huang1,2 Xiaoxiao Jin1,2 Yanjun Guo1,2 Chenyang Mao1,2 Yang Li1 Zhe Zhu1,2 Dengfeng Yue1,2 Fangzhen Zhu1 Yifan Wang1,2 Yiwen Huang1 Runan Wang1,2 Cheng Qin1,2 Zhenbing Zeng3 Shaorong Xie1 Xiangfeng Luo1 Tuo Leng1,2,*January 14, 2024 ========================================================================================================================================================================================================================================================================================*Corresponding author: Email - [email protected] This is the first paper in a series of work we have accomplished over the past three years. In this paper, we have constructed a consistent plane geometry formal system. This will serve as a crucial bridge between IMO-level plane geometry challenges and readable AI automated reasoning. Within this formal framework, we have been able to seamlessly integrate modern AI models with our formal system. AI is now capable of providing deductive reasoning solutions to IMO-level plane geometry problems, just like handling other natural languages, and these proofs are readable, traceable, and verifiable. We propose the geometry formalization theory (GFT) to guide the development of the geometry formal system.Based on the GFT, we have established the FormalGeo, which consists of 88 geometric predicates and 196 theorems. It can represent, validate, and solve IMO-level geometry problems. we also have crafted the FGPS (formal geometry problem solver) in Python. It serves as both an interactive assistant for verifying problem-solving processes and an automated problem solver. We've annotated the formalgeo7k and formalgeo-imo datasets. The former contains 6,981 (expand to 133,818 through data augmentation) geometry problems, while the latter includes 18 (expand to 2,627 and continuously increasing) IMO-level challenging geometry problems. All annotated problems include detailed formal language descriptions and solutions. AI can be integrated into formal systems in various roles. It can act as a parser, enabling the autoformalization of natural language and geometric diagrams. It can also serve as a solver, running search tree pruning. Implementation of the formal system and experiments validate the correctness and utility of the GFT. The backward depth-first search method only yields 2.42% problem-solving failure rate on formalgeo7k. We can incorporate deep learning techniques to achieve lower one. The source code of FGPS and datasets are available https://github.com/BitSecret/FGPShere.§ INTRODUCTION Since the inception of mathematics, it has inherently encompassed both structure and computation. These two facets not only interact with each other but also mutually reinforce each other. With the advent of modern computing, it has started to exert its influence on mathematics in two distinct ways. On one hand, it serves as a mathematical tool, empowering mathematical computations more than ever before, thereby directly impacting the balance of values and methodologies within mathematics. 
On the other hand, as a mathematical medium, it indirectly reshapes the content and structure of mathematics through innovative applications, ushering in an unprecedented era of prosperity and development for the field.Within this transformative landscape, the field of mathematical mechanization emerged, situated at the intersection of mathematics and computer science. The translation and conversion of mathematical knowledge into a language comprehensible to computers are evidently the first and indispensable steps in the fusion of mathematics and computer science. This is the essence of formal mathematics.Formal mathematics serves as the foundation for computer-aided mathematical problem-solving. It employs a symbol system that adheres to a particular artificial grammar, enabling the representation of any concept, proposition, and inference. It is only when mathematical knowledge is rigorously formalized that the problem-solving process in mathematics can be described as a deterministic algorithm and implemented within a computer. Over the course of several decades, numerous formal mathematics systems and tools have emerged, such as Lean, Isabelle and Coq.The development of artificial intelligence (AI) has introduced a new paradigm for computer-aided mathematical problem-solving. AI-assisted mathematical problem solving is a rapidly developing area which aims to apply deep learning technology to math problem. AI systems can assume various roles within formal mathematical systems. They can serve as mathematical problem solvers <cit.>, automatically generating problem solutions. They can act as mathematical problem parsers <cit.>, assisting humans in converting mathematical knowledge into formal descriptions. Moreover, they can even take on the role of mathematical problem proposers <cit.>, suggesting mathematical conjectures.A consistent mathematical formal system is an indispensable component of AI-assisted mathematical problem-solving. On the one hand, a unified format for mathematical problem descriptions and datasets helps in avoiding interference from other factors and serves as a standard for evaluating the capabilities of AI models. On the other hand, the answers generated by AI models must be verified for credibility, and manual verification is both inefficient and error-prone. This necessitates the capability for computers to automatically validate the answers. The Stanford 2021 AI100 report <cit.> designates the IMO grand challenge <cit.> as a landmark event in the development of artificial intelligence: To create an AI system capable of receiving a formal problem representation, generating a formal (i.e., machine-checkable) proof for the problem, and attaining a gold medal in the International Mathematical Olympiad. AIMO Prize <cit.> was established to motivate the development of AI models that are capable of winning a gold medal in the IMO. Serving as a bridge that connects the two crucial research domains of AI and mathematics, formal mathematics has garnered increasing attention from researchers.The intersection and fusion of AI and formal mathematics have yielded a series of achievements. However, as an essential branch of formal mathematics, the formalization and mechanized solving of geometric problems have still made slow progress. As depicted in Fig. 
<ref>, this domain has long been constrained by three major challenges: inconsistent knowledge form, unreadable solving process and non-mechanized solving method.Addressing the first challenge, we introduced the geometry formalization theory (GFT) to unify the representation of geometric knowledge, including geometry ontology and geometry representation theory. geometry ontology provides a comprehensive overview of plane geometry from a highly abstract philosophical perspective, creating the geometry ontology domain. The geometry ontology domain consists of four quadrants based on the dimensions of number-shape and dynamic-static. Each quadrant maps the relationships between modern geometric axiom systems, geometry formal systems, and solvers across three hierarchical levels. Leveraging the geometry ontology domain, we can ensure the comprehensiveness of formal system design. Geometry representation theory investigates how to represent static geometric knowledge and dynamic problem-solving processes. Based on topological mapping method and topological construction method, we can transform geometric diagrams into textual or symbolic descriptions. Using geometry predicate logic, we can unify relation reasoning, logical operations, and algebraic calculations into rigorous formal representations.Addressing the second challenge, we designed a set of geometry formal languages to serve as a bridge for communication between humans and computers. geometry formal languages consist of geometry definition language and condition declaration language, where the former is used to personalize the configuration of solvers and import into the formal system, and the latter is used for inputting problem descriptions. Geometry formal languages possess rigorous syntactic descriptions that can be mechanically processed by computers, while their syntax is similar to predicate logic, providing good readability. We abstract the process of geometry problem-solving as a hyper tree (as shown in Fig. <ref>), where tree nodes represent known conditions, tree edges represent geometric theorems, and the problem-solving process is a path from the root node to the target leaf node. We can also utilize modern AI techniques to automatically translate between natural language and formal language, further enhancing readability.Addressing the third challenge, we transformed the mathematical problems of geometric problem-solving into computational search problems in the field of computer science, achieving both forward search and backward search. Forward search starts with known conditions and continuously applies theorems to obtain new conditions until the target condition is achieved. Backward search starts from the problem goal, expands the goal into multiple subgoals according to theorems, and repeats this process until all new subgoals are known conditions. The vast search space gives rise to the problem of combinatorial explosion, with the search time for difficult problems exhibiting exponential growth. Pruning of the search tree is essential. In addition to classical pruning techniques such as Monte Carlo Tree Search and Alpha-Beta pruning, we leverage deep learning as a powerful pruning method to accelerate the solving process.Our contributions can be summarized as follows:1.We propose the GFT, which comprises geometry ontology and geometry representation theory. 
This theory provides a comprehensive framework for the field of plane geometry, unifying the representation of symbolic and graphical geometric knowledge. It encompasses relation reasoning, logical operations, and algebraic calculations within a unified representation framework.
2. We have established FormalGeo based on the GFT, which consists of 88 geometric predicates and 196 theorems. It can represent, validate, and solve geometry problems from SAT-level to IMO-level.
3. We have crafted the formal geometry problem solver (FGPS) in Python. It serves as both an interactive assistant for verifying problem-solving processes and an automated problem solver, supporting methods such as forward search, backward search and AI-assisted search, and strategies such as depth-first search, breadth-first search, random search and beam search. FGPS incorporates features such as formal statement parsing, condition validity checks, automatic diagram construction, and condition expansion. It is capable of executing backtrackable, interpretable algebraic equation solving and relational reasoning.
4. We have annotated the formalgeo7k and formalgeo-imo datasets. The former contains 6,981 geometry problems (expandable to 133,818), while the latter includes 18 (expandable to 2,627 and continuously increasing) IMO-level challenging geometry problems. Each problem comprises a complete natural language description, a geometric diagram, formal language annotations, and theorem sequence annotations.
5. We conducted experiments on formalgeo7k, comparing two search-based problem-solving methods (forward and backward) and four search strategies (breadth-first, depth-first, random, beam) in terms of success rate, time consumption, and search steps. The forward random search method yields a 39.7% problem-solving success rate, and deep learning techniques can be incorporated to achieve a higher one.

§ RELATED WORK
Gelernter et al. developed the pioneering automated geometry problem-solving system known as the Geometry Theorem Prover <cit.>, which employed a backward search approach to solve pre-formalized problems. Nevins pointed out that the forward chaining method <cit.> can also be effective by efficiently representing the known conditions of the problem and limiting the typical application of those conditions. The development of geometry problem solving has led to the emergence of various downstream tasks, including geometry problem formalization <cit.>, geometric knowledge extraction <cit.>, geometric diagram parsing <cit.>, geometric theorem proving <cit.>, and geometry problem solving <cit.>.
Wu Wen-Tsun proposed Wu's Method <cit.>, which transforms a geometry problem into a system of algebraic equations consisting of polynomials and inequalities and leverages various algebraic techniques to solve these equations. The study of algebraic approaches to geometry problems has given rise to a range of research achievements, such as Buchberger's Gröbner bases method <cit.>, numerical parallel methods <cit.>, the polynomial system triangulation elimination algorithm <cit.>, cylindrical algebraic decomposition for solving inequalities <cit.>, dimensionality reduction methods <cit.>, and software tools like GEOTHER <cit.>.
Zhang proposed the point elimination method based on geometric invariants <cit.>.
This approach employs constructive methods to describe problems and is capable of generating concise, meaningful and readable proofs for a large number of non-trivial geometric problems. Subsequently, research on machine proofs of geometric theorems based on geometric invariants advanced rapidly <cit.>, leading to practical software tools such as Geometry Explorer <cit.>, Geometry Expert <cit.> and Java Geometry Expert <cit.>. The method based on geometric invariants can also be extended to solid geometry <cit.> and non-Euclidean geometry <cit.>.
Machine proofs of geometric theorems can generally be categorized into the three aforementioned approaches <cit.>: search-based synthesis methods, algebraic methods, and point elimination methods based on geometric invariants. Synthesis methods can provide proofs in the traditional style but can only prove a small subset of carefully chosen plane geometry theorems, owing to the limited computational power of computers and the combinatorial explosion inherent in these methods. Algebraic methods can handle a wide range of problem types, but the solving process typically involves algebraic expressions that are difficult to verify manually and do not yield readable proof procedures. Methods based on geometric invariants can provide readable proof procedures, but the types of problems that can be solved are limited by the available geometric invariants.
Geometry problem solving has recently been gaining more attention in the NLP community. Several geometry formal systems and datasets have been constructed, such as Geometry3K <cit.>, GeoQA <cit.> and GeometryQA <cit.>. Geometry3K translates the known conditions of geometric problems into formal statements, defining theorems as a set of rules for converting between formal statements. This approach, referred to as formal language, is also used in GeoRE <cit.>, which focuses on geometric relation extraction, and PGDP5k <cit.>, which is designed for geometric diagram parsing. While these methods are intuitive, they lack theoretical guidance, are not comprehensive, and are not easily extensible with additional predicates and theorems. GeoQA employs a program-based formal method, transforming the geometric problem-solving process into a sequence of programs consisting of variables and operators; executing this program sequence yields the solution. Subsequent work extended the number and types of questions and rules, resulting in GeoQA+ <cit.>, UniGeo <cit.>, and PGPS9K <cit.>. These formal methods can represent algebraic and symbolic problem-solving processes, but compared with formal language methods they are less intuitive and cannot represent traditional geometric problem-solving processes. Additionally, adding new rules requires modifying the solver's code, making them less extensible. GeometryQA employs a formal method known as the expression tree, which transforms the problem-solving process into a solving tree composed of operators and variables. This method is similar to the programmatic approach but is more structured.
Shared benchmarks and datasets have significantly advanced research in AI-assisted geometric problem solving. Several AI systems, such as the CL-based model <cit.>, SCA <cit.> and GeoDRL <cit.>, have been constructed to achieve higher success rates in problem solving. As problem-solving success rates continue to improve, there is a growing demand for datasets of higher quality and difficulty.
Previous work has focused on AI system research but overlooked geometry formalization theory. Expanding the datasets of existing systems requires substantial modifications to solver code, making it challenging to extend both the formal systems and the datasets.

§ GEOMETRY FORMALIZATION THEORY
In the realm of geometry, a typical problem consists of known conditions described in natural language, the problem objective, and a geometric diagram. The problem-solving process can be construed as the application of multiple theorems, involving relational reasoning and algebraic computation. The study of GFT focuses on how to transform geometric problems and their solution processes, described in natural language and images, into a unified and precise formal language for mechanical processing by computers. GFT comprises three major components: geometry ontology, geometry representation theory, and geometry formal language.
Geometry ontology studies the fundamental ontology within the field of geometry and the relationships between these ontological elements. It employs the geometry ontology domain, which provides a comprehensive and systematic summary of the knowledge of Euclidean plane geometry. The geometry ontology domain is further refined into the geometry knowledge graph, guiding the design of geometry formal systems.
Geometry representation theory investigates how to express geometric knowledge using formal language. Formal systems are abstract descriptions and simulations of the real world, with a one-to-one correspondence to it. Consistency theory explores how to establish a correct formal system; under its guidance, we propose geometry representation theory. When transforming various geometric knowledge into a unified formal language, we must ensure the consistency of static representations (such as the formal representation of diagrams) and the consistency of dynamic processes (such as theorems described in formal language).
Geometry formal language takes the form of structured geometric knowledge and is divided into two categories: geometry definition language (GDL) and condition declaration language (CDL). GDL includes the predicate definition language and the theorem definition language; the former defines various types of geometric relations and properties, while the latter defines the theorems that may be used in the problem-solving process. During the initialization phase of a geometric problem solver, GDL is used to configure the solver, achieving shareability and extensibility. CDL is employed to describe the known conditions of individual geometric problems and is divided into three categories: construction statements, condition statements, and goal statements. CDL allows us to input geometric problems into the solver.

§.§ Geometry Ontology
The geometry ontology domain consists of two dimensions: number versus shape, and static versus dynamic. Number refers to precise, quantitative descriptions of geometric knowledge, while shape refers to generalized, qualitative descriptions. Static refers to the various elements of geometric knowledge, such as the properties of geometric figures and their interrelationships. Dynamic encompasses rules for transforming different types of geometric knowledge, including common knowledge and theorems.
These two dimensions divide the geometry ontology domain into four quadrants, each further subdivided into three levels of mapping relationships: axiomatic systems, formal systems, and solvers, as shown in Fig.
<ref>. The outermost layer, axiomatic systems, refers to geometric knowledge described in natural language or diagrams, such as problem conditions and theorem definitions. The intermediate layer, formal systems, transforms vague and uncertain natural language descriptions and knowledge in different forms from text and diagrams into a precise, human-readable, and computable formal language, serving as a bridge between humans and computers. The innermost layer, solvers, represents the specific internal form of geometric knowledge within a computer, including data structures for problem conditions and methods for applying theorems. App. <ref> provides examples of the various components of the geometry ontology domain to facilitate understanding.
By further expanding the geometry ontology domain, we construct the geometry knowledge graph, as shown in Fig. <ref>. Construction statements contain all the structural information of geometric figures. Starting from three types of construction statements, we derive all basic entities. By imposing further constraints on basic entities, we obtain general entities. Basic entities and general entities interact internally and with each other, forming entity relationships. Attributes represent quantifiable descriptions of geometric objects. Construction statements, basic entities, general entities, entity relationships, and attributes describe the static aspects of geometric knowledge, while properties, definitions, and judgments describe the dynamic processes of transforming geometric knowledge, i.e., theorems. The geometry knowledge graph provides a detailed representation of the relationships and hierarchical structure among the various components of geometric knowledge; using it ensures that the constructed formal system is comprehensive and avoids omissions.

§.§ Geometry Representation Theory
App. <ref> introduces the consistency theory of formal systems. Guided by this theory, when constructing a geometry formal system we must ensure the consistency of static representations and the consistency of dynamic processes.

§.§.§ Consistency in Static Representation
In the field of geometry, various geometric knowledge is conveyed through textual descriptions or geometric diagrams. Geometric knowledge in textual form is presented using natural language and mathematical symbols, making it relatively straightforward to transform into a structured formal language. The challenge lies in formalizing the geometric knowledge implicit in geometric diagrams, which necessitates establishing a reversible mapping between geometric diagrams and their formal representations.
We classify the information inherent in geometric diagrams into two categories: topological structure information (TSI) and metric information (MI). TSI defines the fundamental structure of a diagram and serves as a crucial basis for classifying different geometric shapes. MI further characterizes the properties of the diagram, such as the lengths of lines and the measures of angles. MI is closely related to TSI and is relatively amenable to summarization and formalization; the primary challenge lies in formalizing the TSI.
The most fundamental elements composing a geometric diagram are points, which can be used to describe the TSI of the diagram. We can define a set of rules that use the points constituting the diagram to depict its TSI. Closed geometric figures can be transformed into topologically equivalent circles, as shown in Fig. <ref>.
Unfolding the topologically equivalent circle into a straight line from any position, the relative positions of the points on the line record the topological structure information of the original diagram. We can therefore use an ordered list of points as the formal representation of the geometric topological structure information. Non-closed geometric figures can first be transformed into closed geometric diagrams before formalization, as shown in Example 4 of Fig. <ref>. By unfolding the topologically equivalent circle from different positions, we may obtain different ordered lists of points. All these ordered lists together form a set, which serves as the formal representation of the topological structure information of the geometric diagram. Any element of the set contains all the topological structure information of the original diagram.
The basic transformations of a geometric diagram can be defined as operations on its formal representation, as shown in App. <ref>.
The above method is called the topological mapping method and can be used to formalize a simple geometric diagram. In practice, the diagram of a geometric problem is often a combination of simple geometric diagrams. We propose the topological construction method, which uses the formal representations of multiple simple geometric diagrams to obtain the formal representation of a composite diagram.
The topological construction method decomposes the composite diagram into simple geometric diagrams while preserving the TSI between them, the TSI of each simple diagram, and the MI of each simple diagram. When constructing a diagram, it first reconstructs the basic structure of the original diagram based on the TSI and then adjusts the diagram according to the MI to obtain the original diagram. The topological construction method is a constructive drawing method that is not affected by the order of construction statements, as shown in Fig. <ref>. We have implemented the topological construction method in FGPS; the algorithm description and time complexity analysis can be found in App. <ref>.
The process of constructing a geometric diagram is denoted by ⊕. Suppose diagram C is composed of diagrams A and B, with the formal representations of their TSI denoted by the sets R_a, R_b, and R_c, respectively. Diagram A contains m points, diagram B contains n points, and A and B share k (k ≥ 2) common points. Two elements P_a = (p^(a)_1, …, p^(a)_s_1, p_i, …, p_i+k, p^(a)_s_2, …, p^(a)_m) and P_b = (p^(b)_1, …, p^(b)_s_1, p_i+k, …, p_i, p^(b)_s_2, …, p^(b)_n) are selected from R_a and R_b. ⊕ is defined in two steps, as shown in Eq. <ref> and Eq. <ref>. In the first step, the ⊗ operation is applied to P_a and P_b to obtain the element P_c. In the second step, P_c undergoes the rotate operation (Eq. <ref>) i times to construct the set R_c.
P_a ⊗ P_b = (p^(a)_1, …, p^(a)_s_1, p_i, p^(b)_s_2, …, p^(b)_n, p^(b)_1, …, p^(b)_s_1, p_i+k, p^(a)_s_2, …, p^(a)_m)
R_a ⊕ R_b = {rotate^(i)(P_a ⊗ P_b) | i = 1, 2, …, m+n-2k+2} = R_c
The formal representation of composite diagrams, together with ⊕, forms a commutative semigroup, satisfying closure, the commutative law and the associative law. The proof can be found in App. <ref>.

§.§.§ Consistency in Dynamic Process
In essence, geometric theorems are rules governing the transformation of various geometric knowledge, involving relation reasoning, logical operations, and algebraic calculations.
Geometric predicate logic (GPL) translates the diverse operations inherent in geometric theorems into a unified formal language, enabling precise theorem descriptions that can be executed mechanically by computers.
Geometric knowledge comprises geometric relations and quantitative relations, with geometric relations at the core and quantitative relations serving as attributes of geometric relations. Multiple geometric relations can lead to new relations through relation reasoning, where the elements of the new relations are derived through operations on the elements of existing relations.
GPL categorizes operations in geometry into external relation reasoning and internal relation reasoning, as illustrated in Fig. <ref>. If an operation results in a new relation with a different structure, it is termed external relation reasoning, also known as relation composition. Confining a relation with constraints to obtain a stricter relation while preserving the original relation's structure constitutes internal relation reasoning; depending on the type of constraint applied, this can be further categorized into geometric constraints and algebraic constraints on relations. Relation reasoning is built from the basic operations &, | and ∼, which correspond to the logical operations AND, OR and NOT, respectively.
Geometric relations are denoted as R(v_1, v_2, …, v_n), v ∈ V, where R is the relation name, specifying the type of geometric relation, and V is the set of point variables, describing the TSI of the geometric relation. A geometric relation with N elements is represented as a set R(v_1, v_2, …, v_n) = {(p^(i)_1, p^(i)_2, …, p^(i)_n) | i = 1, 2, …, N}, where the p are the points constituting the geometric relation. For any element r_i = (p^(i)_1, p^(i)_2, …, p^(i)_n) of a geometric relation, we define r_i(v_j) as the value of element r_i at position v_j, i.e., r_i(v_j) = p^(i)_j. Quantitative relations are represented by algebraic constraints denoted as R_A, where R_A(r) = 1 indicates that an element satisfies the constraint. Given relations R_1(v^(1)_1, v^(1)_2, …, v^(1)_n) and R_2(v^(2)_1, v^(2)_2, …, v^(2)_m) with k common variables and point variable sets V_1 and V_2, we can obtain a new relation R_3 through GPL.
& is referred to as the constrained Cartesian product and is analogous to the logical operation AND. The operation is denoted as R_1 & R_2 → R_3. First, the Cartesian product is applied to R_1 and R_2, yielding R^'_3. Constraints are then imposed on R^'_3 to select the elements that adhere to them; these constraints require that the elements of R^'_3 have identical values at the common variable positions of R_1 and R_2, as shown in Eq. <ref>, where × denotes the Cartesian product. Finally, duplicate common variables are eliminated, yielding a new geometric relation R_3, as shown in Eq. <ref>.
R^'_3(v^(1)_1, v^(1)_2, …, v^(1)_n, v^(2)_1, v^(2)_2, …, v^(2)_m) = {(r^1_i, r^2_j) | (r^1_i, r^2_j) ∈ R_1 × R_2, r^1_i(v) = r^2_j(v), v ∈ V_1 ∩ V_2}
R_1(v^(1)_1, v^(1)_2, …, v^(1)_n) & R_2(v^(2)_1, v^(2)_2, …, v^(2)_m) = {(r(v^(3)_1), r(v^(3)_2), …, r(v^(3)_m+n-k)) | r ∈ R^'_3, v^(3) ∈ V_1 ∪ V_2} = R_3(v^(3)_1, v^(3)_2, …, v^(3)_m+n-k)
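To make the & operation concrete, here is a minimal Python sketch of the constrained Cartesian product; the data layout and the function name are our own illustrative assumptions, not the FGPS implementation. Each relation is stored as a list of point variables plus a set of element tuples, and the join keeps only pairs that agree on the shared variables before dropping the duplicated columns.

def and_join(vars1, rel1, vars2, rel2):
    # vars1, vars2: lists of point-variable names; rel1, rel2: sets of point tuples.
    shared = [v for v in vars1 if v in vars2]        # common point variables
    extra = [v for v in vars2 if v not in vars1]     # variables contributed by R_2
    vars3 = vars1 + extra
    rel3 = set()
    for r1 in rel1:
        for r2 in rel2:
            # keep the pair only if it agrees on every shared variable
            if all(r1[vars1.index(v)] == r2[vars2.index(v)] for v in shared):
                rel3.add(tuple(r1) + tuple(r2[vars2.index(v)] for v in extra))
    return vars3, rel3

# Example: Line(v1, v2) & Line(v2, v3) over the lines AB, BC, CD
lines = {("A", "B"), ("B", "C"), ("C", "D")}
vars3, rel3 = and_join(["v1", "v2"], lines, ["v2", "v3"], lines)
# rel3 == {("A", "B", "C"), ("B", "C", "D")}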
The definition of & is more straightforward when the second relation is a quantitative relation. The operation is denoted as R_1 & R_A → R_3, where the point variables of R_A are a subset of those of R_1. We take the elements of R_1 and substitute them into R_A at the corresponding point variables to construct algebraic constraints. The set of all elements that satisfy these algebraic constraints forms a new geometric relation R_3, as shown in Eq. <ref>.
R_3(v^(1)_1, v^(1)_2, …, v^(1)_n) = {r | r ∈ R_1, R_A(r) = 1}
| corresponds to the logical operation OR and is denoted as R_1 | R_2 → R_3, as shown in Eq. <ref>. | is commonly nested together with & in practical applications.
R_3(v^(1)_1, v^(1)_2, …, v^(1)_n) = {r | r ∈ R_1 ∪ R_2}
∼ corresponds to the logical operation NOT and is denoted as ∼ R_1 → R_3, as shown in Eq. <ref>. E is defined as all possible elements of a relation, specifically all permutations of known points that conform to the structure of V_1.
R_3(v^(1)_1, v^(1)_2, …, v^(1)_n) = {r | r ∈ E - R_1}
GPL satisfies the commutative, associative, and distributive laws, as demonstrated in App. <ref>. The nested use of GPL connectives with geometric and quantitative relations provides powerful expressive capability and can be employed for the formalization of theorems, as illustrated in App. <ref>.

§.§ Geometry Formal Language
Formal languages are categorized into the geometry definition language (GDL) and the condition declaration language (CDL). The former is used to define the entities, attributes, theorems, and other elements of a geometry formal system, while the latter is employed to declare the known conditions and problem-solving objectives of geometric problems.
GDL comprises the predicate definition language and the theorem definition language. The predicate definition language defines different types of geometric relations and geometric properties. A typical predicate definition statement includes the name of the geometric relation, its point variables, existential constraints on entities, format validity constraints, and automatic extensions. The solver reads and interprets the predicate definition language to ensure the legitimacy of the input problem conditions.
CDL consists of three main parts: construction statements, condition statements, and goal statements. Construction statements describe the TSI of the geometric problem's diagram, such as basic shapes, collinearity, and cocircularity. Condition statements are used to input the known conditions of geometric problems, including both geometric and algebraic relations. Goal statements declare the problem-solving objectives.
Both types of formal language share the same syntax format, which is similar to predicate logic. The fundamental concepts are predicates and terms. Predicates define categories of geometric knowledge, encompassing geometric relations and algebraic relations. Terms specify the specific content of this knowledge: for a geometric relation, the term is an ordered sequence of points; for an algebraic relation, the term is an expression composed of geometric attributes, operators, free variables, and numbers. Functions are mappings from geometric relation terms to algebraic relation terms. Through such mappings, ordered sequences of points can be used to represent quantitative relations, unifying the representation formats of geometric and quantitative relations. Examples of the formal language can be found in App. <ref>, App. <ref>, and App. <ref>.

§ GEOMETRY FORMAL SYSTEM: FORMALGEO
Guided by GFT, we have constructed the geometry formal system FormalGeo. This formal system comprises 88 predicates and 196 theorems (see App. <ref> and App.
<ref>) and can represent, verify, and solve geometric problems ranging from SAT-level to IMO-level. Based on FormalGeo, we annotated the formalgeo7k and formalgeo-imo datasets.
The geometry definition language is the concrete form of a geometry formal system, so designing the formal system amounts to designing the predicate definition language and the theorem definition language. This section introduces the design methodology of FormalGeo and provides an overview of the formalgeo7k and formalgeo-imo datasets. Using the simplified geometry knowledge graph presented in Fig. <ref> as a template, we formulate the predicate definition language and theorem definition language of FormalGeo.

§.§ Predicate Definition Language
Structure predicates are used to describe the TSI of geometric figures. FormalGeo comprises three fundamental structure predicates: Shape, Collinear, and Cocircular. In the problem-solving initialization phase, the solver executes the topological construction method based on the recognized structure predicate statements, automatically expanding them into six basic entities: Point, Line, Angle, Polygon, Arc and Circle. Basic entities are a more intuitive representation of the TSI of geometric figures and serve as the core of various geometric and algebraic relations. Structure predicates and basic entities together are referred to as construction predicates, which describe the TSI of the figures.
By applying additional constraints to basic entities, general entities are obtained. Various geometric entities are interconnected, forming entity relations. Properties describe the MI of entities and entity relations and, in conjunction with free variables, operators, and real numbers, establish quantitative relations. General entities, entity relations, and properties are collectively referred to as custom predicates. Fig. <ref> illustrates the hierarchical structure and extension relationships among the geometric predicates. The solver can automatically extend conditions based on the extension rules defined in the predicate definition language. The extension rules follow the principle that construction predicates are extended only by other construction predicates, and custom predicates are extended only by other custom predicates.

§.§ Theorem Definition Language
Geometric theorems in the formal system are described using geometric predicate logic and consist of premises and conclusions. Based on the characteristics of their premises and conclusions, geometric theorems can be broadly categorized into three types: properties, definitions, and determinations. If the premise of a theorem contains only one geometric relation, the theorem is a property or a definition; definitions are considered common knowledge and are automatically invoked and extended by the solver, while properties require explicit invocation. If the conclusion of a theorem contains only one geometric relation, the theorem serves as a determination for that geometric relation.
As previously mentioned, geometric theorems are rules for converting various geometric knowledge, and these rules exhibit a hierarchical structure, as depicted in Fig. <ref>, where each arrow represents a method of theorem definition. When defining theorems, we can directly combine multiple predicates to create a determination theorem, as shown in Eq. <ref>, or we can break it down into several theorems, as shown in Eq. <ref>.
Different ways of defining theorems affect the speed of theorem application in the problem-solving process, so it is essential to explore the most suitable way to define theorems.
A & a & b & c → D
A & a → B, B & b → C, C & c → D
We introduce the concept of the abstract hierarchy of theorems to describe the level of structure in theorem definitions. The abstract hierarchy of a theorem, denoted as K, is the minimum number of theorems required to derive a higher-level predicate from lower-level predicates. Along the red paths in Fig. <ref>, we have K_A → B = 1, K_A → C = 2, and K_A → D = 3.
Theorems are intended to facilitate the solving of problems. We prioritize theorem definitions based on solving time, temporarily disregarding other factors such as readability. For search-based problem-solving algorithms, the solving time T is influenced by the length of the theorem sequence d, the number of theorems N in the theorem library, and the average application time t̅ of a single theorem. This is stated more specifically in Eq. <ref>; here K is directly proportional to d and inversely proportional to N and t̅.
T ∝ f(d, N, t̅) ∝ (N + N^2 + … + N^d)t̅
In our practical observations, we have found an approximate inverse relationship between T and K. When defining theorems, FormalGeo therefore tends to favor a higher abstract hierarchy. We leave more detailed comparative experiments and mechanistic analyses for future work.

§.§ Datasets
Most existing datasets for geometry problem-solving suffer from the following issues:
1. Limited data volume or non-open-source availability.
2. Lack of annotations, or incomplete and low-quality annotations.
3. Absence of formalization theory support, resulting in incoherent and inconsistent formal systems.
4. Low scalability: defining new predicates and theorems requires modifications to the solver's code.
5. Low difficulty level of the problems.
To address these issues, we annotated formalgeo7k and formalgeo-imo.
Our data are collected from various sources, including Geometry3k <cit.>, GeoQA <cit.>, GeoQA+ <cit.>, and online resources. We carefully curated, classified, deduplicated, and standardized the problem statements. The creation of formalgeo7k involved 16 trained master's students over a period of around 13 weeks; the creation of formalgeo-imo involved 4 trained master's students over a period of around 1 week. Excluding the time spent on collaboration and dataset allocation, annotating the datasets took approximately 1000 person-hours. formalgeo7k comprises 6,981 geometric problems, each accompanied by a natural language description, a geometric diagram, formal language annotations, and solution theorem sequence annotations. An annotated problem is illustrated in Fig. <ref>. The problem-solving process can be represented as a hypertree with conditions as hypernodes and theorems as hyperedges; the solution theorem sequence is a path from the root node (known conditions) to a leaf node (the problem-solving objective). By selecting any intermediate node along this path as the problem-solving objective, we can generate new problems, allowing us to expand the problem count to 133,818. formalgeo-imo is constructed to the same standards but with more challenging problems.
We use the length of the theorem sequence required to solve a problem as a rough metric of problem difficulty.
All annotated and expanded problems have been verified by the solver, and their average solution times as a function of problem difficulty are also shown in Fig. <ref>. The number of problems with a difficulty level of 15 or higher in formalgeo7k is quite small, leading to significant fluctuations; the same holds for formalgeo-imo. After data augmentation, the datasets exhibit a larger scale and a smoother difficulty curve. In general, more challenging problems require longer solving times.

§ GEOMETRY PROBLEM SOLVER: FGPS
Guided by GFT, we have constructed the geometric problem solver FGPS. Any geometric formal system designed according to GFT, and any geometric problem that adheres to the formal language syntax, can be input into FGPS for verification and solution. In this section, we introduce the implementation of the core solving engine; detailed descriptions of the solver's structure and other functionalities can be found in App. <ref>.

§.§ GPL Executor
The process of solving a geometric problem can be represented as a sequence of theorem applications. Theorems are defined using GPL, so the problem-solving process within the solver is essentially the execution of GPL. GPL statements can consist of multiple logical connectives, geometric relations, and quantitative relations nested together. The application process can be divided into the following phases.
In the GPL parsing phase, the solver expands a complex GPL statement into disjunctive normal form (DNF) using the distributive law. Each simple conjunction represents a branch of the theorem. This not only meets the requirements of backward reasoning and facilitates the generation of sub-goals, but also speeds up theorem execution by skipping irrelevant branches (a minimal sketch of this expansion is given below).
In the GPL ordering phase, for each theorem branch the solver adjusts the positions of geometric relations and quantitative relations within the simple conjunction according to the commutative law. The guiding principles for this adjustment are: 1. transform relation composition into geometric constraints; 2. move geometric constraints forward; 3. move algebraic constraints backward. This helps filter out geometric relation elements that do not satisfy the constraints, preventing the combinatorial explosion caused by Cartesian product operations, and also reduces the number of algebraic equations to be solved, thereby improving theorem application speed.
In the GPL execution phase, the solver reads the geometric and quantitative relations sequentially and performs relational inference (Eq. <ref> ∼ <ref>) in the order of their appearance.
The GPL execution process can be illustrated with an example. Suppose we have a theorem defined as shown in Eq. <ref>, which includes five geometric relations R_1(v_1, v_2), R_2(v_2, v_3), R_3(v_2), R_4(v_2, v_3) and R_5(v_2), and one quantitative relation R_A(v_1, v_2).
R_1 & (R_2 | (∼ R_3 | R_A) & R_4 & R_5)
During the GPL parsing phase, this is expanded into a DNF according to the distributive law, as shown in Eq. <ref>. The DNF consists of three simple conjunctions, each serving as a theorem branch.
R_1 & R_2 | R_1 & ∼ R_3 & R_4 & R_5 | R_1 & R_A & R_4 & R_5
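As a rough illustration of the parsing phase, the following Python sketch expands a nested and/or expression into its DNF branches. The representation of GPL statements as nested tuples is our own simplification, not the FGPS data structure; negated relations such as ∼R_3 are treated as atoms.

def to_dnf(expr):
    # expr is either an atom (a relation name such as "R1" or "~R3")
    # or a tuple ("and", e1, e2) / ("or", e1, e2).
    if isinstance(expr, str):
        return [[expr]]                      # a single one-literal branch
    op, left, right = expr
    if op == "or":                           # branches are simply concatenated
        return to_dnf(left) + to_dnf(right)
    if op == "and":                          # distribute & over |
        return [l + r for l in to_dnf(left) for r in to_dnf(right)]
    raise ValueError("unknown connective: " + op)

# R_1 & (R_2 | (~R_3 | R_A) & R_4 & R_5)
expr = ("and", "R1",
        ("or", "R2",
         ("and", ("and", ("or", "~R3", "RA"), "R4"), "R5")))
print(to_dnf(expr))
# [['R1', 'R2'], ['R1', '~R3', 'R4', 'R5'], ['R1', 'RA', 'R4', 'R5']]

The three branches produced match the DNF shown in Eq. <ref>.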
In the GPL ordering phase, take the branch R_1 & R_A & R_4 & R_5 as an example. The solver adjusts the order of its statements according to the commutative law, resulting in the form shown in Eq. <ref>.
R_1 & R_5 & R_4 & R_A
In the GPL execution phase, the GPL statements are read and executed in order, as shown in Eq. <ref>.
R_1 & R_5 & R_4 & R_A → R_1,5 & R_4 & R_A → R_1,5,4 & R_A → R_1,5,4,A

§.§ Minimum Dependency Equations
The known conditions of a geometric problem can be categorized into geometric and quantitative relations. Quantitative relations are eventually represented as a set of algebraic equations or inequalities. When an algebraic constraint is applied during GPL execution, its satisfaction is checked against the known algebraic equations and inequalities of the problem.
An algebraic constraint can be transformed into an algebraic expression a, yielding the target equation g - a. Among the known equations X of the problem, those relevant to g - a are selected to construct the target equation group G, which is then solved. If g = 0 is obtained as a solution, the algebraic constraint is satisfied. Typically, only a few equations in X are related to g - a; this subset of equations is referred to as the minimum dependency equations.
Equation solving accounts for the majority of the time spent in the entire geometric problem-solving process, so accelerating it is crucial. To this end, we propose a method for constructing the minimum dependency equations. Without loss of generality, we examine the intermediate process of constructing G. At step t (t = 1, 2, …), G_t contains t equations and its set of unknowns is denoted as M_t. We select a candidate equation x_t from X to add to G_t in a way that increases the likelihood of obtaining a solution for the unknown g; the set of unknowns of x_t is denoted as B_t. This process is repeated until |M_t| = t or no new equation can be added.
|B_t ∩ M_t| > 0
min(|B_t - M_t|)
max(|B_t ∩ M_t|)
The selection criteria for x_t are as follows:
1. B_t must intersect M_t, as shown in Eq. <ref>; if they do not intersect, x_t is unrelated to G_t.
2. Subject to Eq. <ref>, adding x_t should introduce as few new unknowns as possible, as shown in Eq. <ref>. The closer t and |M_t| are, the higher the likelihood of solving G_t. In the initial stage of constructing G, which contains only g - a, we have |M_1| - 1 ≥ 1, and the number of equations added at each step is fixed at one, so to minimize the gap between t and |M_t| we should introduce as few new unknowns as possible when selecting x_t.
3. Subject to Eq. <ref> and Eq. <ref>, the added equation should contain more unknowns, as shown in Eq. <ref>; these additional unknowns are often associated with other equations in G_t, providing more options for simplifying G_t. If several equations satisfy these conditions, any of them can be chosen at random.
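A minimal Python sketch of this greedy selection follows; representing each equation only by its set of unknowns is our own simplification rather than the FGPS data structure, and the function name is illustrative.

def min_dependency_set(target_unknowns, candidates):
    # target_unknowns: unknowns of the target equation g - a.
    # candidates: list of (equation_id, set_of_unknowns) taken from X.
    selected, M = [], set(target_unknowns)       # M_t grows as equations are added
    remaining = list(candidates)
    while len(selected) + 1 < len(M):            # stop once t reaches |M_t|
        best = None
        for eq_id, B in remaining:
            if not B & M:                        # criterion 1: must touch M_t
                continue
            key = (len(B - M), -len(B & M))      # criterion 2, then criterion 3
            if best is None or key < best[0]:
                best = (key, eq_id, B)
        if best is None:                         # no related equation left
            break
        _, eq_id, B = best
        selected.append(eq_id)
        M |= B
        remaining = [c for c in remaining if c[0] != eq_id]
    return selected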
§ EXPERIMENTS
We conducted experiments on formalgeo7k, comparing different search methods and strategies in terms of problem-solving success rate, solution time, and the number of search steps.
Forward search (FW) starts from the known conditions of the problem and continuously applies theorems to derive new conditions until the goal is achieved. Backward search (BW), on the other hand, begins with the problem-solving goal, expands it into multiple sub-goals, and repeats this process until all sub-goals are resolved. A detailed description of the search algorithms can be found in App. <ref>.
The search-based methods construct a search tree during the problem-solving process, and various strategies can be used to traverse it. Breadth-first search (BFS) expands the top-level nodes of the search tree first and then proceeds layer by layer into the depth. Depth-first search (DFS) recursively selects nodes from shallow to deep. Random search (RS) randomly selects an expandable node at each expansion stage. Beam search (BS) selects k nodes at each expansion stage and can be viewed as a trade-off between BFS and RS.
We ran the experiments on two Intel i9-10900X, one AMD Ryzen 9 5900X, and one AMD Ryzen 9 7950X machines, running the search algorithms in multiple processes while maintaining a CPU utilization of 80%. The maximum search depth was set to 15 and the beam size to 20. The total duration of the experiments was approximately 3 days. With a timeout of 300 seconds per problem, the best problem-solving success rate was approximately 30%; with the timeout increased to 600 seconds, the results are as follows.
An overview of the search-based automated problem-solving results is presented in Tab. <ref>. The highest problem-solving success rate, 39.708%, was achieved by forward random search. Most of the remaining failures were due to timeouts; as timeout settings are extended and computational resources increase, the proportion of timed-out problems is expected to decrease. The number of unsolved problems under beam search was significantly higher than under the other strategies, because when selecting k branches beam search occasionally discards the correct branch. Other contributing factors may include code bugs, equation-solving timeouts, and the omission of theorems related to trigonometric functions.
According to the length of the theorem sequence required for solving, we roughly categorize problem difficulty into six levels, denoted as l_1 (length ≤ 2), l_2 (3 ≤ length ≤ 4), l_3 (5 ≤ length ≤ 6), l_4 (7 ≤ length ≤ 8), l_5 (9 ≤ length ≤ 10), and l_6 (length ≥ 11), with corresponding problem counts of 2407, 1898, 1247, 824, 313 and 292. The success rates for problems of varying difficulty are presented in Tab. <ref>. As problem difficulty increases, the success rate declines rapidly. This is because search-based problem-solving methods exhibit exponential growth in solving time as the length of the theorem sequence increases, often resulting in timeouts before the goal is reached. For problems of lower difficulty, backward search demonstrates a relatively higher success rate, while forward search performs better on more challenging problems.
The efficiency of a problem-solving algorithm can be measured by its search time and search steps. The experimental results of the search-based automated problem-solving algorithms on formalgeo7k are presented in Fig. <ref>.
In terms of average search time, backward search is slightly better than forward search overall. For solved problems, the search time is roughly proportional to problem difficulty when the difficulty is low; however, as the difficulty increases further, the search time for both forward and backward search decreases.
On the one hand, this is because very few high-difficulty problems are solved successfully, leading to significant statistical error. On the other hand, our difficulty levels consider only the length of the solution theorem sequence, not the time required to execute each theorem; the solved high-difficulty problems are precisely those that require less solution time. For unsolved problems, the search time is roughly proportional to problem difficulty.
Comparing the search strategies, in forward search BFS has a slightly lower success rate than RS but takes the most time, while BS has the lowest success rate but the least time consumption. For forward search, RS is therefore the optimal strategy: it has the highest success rate and only slightly higher time consumption than BS, which has the lowest success rate. In backward search, BFS is the optimal strategy, with the highest success rate and only slightly higher time consumption than DFS.
We observe a significant difference in the solution time of the BS strategy in backward search between solved and unsolved problems. This difference may be due to a characteristic of backward search: even if possible solution branches were discarded in earlier steps, they may be reconstructed in later search steps. Therefore, for BS in backward search, discarding potential solution branches does not lead to solution failure but does lengthen the search time.
Regarding search steps, forward search statistics are based on the number of nodes, while backward search statistics are based on the number of super nodes, so they cannot be compared directly. The search step count in forward search is positively correlated with problem difficulty, while in backward search it is negatively correlated. The backward search result is counterintuitive; this could be because, for higher-difficulty problems, the super nodes in backward search may contain more nodes, increasing the time spent traversing a single super node and reducing the total number of super nodes traversed. Additionally, the search step count for unsolved problems in backward search is significantly higher than the average for solved problems, because backward search is less likely to halt than forward search and continues searching even if it misses a potential solution branch.
Comparing the strategies, DFS has the highest search step count, BS the lowest, and RS and BFS approximately the same average. For forward search, RS is still the optimal strategy because it has the highest success rate and its search step count is only slightly higher than that of BS. Backward search does not exhibit a clearly superior strategy.

§ CONCLUSION
We have introduced GFT, which includes geometry ontology and geometry representation theory, to guide the formalization of geometric problems. Building upon GFT, we have developed the geometry formal system FormalGeo and constructed the solver FGPS. Furthermore, we have annotated the geometric problem-solving datasets formalgeo7k and formalgeo-imo. Experiments have demonstrated the correctness and utility of GFT.
We have also analyzed the success rate and efficiency of the solver's automatic problem-solving algorithms.
In the future, we plan to enhance GFT to make it more comprehensive and endow it with stronger representational capabilities. We also intend to further improve FormalGeo by expanding the types of predicates and theorems, and to continue annotating the formalgeo-imo dataset. Additionally, we aim to apply deep learning techniques to search-tree pruning for the automatic solving of IMO-level geometric problems.

§ ACKNOWLEDGEMENT
Thanks to all researchers involved in the academic discussions and dataset annotation. The research was supported by the Geometric Cognitive Reasoning Group of Shanghai University (GCRG, SHU).

§ CONTRIBUTIONS
Xiaokai Zhang: Conceptualization, Methodology, Coding, Dataset annotation, Writing – original draft & review & editing. Na Zhu: Conceptualization, Methodology, Dataset annotation, Writing – review & editing. Yiming He, Jia Zou: Conceptualization, Methodology, Dataset annotation. Qike Huang, Xiaoxiao Jin, Yanjun Guo, Chenyang Mao, Zhe Zhu, Dengfeng Yue, Fangzhen Zhu, Yang Li, Yifan Wang, Yiwen Huang, Runan Wang, Cheng Qin: Dataset annotation. Zhenbing Zeng, Shaorong Xie, Xiangfeng Luo: Writing – review. Tuo Leng: Supervision, Funding acquisition, Conceptualization, Methodology, Writing – review & editing.

§ EXAMPLES
Guided by the GFT, we have developed the geometry formal system FormalGeo. In this section, we provide several examples related to the design of FormalGeo predicates and theorems to help readers better understand the GFT. App. <ref> and App. <ref> correspond to the application of geometry ontology, while App. <ref>, App. <ref>, and App. <ref> correspond to the application of geometry representation theory.

§.§ Geometry ontology domain
The geometry ontology domain comprehensively summarizes and categorizes geometric knowledge. It is divided into four quadrants along two dimensions: dynamic versus static, and number versus shape. Each quadrant encompasses mappings for the axiomatic system, the formal system, and the solver, as depicted in Tab. <ref>.

§.§ Geometry knowledge graph
The geometry knowledge graph is an extension of the geometry ontology domain; it structurally presents the design process of the predicate definition language and the theorem definition language, preventing omissions and redundancies. FormalGeo comprises 88 predicates and 196 theorems, and its geometry knowledge graph is illustrated in Fig. <ref>.

§.§ Predicate definition language
FormalGeo comprises 88 predicates, including 25 fundamental predicates (Tab. <ref>) built into the solver, together with 12 entities (Tab. <ref>), 30 entity relationships (Tab. <ref>), and 21 attributes (Tab. <ref>) defined using the predicate definition language. The structural relationships between predicates are depicted in Fig. <ref>. The detailed statements for defining a predicate are shown in Tab. <ref>, including the predicate name and point variable declaration, validity check declaration, multiple representations, and automatic expansion; when defining attributes, a symbolic form declaration is also included.

§.§ Theorem definition language
Theorems are defined using GPL and comprise two parts, premises and conclusions, as shown in Tab. <ref>. FormalGeo encompasses 196 theorems, and their structural relationships are illustrated in Fig. <ref>.

§.§ Condition declaration language
We can use CDL to transform the description of geometric problems into a formal language.
CDL consists of three main parts: construction statements, condition statements, and goal statements. An example of a formalized problem is illustrated in Fig. <ref>, with theorem_seqs denoting the annotated problem-solving theorem sequence. We abstract the geometry problem-solving process as a hypertree, where the root nodes represent known conditions (labeled in green in Fig. <ref>), the hyperedges represent geometric theorems, and the problem-solving process is a path from the root node to the target leaf node (labeled in yellow in Fig. <ref>).

§ CONSISTENCY THEORY OF FORMAL SYSTEMS
A formal system is an abstraction of the real world that simplifies and clarifies problems or concepts by removing certain details and complexities. Its primary objective is to provide a precise, consistent, and reliable method for studying and solving problems while eliminating ambiguity and uncertainty. By formalizing problems, we can apply mathematical, logical, and computational methods to a wide range of real-world issues.
In the real world, a concept c, which could be anything, a property of things, a relationship between things, etc., can be represented by a symbol s^(c) within the formal system. A single concept may have multiple symbol representations within the formal system, denoted as the set R_c. The rules governing the conversion of concepts in the real world are represented as corresponding rules in the formal system; these rules define relationships and operations between symbols, determining the methods of inference and computation.
When designing a formal system, it is essential to ensure the consistency of static representations. For any symbol representation s^(c)_i of a concept c, there should exist an operation multi that obtains the symbol set R_c, as shown in Eq. <ref>, and the mapping between c and R_c should be reversible, as indicated in Eq. <ref>.
multi(s^(c)_i) = R_c, s^(c)_i ∈ R_c
f(c) = R_c, f^-1(R_c) = c
Additionally, the design of a formal system should ensure the consistency of dynamic processes. For any rule designed to govern the conversion between concepts c_a and c_b in the real world, it should hold that, for any symbol representation s^(c_a)_i of c_a, applying the rule yields, and can only yield, a symbol representation s^(c_b)_j of c_b:
c_a → c_b  ⟹  s^(c_a)_i → s^(c_b)_j, s^(c_a)_i ∈ R_c_a, s^(c_b)_j ∈ R_c_b

§ TOPOLOGICAL MAPPING METHOD
When the topological mapping method is used to transform the TSI of a diagram into a formal representation, the fundamental transformations of the diagram can be defined as operations on its TSI formal representation. The formal representation of the TSI of a geometric diagram is denoted as (p_1, p_2, …, p_n). A clockwise rotation of the diagram can be defined as moving the first point in the sequence to the end of the sequence, as shown in Eq. <ref>. A reflection can be defined as reversing the order of the sequence, as shown in Eq. <ref>. Translation, shearing, and scaling do not alter the TSI of the diagram, as shown in Eq. <ref>.
rotate((p_1, p_2, …, p_n)) = (p_2, …, p_n, p_1)
reflect((p_1, p_2, …, p_n)) = (p_n, p_n-1, …, p_1)
linear((p_1, p_2, …, p_n)) = (p_1, p_2, …, p_n)

§ TOPOLOGICAL CONSTRUCTION METHOD
The operation ⊕ defined by Eq. <ref> and Eq.
<ref> is meaningful only when the two shapes are non-overlapping simple closed shapes that share adjacent edges (i.e., the number of common points is at least 2).
The formal representation of composite diagrams, together with ⊕, forms a semigroup, satisfying closure, the commutative law and the associative law. The formal representation of a composite diagram is defined as the collection of the formal representations of all its constituent shapes, so closure is evidently satisfied. Next, we prove that the operation ⊕ satisfies the commutative and associative laws.
Consider diagrams A, B, and C, whose TSI formal representations are the sets R_a, R_b, and R_c, respectively. Diagrams A, B, and C contain l, m, and n points, respectively; A and B share x (x ≥ 2) common points, and B and C share y (y ≥ 2) common points. Their TSI formal representations can be written as in Eq. <ref>, Eq. <ref> and Eq. <ref>.
P_a = (p^(a)_1, …, p^(a)_s_1, p^(a,b)_i, …, p^(a,b)_i+x, p^(a)_s_2, …, p^(a)_l)
P_b = (p^(b)_1, …, p^(b)_s_1, p^(a,b)_i+x, …, p^(a,b)_i, p^(b)_s_2, …, p^(b)_s_3, p^(b,c)_j, …, p^(b,c)_j+y, p^(b)_s_4, …, p^(b)_m)
P_c = (p^(c)_1, …, p^(c)_s_1, p^(b,c)_j+y, …, p^(b,c)_j, p^(c)_s_2, …, p^(c)_n)
We begin by proving the commutative law. As demonstrated in Eq. <ref> and Eq. <ref>, P_a ⊗ P_b = rotate^(i)(P_b ⊗ P_a), where i = |{p^(b)_1, …, p^(b)_s_1, p^(a,b)_i+x, p^(a)_s_2, …, p^(a)_l}|. The commutative law then follows, as shown in Eq. <ref>.
P_a ⊗ P_b = (p^(a)_1, …, p^(a)_s_1, p^(a,b)_i, p^(b)_s_2, …, p^(b)_s_3, p^(b,c)_j, …, p^(b,c)_j+y, p^(b)_s_4, …, p^(b)_m, p^(b)_1, …, p^(b)_s_1, p^(a,b)_i+x, p^(a)_s_2, …, p^(a)_l)
P_b ⊗ P_a = (p^(b)_1, …, p^(b)_s_1, p^(a,b)_i+x, p^(a)_s_2, …, p^(a)_l, p^(a)_1, …, p^(a)_s_1, p^(a,b)_i, p^(b)_s_2, …, p^(b)_s_3, p^(b,c)_j, …, p^(b,c)_j+y, p^(b)_s_4, …, p^(b)_m)
R_a ⊕ R_b = {rotate^(i)(P_a ⊗ P_b) | i = 1, 2, …, l+m-2x+2} = {rotate^(i)(P_b ⊗ P_a) | i = 1, 2, …, l+m-2x+2} = R_b ⊕ R_a
Next, we prove the associative law. As demonstrated in Eq. <ref> and Eq. <ref>, (P_a ⊗ P_b) ⊗ P_c = P_a ⊗ (P_b ⊗ P_c). The associative law then follows, as shown in Eq. <ref>.
(P_a ⊗ P_b) ⊗ P_c = (p^(a)_1, …, p^(a)_s_1, p^(a,b)_i, p^(b)_s_2, …, p^(b)_s_3, p^(b,c)_j, …, p^(b,c)_j+y, p^(b)_s_4, …, p^(b)_m, p^(b)_1, …, p^(b)_s_1, p^(a,b)_i+x, p^(a)_s_2, …, p^(a)_l) ⊗ (p^(c)_1, …, p^(c)_s_1, p^(b,c)_j+y, …, p^(b,c)_j, p^(c)_s_2, …, p^(c)_n) = (p^(a)_1, …, p^(a)_s_1, p^(a,b)_i, p^(b)_s_2, …, p^(b)_s_3, p^(b,c)_j, p^(c)_s_2, …, p^(c)_n, p^(c)_1, …, p^(c)_s_1, p^(b,c)_j+y, p^(b)_s_4, …, p^(b)_m, p^(b)_1, …, p^(b)_s_1, p^(a,b)_i+x, p^(a)_s_2, …, p^(a)_l)
P_a ⊗ (P_b ⊗ P_c) = (p^(a)_1, …, p^(a)_s_1, p^(a,b)_i, …, p^(a,b)_i+x, p^(a)_s_2, …, p^(a)_l) ⊗ (p^(b)_1, …, p^(b)_s_1, p^(a,b)_i+x, …, p^(a,b)_i, p^(b)_s_2, …, p^(b)_s_3, p^(b,c)_j, p^(c)_s_2, …, p^(c)_n, p^(c)_1, …, p^(c)_s_1, p^(b,c)_j+y, p^(b)_s_4, …, p^(b)_m) = (p^(a)_1, …, p^(a)_s_1, p^(a,b)_i, p^(b)_s_2, …, p^(b)_s_3, p^(b,c)_j, p^(c)_s_2, …, p^(c)_n, p^(c)_1, …, p^(c)_s_1, p^(b,c)_j+y, p^(b)_s_4, …, p^(b)_m, p^(b)_1, …, p^(b)_s_1, p^(a,b)_i+x, p^(a)_s_2, …, p^(a)_l)
(R_a ⊕ R_b) ⊕ R_c = {rotate^(i)((P_a ⊗ P_b) ⊗ P_c) | i = 1, 2, …, l+m+n-2x-2y+4} = {rotate^(i)(P_a ⊗ (P_b ⊗ P_c)) | i = 1, 2, …, l+m+n-2x-2y+4} = R_a ⊕ (R_b ⊕ R_c)
The above properties enable us to construct a composite diagram from any two of its simple components without considering the order of construction; they make the topological construction method an order-independent diagram construction method.
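For illustration, a minimal Python sketch of the rotate operation and the two-step ⊗/⊕ combination follows. The helper names and the assumption that the shared point run appears contiguously (in order in P_a, reversed in P_b, and not wrapping past the end of P_a) are our own simplifications; the FGPS implementation in Alg. <ref> may differ.

def rotate(p):
    return p[1:] + p[:1]

def multi(p):
    # all rotations of one representation, i.e. the full set R of the diagram
    out, q = [], p
    for _ in range(len(p)):
        out.append(q)
        q = rotate(q)
    return out

def combine(pa, pb, shared):
    # shared: the common points in the order they appear in pa
    first, last = shared[0], shared[-1]
    ia = pa.index(first)
    a_head, a_tail = pa[:ia + 1], pa[ia + len(shared):]
    ib = pb.index(last)                      # the reversed run starts here in pb
    n = len(pb)
    b_part = tuple(pb[(ib + len(shared) + j) % n] for j in range(n - len(shared)))
    return a_head + b_part + (last,) + a_tail

# Triangles ABC and CBD glued along edge BC give the quadrilateral ABDC:
P_c = combine(("A", "B", "C"), ("C", "B", "D"), ("B", "C"))
R_c = multi(P_c)   # all rotations of P_c, corresponding to R_a ⊕ R_b
# P_c == ("A", "B", "D", "C")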
When formalizing problems and implementing the topological construction method, we therefore do not need to be concerned about the order of construction statements. We have implemented the method in FGPS; the algorithm description can be found in Alg. <ref>, where the multi function extends a single TSI formal representation of a diagram to the set of all its formal representations, as defined in Eq. <ref>, and the combine function realizes the combined representation of two simple diagrams, as defined in Eq. <ref>.
The computational cost of the algorithm per iteration is shown in Tab. <ref>, where n represents the number of simple closed diagrams that make up the complex diagram. At most n-1 combinations of simple figures are needed to obtain the complex diagram. The number of combined diagrams generated after each iteration depends on the number of edges of the simple diagrams; for example, when n triangles are combined, the number of quadrilaterals obtained is approximately 3n/2, denoted as k_i · n. The time complexity of the implemented method is given by Eq. <ref>.
n^2 + k_1 n^2 + … + k_n-2 n^2 = (1 + k_1 + … + k_n-2) n^2 ≈ O(n^3)

§ GEOMETRY PREDICATE LOGIC
The GPL operation & is, in fact, a more general form of the operation ⊕. ⊕ requires the two geometric relations involved to be formal TSI representations, and its result has a fixed form. & is not limited to geometric relations of the same type and can even operate across different types of relations, such as geometric and quantitative relations; the result of the operation is a set over point variables, which can be combined into specific structures as needed. In addition, GPL introduces | and ∼ to enhance its expressive power.
For the & operation, when the second relation is a quantitative relation, it is denoted as R_1 & R_A; the operation is only meaningful when the point variables of R_A are a subset of those of R_1. The | operation is only meaningful when the two relations involved have the same point variable structure. The & operation satisfies the commutative and associative laws, and the combination of & and | satisfies the distributive law. There are three relations, as shown in Eq. <ref>, Eq. <ref> and Eq. <ref>.
R_1(v^(1)_1, v^(1)_2, …, v^(1)_l)
R_2(v^(2)_1, v^(2)_2, …, v^(2)_m)
R_3(v^(3)_1, v^(3)_2, …, v^(3)_n)
In the subsequent proofs, for clarity, we temporarily omit the operation of removing duplicate variables, as defined in Eq. <ref>.
R_1 & R_2 = {(r^1_i, r^2_j) | (r^1_i, r^2_j) ∈ R_1 × R_2, r^1_i(v) = r^2_j(v), v ∈ V_1 ∩ V_2} = {(r^2_j, r^1_i) | (r^2_j, r^1_i) ∈ R_2 × R_1, r^1_i(v) = r^2_j(v), v ∈ V_1 ∩ V_2} = R_2 & R_1
(R_1 & R_2) & R_3 = {(r^1_i, r^2_j, r^3_k) | (r^1_i, r^2_j, r^3_k) ∈ (R_1 × R_2) × R_3, r^1_i(v) = r^2_j(v), v ∈ V_1 ∩ V_2, r^1_i(u) = r^3_k(u), u ∈ V_1 ∩ V_3, r^2_j(w) = r^3_k(w), w ∈ V_2 ∩ V_3} = {(r^1_i, r^2_j, r^3_k) | (r^1_i, r^2_j, r^3_k) ∈ R_1 × (R_2 × R_3), r^1_i(v) = r^2_j(v), v ∈ V_1 ∩ V_2, r^1_i(u) = r^3_k(u), u ∈ V_1 ∩ V_3, r^2_j(w) = r^3_k(w), w ∈ V_2 ∩ V_3} = R_1 & (R_2 & R_3)
R_1 & (R_2 | R_3) = {(r^1_i, r^2_j) | (r^1_i, r^2_j) ∈ R_1 × (R_2 ∪ R_3), r^1_i(v) = r^2_j(v), v ∈ V_1 ∩ (V_2 ∪ V_3)} = {(r^1_i, r^2_j) | (r^1_i, r^2_j) ∈ R_1 × R_2, r^1_i(v) = r^2_j(v), v ∈ V_1 ∩ V_2} ∪ {(r^1_i, r^2_j) | (r^1_i, r^2_j) ∈ R_1 × R_3, r^1_i(v) = r^2_j(v), v ∈ V_1 ∩ V_3} = (R_1 & R_2) | (R_1 & R_3)
The proofs of the commutative, associative, and distributive laws are given in Eq. <ref>, Eq. <ref> and Eq. <ref>.
We utilize the laws of the Cartesian product × to prove the laws of the &. The elements of the relation R are treated as unordered sets. Therefore, (r^1_i, r^2_j) and (r^2_j, r^1_i) are considered equivalent in Eq. <ref>. As mentioned earlier, | is meaningful only when the two relations involved have the same point variable structure. Hence, in Eq. <ref>, we have V_2 = V_3 = V_2 ∪ V_3. The aforementioned laws allow us to simplify GPL statements before their execution, speeding up the process.§ FGPSThe structure of FGPS can be divided into five main components: Main Logic Control, Core Solving Engine, Formal Language Parser, Data Loader, and AI Interface. The relationships between these modules are illustrated in Fig. <ref>.Main is control module of FGPS, invoking other modules to enable interactive problem-solving and automated problem-solving. The automated solving component implements both forward search and backward search, allowing for the configuration of various search strategies (breadth-first, depth-first, random, beam) and defining interfaces for AI-assisted searches.Engine is the core component of FGPS, responsible for parsing and executing GPL and consists of two sub-modules, GPL Executor for relational inference and Equation Solver for algebraic computation.Parser facilitates bidirectional conversion between formal language and machine language. It consists of 3 sub-modules. GDL Parser parses GPDL and GTDL into machine language, enabling custom configuration of the Solver. CDL Parser parses the formal describing of problems into machine language for subsequent inference. Inverse Parser translates machine language back into formal language, facilitating the verification and checking of the solution process.Data preserves all details of the problem-solving process and comprises 2 sub-modules. The Problem module ensures the correctness and consistency of the problem input conditions, implementing automatic diagram construction, condition auto-expansion, and validity checks. The Condition module is responsible for data storage.AI Interface defines the interface for interaction between the AI system and FGPS. Both the AI Automatic Formalization and the AI Problem Solver can be seamlessly integrated with FGPS.Guided by GFT and modular design, FGPS boasts exceptional extensibility beyond its fundamental features like formal language parsing, GPL execution, human-readable problem-solving processes, and structured output. By invoking FGPS's core modules, we've developed both an interactive solver and a search-based problem solver. Next, we will introduce the forward search algorithm and the backward search algorithm.Forward search starts from the known conditions of the problem and continuously apply theorems to derive new conditions until the goal is achieved. The search process involves the construction of a search tree, with nodes representing sets of known conditions and edges denoting theorems, as depicted in Fig. <ref>. The description of the forward search algorithm is provided in Alg. <ref>. The function get_expandable() traverses the search tree based on pre-defined strategies (BFS, DFS, RS and BS) and returns nodes with the EXPANDABLE state. The function apply_theorem() applies the theorem associated with the current node and returns whether the problem solved. The function get_theorem_seqs() returns a list of theorems applied from the root node to the current node. 
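The following Python sketch outlines the forward search loop described above in its breadth-first variant; goal_reached and theorem.apply are hypothetical placeholders that stand in for the corresponding FGPS interfaces rather than the actual API.

from collections import deque

def forward_search(initial_conditions, goal_reached, theorems, max_nodes=10000):
    """Nodes are sets of known conditions, edges are theorem applications (BFS strategy)."""
    root = frozenset(initial_conditions)
    queue = deque([(root, [])])
    visited = {root}
    while queue and len(visited) <= max_nodes:
        conditions, seq = queue.popleft()            # a stack here would give depth-first search
        if goal_reached(conditions):
            return seq                               # theorem sequence from the root to the goal
        for theorem in theorems:
            new_facts = theorem.apply(conditions)    # conditions derivable by this theorem, if any
            if not new_facts:
                continue
            child = conditions | frozenset(new_facts)
            if child not in visited:
                visited.add(child)
                queue.append((child, seq + [theorem.name]))
    return None                                      # unsolved within the node budget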
The function expand(), guided by the known conditions of the current node, checks the list of applicable theorems and extends new nodes.Backward search (BW), on the other hand, begins with the problem-solving goal, expands it into multiple sub-goals, and repeats this process until all sub-goals are resolved. The search process involves the construction of a search tree, with nodes representing subgoals, hypernodes representing sets of subgoals, and edges representing theorems, as illustrated in Fig. <ref>. The description of the backward search algorithm is provided in Alg. <ref>. The function get_expandable() traverses the search tree based on pre-defined strategies, returning hypernodes with the EXPANDABLE state. The function node.check() updates the state of the superNode based on the known problem conditions, while the function super_node.check() updates its own state based on the states of its nodes. The function expand() extends the current goal into several subgoals based on the list of theorems. The function update() propagates the state update from child nodes to parent nodes, starting from the leaves and progressing up to the root. The function get_theorem_seqs() provides a list of theorems applied from the current node to the root node.We conducted experiments on the formalgeo7k. Search time and search step of different search methods and strategies can be found in Tab. <ref>. | http://arxiv.org/abs/2310.18021v5 | {
"authors": [
"Xiaokai Zhang",
"Na Zhu",
"Yiming He",
"Jia Zou",
"Qike Huang",
"Xiaoxiao Jin",
"Yanjun Guo",
"Chenyang Mao",
"Yang Li",
"Zhe Zhu",
"Dengfeng Yue",
"Fangzhen Zhu",
"Yifan Wang",
"Yiwen Huang",
"Runan Wang",
"Cheng Qin",
"Zhenbing Zeng",
"Shaorong Xie",
"Xiangfeng Luo",
"Tuo Leng"
],
"categories": [
"cs.AI"
],
"primary_category": "cs.AI",
"published": "20231027095512",
"title": "FormalGeo: The First Step Toward Human-like IMO-level Geometric Automated Reasoning"
} |
Machine Learning Infused Distributed Optimization for Coordinating Virtual Power Plant Assets Meiyi Li, Student Member, IEEE, Javad Mohammadi, Senior Member, IEEE2023-10-25 ===================================================================================================Amid the increasing interest in the deployment of Distributed Energy Resources (DERs), the Virtual Power Plant (VPP) has emerged as a pivotal tool for aggregating diverse DERs and facilitating their participation in wholesale energy markets. These VPP deployments have been fueled by the Federal Energy Regulatory Commission's Order 2222, which makes DERs and VPPs competitive across market segments. However, the diversity and decentralized nature of DERs present significant challenges to the scalable coordination of VPP assets.To address efficiency and speed bottlenecks, this paper presents a novel machine learning-assisted distributed optimization to coordinate VPP assets.Our method, named as (Learning to Optimize the Optimization Process for Multi-agent Coordination), adopts a multi-agent coordination perspective where each VPP agent manages multiple DERs and utilizes neural network approximators to expedite the solution search.The method employs a gauge map to guarantee strict compliance with local constraints, effectively reducing the need for additional post-processing steps. Our results highlight the advantages of , showcasing accelerated solution times per iteration and significantly reduced convergence times. The method outperforms conventional centralized and distributed optimization methods in optimization tasks that require repetitive and sequential execution. Alternating Direction Method of Multipliers (ADMM), distributed optimization, Virtual Power Plants (VPPs), Distributed Energy Resources (DERs), Learning to Optimize the Optimization Process (LOOP), Multi-agent Machine Learning, collaborative problem-solving § INTRODUCTION §.§ Motivation As global energy sectors transition towards sustainability, the role of Distributed Energy Resources (DERs) has become increasingly significant. However, the participation of DERs in competitive electricity markets remains a challenge <cit.>. While many DERs are capable of providing wholesale market services, they often individually fall short of the minimum size thresholds established by Independent System Operators (ISOs) and may not meet performance requirements <cit.>. As a solution to these challenges, Virtual Power Plants (VPPs) have emerged to aggregate diverse DERs, creating a unified operating profile for participation in wholesale markets and providing services to system operators <cit.>. Further promoting the aggregation of DERs, the Federal Energy Regulatory Commission's (FERC's) Order 2222, issued in September 2020, allowed DERs to compete on equal terms with other resources in ISO energy, capacity, and ancillary service markets <cit.>. The FERC regulatory advancement strengthens the position of DERs and VPPs in the market. Despite their promising potential, the massive, decentralized, diverse, heterogeneous, and small-scale nature of DERs poses significant challenges to traditional centralized approaches, especially in terms of computational efficiency and speed. Centralized controls for VPPs require global information from all DERs, making them susceptible to catastrophic failures if centralized nodes fail and potentially compromising the privacy of DER owners' information. 
To address these issues, there is a growing demand for efficient, scalable, distributed and decentralized optimization techniques. Our study aims to tackle these challenges and develop a solution that can efficiently harness the benefits of DERs, thereby unlocking the full potential of VPPs. §.§ Related Work §.§.§ VPP Functionalities and Objectives VPPs act as aggregators for a variety of DERs, playing a pivotal role in mitigating integration barriers between DERs and grid operations <cit.>. In what folows, we will highlight recent insights gained from extensive research conducted on strategies for coordinating DERs within VPPs. For instance, optimization schemes for coordinating DERs within VPPs can be customized to achieve various objectives including: *VPP's self financial and operational objectives: * Maximizing revenue from energy trading across different markets <cit.>.* Decreasing operational and maintenance costs of operating VPPs <cit.>.* Optimizing load curtailment <cit.> or energy exportation <cit.>.* Reducing end-user discomfort from joining demand response efforts <cit.>.* Narrowing the discrepancy between actual power consumption and predetermined set points and schedules <cit.>.* Mitigating financial burden of operational risks <cit.>.* Contributing to system-level initiatives:* Curtailing greenhouse gas emissions <cit.>.* Advancing the reliability and resilience of the overall energy system <cit.>.§.§.§ Shortcomings of Centralized Coordination MethodsToday's centralized optimization methods are not designed to cope with decentralized, diverse, heterogeneous, and small-scale nature of DERs. Recent studies have shown that integrating DERs at scale may adversely impact today's tools operation's efficiency and performance speed <cit.>.Major challenges of centralized management strategies include: *Scalability issues become more pronounced with the addition of more DERs to the network, resulting in increased computational demands due to the management of a growing set of variables * Security and privacy risks as centralized decision-making models requires comprehensive data from all DERs<cit.>. * Severe system disruptions resulting from dependence on a single centralized node, as a failure in that node may pose a significant operational risk. * Significant delays in the decision-making process due to the strain on the communication infrastructure, a situation worsened by continuous data communication and the intermittent nature of DERs. * Adaptability challenges as the centralized systems struggle to provide timely responses to network changes. This limitation stems from their requirement for a comprehensive understanding of the entire system to make informed decisions <cit.>. * Logistical and political challenges given the diverse and intricate nature of DERs within a comprehensive centralized optimization strategy that spans across various regions and utilities <cit.>. In response to these challenges, there is a growing demand and interest in the development and implementation of efficient, scalable, and decentralized optimization approaches. §.§.§ State-of-the-art in Distributed CoordinationDistributed coordination methods organize DERs into clusters, with each one treated as an independent agent with capabilities for communication, computation, data storage, and operation, as demonstrated in previous work <cit.>. A distributed configuration enables DERs to function efficiently without dependence on a central controller. 
Distributed coordination paradigms, which leverage the autonomy of individual agents, have played a crucial role in the decentralized dispatch of DERs, as highlighted in recent surveys <cit.>. Among the numerous distributed optimization methods proposed in power systems, the Alternating Direction Method of Multipliers (ADMM) has gained popularity for its versatility across different optimization scenarios. Recent examples include a distributed model to minimize the dispatch cost of DERs in VPPs, while accounting for network constraints <cit.>. Another noteworthy contribution is a fully distributed methodology that, combines ADMM and consensus optimization protocols to address transmission line limits in VPPs <cit.>. Li et al. <cit.> introduced a decentralized algorithm to enable demand response optimization for electric vehicles within a VPP. Contributing to the robustness of VPPs, another decentralized algorithm based on message queuing has been proposed to enhance system resilience, particularly in cases of coordinator disruptions <cit.>.§.§.§ Challenges of Existing Distributed Coordination Methods Despite their many advantages, most distributed optimization techniques, even those with convergence guarantees, require significant parameter tuning to ensure numerical stability and practical convergence. Real-time energy markets impose operational constraints that require frequent updates, sometimes as frequently as every five minutes throughout the day, as indicated by <cit.>.The frequent update demands that the optimization of DERs dispatch within VPPs is resolved frequently and in a timely manner. Nevertheless, the iterative nature of these optimization techniques can significantly increase computation time, restricting their utility in time-sensitive scenarios. Moreover, the optimization performance may not necessarily improve, even when encountering identical or analogous dispatching problems frequently, leading to computational inefficiency.To address these limitations, machine learning (ML) has been deployed to enhance the efficiency of optimization procedures, as discussed in <cit.>. The utilization of neural networks can expedite the search process and reduce the number of iterations needed to identify optimal solutions. Furthermore, neural approximators can continually enhance their performance as they encounter increasingly complex optimization challenges, as demonstrated in <cit.>.ML-assisted distributed optimizers can be broadly categorized into three distinct models: supervised learning, unsupervised learning, and reinforcement learning. In the realm of supervised learning, a data-driven method to expedite the convergence of ADMM in solving distributed DC optimal power flow is presented in <cit.>, where authors employ penalty-based techniques to achieve local feasibility. Additional applications of supervised learning are demonstrated in <cit.> and <cit.>, where ML algorithms are used to provide warm-start points for ADMM. On the other hand, unsupervised learning is exemplified in <cit.>, where a learning-assisted asynchronous ADMM method is proposed, leveraging k-means for anomaly detection. 
Reinforcement learning has been applied to train neural network controllers for achieving DER voltage control <cit.>, frequency control <cit.>, and optimal transactions <cit.>.Although these studies showcase the potential of ML for adaptive, real-time DER optimization in decentralized VPP models, they do not fully develop ML-infused distributed optimization methods to improve computation speed while ensuring solution feasibility. §.§ ContributionsIn this paper, we propose a ML-based to replace building blocks of the ADMM-based distributed optimization technique with neural approximators, a method termed (Learning to Optimize the Optimization Process for Multi-agentCoordination). We will employ our method to find a multi-agent solution for the power dispatch problem in DER coordination within a VPP. In the muti-agent VPP configuration, each agent may control multiple DERs. The proposed method enables each agent to predict local power profiles by communicating with its neighbors. All agents collaborate to achieve a near-optimal solution for power dispatch while adhering to both local and system-level constraints.The utilization of neural networks expedites the search process and reduces the number of iterations required to identify optimal solutions. Additionally, unlike restoration-based methods, the approach doesn't necessitate post-processing steps to enhance feasibility because local constraints are inherently enforced through a gauge mapping method <cit.>, and coupled constraints are penalized through ADMM iterations.§ PROBLEM FORMULATION §.§ Compact Formulation§.§.§ The compact formulation for original optimization problemThe centralized optimization function is:min_𝐮 f(𝐮,𝐱) s.t. 𝐮∈𝒮(𝐱) where 𝐮=⊕_i𝐮^i represents the collection of optimization variables across all agents. Note,⊕denotes vector concatenation, and 𝐮^i indicates the optimization variable vector of agent i. Similarly, 𝐱=⊕_i𝐱^i encompasses all input parameters across agents, with 𝐱^i indicating the input parameter vector for agent i. The overall objective function is captured by f=∑_i f^i (𝐱^i,𝐮^i) where f^i stands for the objective function of agent i. Lastly, 𝒮 refers to the collection of all agent's constraint sets.§.§.§ The compact formulation at the multi-agent-levelHere, we will introduce the agent-based method to distribute computation responsibilities among all agents. Let the variable vector of each agent, 𝐮^i, consist of both local and global variables, which can be partitioned as 𝐮^i = [𝐮^i_,𝐮^i_]. Here, 𝐮^i_ captures the local variables of agent i, while 𝐮^i_ encapsulates the global variables shared among neighboring agents. To enable distributed computations, each agent i maintains a local copy vector of other agents' variables, 𝐮^i_, from which this vector mimics the global variables owned by neighboring agents.Agent-level computationsSolving (<ref>) in a distributed fashion requires agent i to solve the following problem before communication. min_𝐮^i,𝐮^i_ f^i([𝐮^i_,𝐮^i_],𝐮^i_,𝐱^i) s.t. [𝐮^i; 𝐮^i_ ]∈𝒮_^i(𝐱^i)𝐮^i_=𝐈_^i[⊕_j≠ i𝐮^j_] where 𝒮_^i denotes the agent i's local constraint set. Here, 𝐈_^i[⊕_j≠ i𝐮^j_] denotes the global variables owned by neighboring agents, and 𝐈_^i is an element selector matrix. The distributed optimization process and intra-agent information exchange will ensure agreement among local copies of shared global variables. 
Intra-agent Information Exchange λ^i^[k]=λ^i^[k-1]+ρ(𝐮^i^[k-1]_-𝐈_^i[⊕_j≠ i𝐮_^j^[k-1]])[𝐮^i^[k]; 𝐮^i^[k]_ ]=h^i(λ^i^[k]) The dual update procedure (<ref>) adjusts the Lagrangian multipliers λ^i, which enforces consensus between agent i and its neighbors. Here, λ^i represents the differences between agent i's local copies and the global variables from neighboring agents, and ρ>0 is a penalty parameter.In (<ref>), h^i captures the compact form of an optimization problem that reduces the gap between local copies of global variables while respecting the constraints of individual agents. §.§ VPP Model The considered VPP consists of a number of N_ agents,each denoted by index i, i∈𝒩_. Every agent is responsible for aggregating a diverse set of DERs, which encompasses flexible loads (FLs), energy storage systems (ESSs), heating, ventilation, and air conditioning (HVAC) systems, plug-in electric vehicles (PEVs), and photovoltaic (PV) arrays, as shown in Fig.<ref>. These agents might be connected to networks of different utilities. The primary objective of the VPP is to optimize the aggregate behavior of all agents while accounting for agents' utility functions.In this paper, we propose that the VPP operates within a two-settlement energy market, composed of a day-ahead and a real-time market. Upon the clearing of the day-ahead market, the VPP decides on hourly production schedules. The real-time market, also known as the imbalance market, is designed to settle potential day-ahead commitment violations. The real-time market productions are set in 5-minute increments. The production schedules every 5 minutes are denoted as 𝐏_.The method is designed for the real-time market, where a VPP solves a dispatch optimization across its assets (agents) to honor its commitment over a given time scale, [t_, t_], where t_ and t_ represent the starting and ending times, respectively.Put differently, the VPP needs to fulfill the production schedule 𝐏_ = [ P_^t | t = t_, …, t_]while minimizing the overall cost of agents. Generally, the VPP implements 5-minute binding intervals (Δ t=5/60 h) for the real-time market, and adopts look-ahead horizon (t_-t_), ranging from 5 minutes up to 2 hours <cit.>, for the real-time dispatch optimization. The detailed dispatch optimization problem is presented next.§.§ Centralized Formulation of the VPP Coordination Problem This subsection presents the centralized form of the power dispatch problem solved by a VPP over various assets for every time step t ∈ [t_,t_]. The asset constraints are: §.§.§ Constraints Pertaining to Flexible Loads The power of a flexible load should be within a pre-defined operation range [P_^i,t,P_^i,t], ∀ t ∈ [t_,t_], ∀ i∈𝒩_:P_^i,t≤P_^i,t≤P_^i,t §.§.§ Constraints Pertaining to Energy Storage Systems ∀ i and ∀ t ∈ [t_,t_], the charging P_^i,t (or discharging P_^i,t) power of the energy storage system must not exceed P_^i, as indicated in (<ref>). Also, (<ref>) and (<ref>) define R_^i,t as the state of charge (SoC) and bound its limits. Here η _^i and η _^i denote the charging and discharging efficiency. 
Finally E_^i refers to the capacity.0≤ P_^i,t ≤ P_^i, 0≤ P_^i,t≤ P_^iR_^i,t+1= R_^i,t+(P_^i,tη _^i- P_^i,t/η _^i)Δ t /E_^iR_^i≤ R_^i,t+1≤ R_^i §.§.§ Constraints Pertaining to Heating, Ventilation, and Air Conditioning SystemsThe inverter-basedheating, ventilation, and air conditioning model <cit.> is presented below with consumption power denoted as P_^i,t.T_^i,t+1=ε^i_ T_^i,t+(1-ε^i_ ) (T_^i,t-η_^i /A_^iP_^i,t ) Where T_^i,t is the indoor temperature at time t, T_^i,t is the forecasted outdoor temperature, ε^i_ is the factor of inertia, η_^i is the coefficient of performance, A_^i is thermal conductivity. Equation (<ref>) introduces the concept of adaptive comfort model [T^i_,T^i_]. Equation (<ref>) enforces the control range within the size of air-conditioning P^i_.T^i_≤ T_^i,t+1≤ T^i_0≤ P_^i,t≤ P^i_ §.§.§ Constraints Pertaining to Plug-in Electric Vehicles ∀ i and ∀ t ∈ [t_,t_], the plug-in electric vehicles charging power P_^i,t must adhere to the range [P_^i,P_^i] as described in (<ref>). Further,(<ref>) mandates that agent i's cumulative charging power meet the necessary energy E_^i for daily commute <cit.>. P_^i≤ P_^i,t≤ P_^i ∑_t=t_^t_ P_^i,t≥ E_^i §.§.§ Constraints Pertaining to Photovoltaic ArraysThe photovoltaic power generation, given by (<ref>) and is determined by the solar irradiance-power conversion function. Here, R_^t, represents the solar radiation intensity, A_ denotes the surface area, and η_ is the transformation efficiency. P_^i,t= R_^t A_η_ §.§.§ Constraints of Network SharingThe net power of agent i, P_^i,t, is given below. Note, P_^i,t indicates the inflexible loads.P_^i,t=P_^i,t+ P_^i,t-P_^i,t- P_^i,t-P_^i,t-P_^i,t-P_^i,tLocal distribution utility constraints are enforced by (<ref>), while (<ref>) guarantees that VPP's output honors the production schedule of both energy markets. P_^i≤ P_^i,t≤ P_^i∑_i∈𝒩_ P_^i,t=P_^t §.§.§ Objective Function The objective function for the power dispatch problem, given by (<ref>), and comprises the following terms:Minimizing maintenance and operation costs of energy storage systemsα_^i represents the unit maintenance cost. Balancing the differences between actual and preset consumption profiles for flexible loadsα_^i,t is the inconvenience coefficient. Here, P_^i,t specifies the preferred consumption level <cit.>. Mitigating thermal discomfort costs for HVAC systems α_^i,t is the cost coefficient, T_^i,t indicates the optimal comfort level, and binary variable β_^i,t denotes occupancy state, where 1 means occupied and 0 indicates vacancy. f=∑_t=t_^t_∑_i∈𝒩_ ( α_^i(P_^i,t+P_^i,t )+α_^i( P_^i,t..- P_^i,t )^2+ β_^i,tα_^i(T_^i,t- T_^i,t)^2)§.§.§ CentralizedOptimization Problem Combining the constraints (<ref>)-(<ref>) and the objective function (<ref>), we formulate the power dispatch problem. Note the formulated dispatch problem requires frequent resolution at each time instance t_ in the real-time market. For a given agent i, the optimization variables over the time interval [t_,t_] are denoted by 𝐮^i(t), while its inputs over the same interval are represented as 𝐱^i;𝐮^i= [ P_^i,t,P_^i,t,P_^i,t,R_^i,t+1, P_^i,t, T_^i,t+1,P_^i,t,P_^i,t,..P_^i,t| t = t_, …, t_ ]𝐱^i=[P_^i,t,P_^i,t,R_^i,t_,T_^i,t_,T_^i,t,E_^i,R_^t,P_^i,t,.. P_^t, P_^i,t,β_^i,t, T_^i,t| t = t_, …, t_]Let 𝐮=⊕_i𝐮^i and 𝐱=⊕_i𝐱^i.The DER coordination problem can be formulated as (<ref>) or as follows, min f(𝐮,𝐱) 𝐀_𝐮+𝐁_𝐱+𝐛_=0𝐀_𝐮+𝐁_𝐱+𝐛_≤0where 𝐀_, 𝐁_ and 𝐛_ represent the compact form of parameters in equations (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) we have formed before. 
And 𝐀_, 𝐁_ and 𝐛_ captures parameters in equations (<ref>), (<ref>), (<ref>), (<ref>)-(<ref>), (<ref>).§.§ Agent-based Model for the VPP Coordination Problem Agent-based problem-solving lends itself well to addressing the computational needs of the VPP coordination problem. In this subsection, we focus on finding a distributed solution for (<ref>) (or (<ref>)). While each sub-problem optimizes the operation of individual agents, communication enables individual agents to collectively find the system-level optimal solution.In the context of distributed problem-solving, it is important to point out the unique challenges posed by coupling constraints such as (<ref>). These constraints introduce intricate relationships among several agents where some variables of agent i are tied with those of agent j. These coupled constraints prevent separating (<ref>) into disjointed sub-problems.As discussed in Section IIA, we define the variables present among multiple agents' constraints as global variables, 𝐮^i_,𝐮^i_=[P_^i,t| t = t_, …, t_]^ In contrast, the variables solely managed by non-overlapping constraints are referred to as local variables. That is, 𝐮^i = [𝐮^i_,𝐮^i_]. We refer to agents whose variables are intertwined in a constraint as neighboring agents.The ADMM method finds a decentralized solution for (<ref>) by creating local copies of neighboring agents' global variables and adjusting local copies iteratively to satisfy both local and consensus constraints. The adjustment continues until alignment with original global variables is achieved, at which point the global minimum has been found in a decentralized manner.In the power dispatch problem, we introduce P_^i,j,t, which is owned by agent i, and represents a copy of P_^j,t. Then, coupled constraint (<ref>) become a local constraint (<ref>) and a consensus constraint (<ref>):P_^i,t+∑_ j≠ i P_^i,j,t=P_^t P_^i,j,t=P_^j,t, ∀ j≠ iLet 𝐮^i_=[P_^i,j,t| t = t_, …, t_] denote all local copiesowned by agent i imitating other neighboring agents’ global variables.Then, one could reformulate the problem (<ref>) in accordance to 𝐮^i and 𝐮^i_ as,min∑_i f^i(𝐮^i,𝐮^i_,𝐱^i) 𝐀^i_[𝐮^i; 𝐮^i_ ]+𝐁^i_𝐱^i+𝐛^i_=0 ,∀ i 𝐀^i_[𝐮^i; 𝐮^i_ ]+𝐁^i_𝐱^i+𝐛^i_≤0 ,∀ i𝐮^i_=𝐈_^i[⊕_j≠ i𝐮_^j], ∀ i where, 𝐀^i_, 𝐁^i_, 𝐛^i_, 𝐀^i_, 𝐁^i_, and 𝐛^i_ in (<ref>) and (<ref>) capture the compact form of constraints (<ref>)-(<ref>), (<ref>). And(<ref>) is the compact form of constraints (<ref>). Here 𝐈_^i is the element selector matrix that maps elements from vector ⊕_j≠ i𝐮_^jto vector 𝐮^i_based on a consensus constraint (<ref>). Each row of 𝐈_^i contains a single 1 at a position that corresponds to the desired element from ⊕_j≠ i𝐮_^j and 0s elsewhere. Therefore, 𝐈_^i[⊕_j≠ i𝐮_^j] represents the vector of global variables that are required to be imitated by agent i.Let 𝒮_^i be the set of local constraints associated with agent i, i.e.,(<ref>)-(<ref>). Therefore, the compact form of decentralized formulation at the agent-level as defined in (<ref>).§.§ Updating Rules Within Agents The standard form of ADMM solves problem (<ref>) (or (<ref>)) by dealing with the augmented Lagrangian function L:minL= ∑_i (f^i(𝐮^i,𝐮^i_,𝐱^i)+λ^i(𝐮^i_-𝐈^i_[⊕_j≠ i𝐮_^j]). . +ρ𝐮^i_-𝐈_^i[⊕_j≠ i𝐮^j]_2^2)[𝐮^i; 𝐮^i_ ]∈𝒮_^i(𝐱^i),∀ i where ρ>0 is a positive constant. λ^i denotes the vector of all Lagrangian multipliers for the corresponding consensus equality relationship between agent i's copy and neighboring agent j's global variable. 
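As a toy illustration of the element selector matrix and of the multiplier update used in the iterations described next, consider the following numpy sketch; the sizes and numerical values are illustrative only and are not tied to the case study.

import numpy as np

rho = 0.5                                    # penalty parameter (illustrative value)
others_gl = np.array([48.0, 52.0, 61.0])     # stacked global variables of the other agents
I_ng = np.array([[0, 1, 0],                  # element selector matrix: a single 1 per row
                 [0, 0, 1]])
u_copy = np.array([50.0, 60.0])              # agent i's local copies of the selected entries
lam = np.zeros(2)
lam = lam + rho * (u_copy - I_ng @ others_gl)    # dual update penalizing the consensus mismatch
print(lam)                                       # [-1.  -0.5]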
The search for a solution to (<ref>) is performed through an iterative process (indexed by [k],k=1,...,N_). All N_ agents will execute this process simultaneously and independently before communicating with neighboring agents. At the agent level, these updates manifest themselves asfollows, λ^i^[k] =λ^i^[k-1]+ρ(𝐮^i^[k-1]_-𝐈_^i[⊕_j≠ i𝐮_^j^[k-1]])[𝐮^i^[k]; 𝐮^i^[k]_ ] = minL( λ^i^[k],⊕_j ≠ i( 𝐮^j^[k-1], 𝐮^j^[k-1]_, λ^j^[k]) ), [𝐮^i^[k]; 𝐮^i^[k]_ ]∈𝒮_^i(𝐱^i) The dual update equation, given by (<ref>), modifies the Lagrangian multipliers to estimate the discrepancies between an agent's local copy of variables (designed to emulate the global variables of its neighbors) and the actual global variables held by those neighbors. Subsequently, (<ref>) provides an optimization solution leveraging prior iteration data from other agents. It's essential to note that agent i doesn't require all the updated values from other agents to update equations (<ref>) and (<ref>). Agent i primarily needs:* Neighboring agents' global variables: 𝐈_^i[⊕_j ≠ i𝐮_^j]. In the context of the distributed DER problem, agent i requires values of P_^j,t^[k-1]from their neighboring agent j.* Neighboring agents' local copies mirroring agent i's global variables: 𝐈_^i[⊕_j ≠ i𝐮_^j^[k-1]], where 𝐈_^i functions as a selector matrix. In the distributed DER context, agent i requires P_^j,i,t^[k] from their neighboring agent j. We use 𝐮^i^[k-1]_ to represent the set of variables owned by other agents but are needed by agent i to update (<ref>) and (<ref>).Finally, the intra-agent updates are represented by (<ref>) and (<ref>).The standard form of ADMM guarantees the feasibility of local constraints by (<ref>) and penalizes violations ofconsensus constraints by iteratively updating Lagrangian multipliers as (<ref>). In what follows, we will propose a ML-based method to accelerate ADMM for decentralized DER coordination. The ADMM iterations will guide the consensus protocol, while the gauge map <cit.> is adopted to enforce hard local constraints.§ PROPOSED METHODOLOGY §.§ Overview of the MethodThis section provides a high-level overview of the method to incorporate ML to accelerate the ADMM algorithm. As shown in Fig. <ref>, instead of solving agent-levellocal optimization problems (<ref>) by an iterative solver, we will train N_ agent-level neural approximators ξ^i,i∈𝒩_ to directly map inputs to optimized value of agent's optimization variables in a single feed-forward.The resulting prediction of each agent i, denoted as 𝐮^i^[k], will be trained to approximate the optimal solution of (<ref>).𝐮^i^[k],𝐮^i^[k]_=ξ^i(𝐱^i ,𝐮^i^[k-1]_) Pseudo code of the proposed method is given in Algorithm <ref>. method includes two steps for each iteration. First, each agent receives variables of prior iteration from neighboring agents. Second, each agent uses a neural approximator to predict its optimal values. §.§ Design of Neural Approximators Structures Violations of consensus constraints could be penalized by ADMM iterations. Further, we will design each neural approximator's structure to guarantee that its output satisfies the local constraints, i.e., ξ^i∈𝒮_^i(𝐱^i). We adopt the ℒ𝒪𝒪𝒫-ℒ𝒞 (Learning to Optimize the Optimization Process with Linear Constraints) model proposed in <cit.> to develop each neural approximator ξ^i. The ℒ𝒪𝒪𝒫-ℒ𝒞 model learns to solve optimization problems with hard linear constraints. It applies variable elimination and gauge mapping for equality and inequality completions, respectively. 
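A schematic sketch of the per-iteration flow of the proposed method (communication followed by a single feed-forward prediction in place of the iterative local solve) is given below; the agent attributes and the approximator interface are hypothetical placeholders rather than the actual implementation.

import numpy as np

def run_ml_assisted_iterations(agents, n_iterations):
    """Each iteration: (1) agents exchange the previous iteration's shared variables,
    (2) each agent's neural approximator predicts its new variables in one forward pass."""
    for _ in range(n_iterations):
        # step 1: communication -- a message carries the agent's global variables and its
        # copies of the neighbours' globals, as required by the updates above
        messages = {a.name: a.broadcast() for a in agents}
        # step 2: prediction -- the neural approximator xi replaces the local optimization
        for a in agents:
            received = np.concatenate([messages[nb] for nb in a.neighbours])
            a.set_variables(a.xi(a.inputs, received))
    return agents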
The ℒ𝒪𝒪𝒫-ℒ𝒞 model produces a feasible and near-optimal solution. In what follows, we will present the main components of ℒ𝒪𝒪𝒫-ℒ𝒞 and how it applies to the VPP coordination problem. §.§.§ Variable Elimination Based on the equality constraints given in (<ref>), the variables 𝐮^i and 𝐮^i_ can be categorized into two sets: the dependent variables 𝐮^i_ and the independent variables 𝐮^i_. The dependent variables are inherently determined by the independent variables. For instance in (<ref>), the variable T_^i,t+1 is dependent on P_^i,t; hence, once P_^i,t is derived, T_^i,t+1 can be caculated.The function 𝔽^i is introduced to establish the relationship between 𝐮^i_ and 𝐮^i_, such that 𝐮^i_ = 𝔽^i(𝐮^i_), shown in Fig. <ref>. A comprehensive derivation of 𝔽^i can be found in <cit.>. By integrating 𝔽^i into the definition of 𝒮_^i and substituting 𝐮^i_, the optimization problem of (<ref>) can be restructured as a reduced-dimensional problem with 𝐮^i_ as the primary variable. The corresponding constraint set for this reformulated problem is denoted by 𝒮_^i and presented as, 𝒮_^i= 𝐀^i_[𝐮^i_; 𝔽^i(𝐮^i_) ]+𝐁^i_𝐱^i+𝐛^i_≤0 Therefore, as long asthe prediction of the reformulated problem ensures 𝐮^i_∈𝒮_^i, 𝔽^i will produce the full-size𝐮^i,𝐮^i_ vectors satisfying local constraints 𝒮_^i(𝐱^i) by concatenating 𝐮^i_ and 𝐮^i_,𝐮^i_. §.§.§ Gauge Map After variable elimination,our primary objective is to predict 𝐮^i_ such that it satisfies the constraint set 𝒮_^i. Instead of directly solving this problem, we will utilize a neural network that finds a virtual prediction û^i_ which lies within the ℓ_∞-norm unit ball (denoted as ℬ) a set constrained by upper and lower bounds. The architecture of the neural network is designed to ensure that the resulting û^i_ remains confined within ℬ. Subsequently, we introduce a bijective gauge mapping, represented as 𝕋^i, to transform û^i_ from ℬ to 𝒮_^i. As presented in <cit.>, 𝕋^i is a predefined function with an explicit closed-form representation as below,𝐮^i_=𝕋^i(û^i_)=ψ_ℬ(û^i_)/ψ_𝒮_^i(û^i_)û^i_+𝐮^i_The function ψ_ℬ is the Minkowski gauge of the set ℬ, while 𝐮^i_ represents an interior point of 𝒮_^i. Moreover, the shifted set, 𝒮_^i, is defined as, 𝒮_^i = {𝐮̅^i_|( 𝐮^i_ + 𝐮̅^i_) ∈𝒮_^i }with ψ_𝒮_^i representing the Minkowski gauge on this set. §.§ Training the Neural ApproximatorsWe use the historical trajectories of ADMM (i.e. applied on historical power demands) for training purposes. Note that predicting the converged ADMM values is a time-series prediction challenge. Specifically, outputs from a given iteration are requisites for the subsequent iterations. This relationship implies that 𝐮^i^[k],𝐮^i^[k]_, ∀ i are contingent upon 𝐮^i^[k-1]_, derived from other agents' outputs 𝐮^j^[k-1],𝐮^j^[k-1]_,∀ j≠ i from the prior iteration. To encapsulate this temporal dependency, our training approach adopts a look-ahead format, facilitating the joint training of all neural approximators in a recurrent manner, which ensures that prior outputs from different agents are seamlessly integrated as current inputs (see Fig. <ref>).Suppose there are N_ training data points, indexed and associated with their respective output by the superscript (d). As an initial step, ADMM is employed to generate all values of optimization variables required for training. Concurrently, the optimal solution 𝐮^i*(d) pertaining to (<ref>) is calculated. 
Subsequently, for N_ recurrent steps, the loss function f_ is defined as the cumulative distance d between the prediction 𝐮^i^[k+r](d) and the optimal solution 𝐮^i*(d). This summation spans all agents, every recurrent step, every iteration (k=1,..N_), and all data points, as delineated in (<ref>). f_=∑_d=1^N_∑_k=1^N_∑_r=1^N_∑_i∈𝒩_ d(𝐮^i^[k+r](d),𝐮^i*(d) ) § EXPERIMENTAL RESULTS §.§ Experiment Setup§.§.§ Test SystemsWe examine a VPP consisting of three distinct agents, as illustrated in Fig. <ref>. * Agent 1 manages inflexible loads, flexible loads, and energy storage systems.* Agent 2 is responsible for inflexible loads and the operations of plug-in electric vehicles.* Agent 3 oversees inflexible loads, heating, ventilation, and air conditioning systems, in addition to photovoltaics. We derive the load profile from data recorded in the central area of New York on July 24th, 2023 <cit.>. Both preferred flexible loads and inflexible loads typically range between 10 kW and 25 kW. The production schedule range is set between 45 kW and 115 kW.For plug-in electric vehicles, our reference is the average hourly public L2 charging station utilization on weekdays in March 2022 as presented by Borlaug et al. <cit.>. In <cit.> the profile range for E_^i,τ, τ∈[0,24h] between 10 and 22 kW.With regards to the heating, ventilation, and air conditioning systems, the target indoor temperature T_^i,t is maintained at 77^∘F. Guided by the ASHRAE(American Society of Heating, Refrigerating, and Air-Conditioning Engineers) standards <cit.>, the acceptable summer comfort range is determined as T^i_=75^∘F and T^i_=79^∘F. External temperature readings for New York City's Central Park on July 24th, 2023 were obtained from the National Weather Service <cit.>.Concerning photovoltaic arrays, the Global CMP22 dataset from July 24th, 2023 <cit.> is used to calculate the regional solar radiation intensity R_^t. Supplementary parameter specifics are showcased in Table <ref>. §.§.§ Training DataA total of 20 ADMM iterations are considered, i.e., N_=20. This results in a dataset of 24 ×12 ×20 data points. For model validation, data from odd time steps is designated for training, whereas even time steps are reserved for testing. The DER coordination problem includes 192 optimization variables alongside 111 input variables.§.§.§ ADMM ConfigurationThe ADMM initialization values are set to zero. In our ADMM implementation, the parameter ρ is set to 0.0005. Optimization computations are carried out using the widely-accepted commercial solver, Gurobi <cit.>. §.§.§ Neural Network ConfigurationOur neural network models consist of a single hidden layer, incorporating 500 hidden units. The Rectified Linear Unit (ReLU) activation function is employed for introducing non-linearity. To ensure that û^i_ resides within ℬ (the ℓ_∞ unit ball), the output layer utilizes the Hyperbolic Tangent (TanH) activation. Furthermore, 3 recurrent steps are considered, represented by N_=3. §.§ Runtime Results Fig. <ref> illustrates the cumulative computation time across all agents and test data points over iterations. The performance comparison is conducted among the decentralized setup employing ADMM solvers, our proposed method, and the centralized approach using traditional centralized solvers. From the case study, it is observed that the computational time required by the classical ADMM solver exceeds the centralized solvers solution time after approximately five iterations. 
Remarkably, our proposed method significantly outperforms the classical ADMM, achieving a 500x speed-up. It even surpasses the efficiency of the centralized solver in terms of computation speed. Table <ref> provides the average computational time for a single iteration on a single data point. The results suggest that the proposed method would require around 3300 iterations to match the computational time of centralized solvers. However, based on the convergence analysis that will be provided later, the method demonstrates convergence in a mere 10 iterations. §.§ Optimality and Feasibility Results Fig. <ref> presents the optimality deviation rate for both the traditional ADMM algorithm and the proposed method. The deviation rate metric quantifies the degree to which the operational profiles of the DERs deviate from the optimal solution (derived from solving the centralized problem). It is evident that the proposed method achieves faster convergence. Moreover, it reduces the deviation rate faster than the standard ADMM approach. Similarly, Fig. <ref> depicts the deviation rate of the VPP schedule for both the ADMM approach and the proposed method. This rate sheds light on the difference between the actual VPP production schedule and its planned output. In the context of our optimization problem, the deviation rate is equivalent to the feasibility gap rate of the coupled constraints, as shown in (<ref>). Notably, the proposed method excels in convergence speed and stability. The VPP schedule deviation rate declines more rapidly and remains stable under the proposed method, whereas the traditional ADMM method exhibits more oscillations and converges at a slower pace. Table <ref> summarizes post-convergence metrics for both algorithms across all agents, iterations, and test data points. While the minimum optimality deviation rate achieved by our method is slightly higher than that of the classical ADMM, our approach shows a much lower variance and a significantly reduced maximum deviation. These results highlight the method's efficacy, especially when tasked with recurrently solving similar optimization problems. The observed improvements in variance and maximum deviation underscore the versatility and robustness of the approach in varied problem scenarios. To sum up, the proposed solution speeds up the solution time of each ADMM iteration by up to 500X. It also needs fewer iterations to converge; hence, the overall run time is significantly shorter. § CONCLUSION In this work, we introduced a novel ML-based method to significantly enhance the performance of distributed optimization techniques and discussed its performance in addressing the challenges of the DER coordination problem (solved by a VPP). Our multi-agent framework for VPP decision-making allows each agent to manage multiple DERs. Key to our proposed approach is the capability of each agent to predict its local power profiles and strategically communicate with neighboring agents. The collective problem-solving efforts of these agents result in a near-optimal solution for power dispatching, ensuring compliance with both local and system-level constraints. A key contribution of our work is developing and incorporating neural network approximators in the process of distributed decision-making. This novelty significantly accelerates the solution search and reduces the iterations required for convergence.
Uniquely, in contrast to restoration-centric methodologies, our approach bypasses the need for auxiliary post-processing steps to achieve feasibility by using a two-pronged solution approach, where local constraints are inherently satisfied through the gauge mapping technique and coupled constraints are penalized over ADMM iterations. The method reduces the solution time per iteration by up to 500%. Coupled with requiring fewer iterations for convergence, the net result is a drastic reduction of overall convergence time while respecting the problem constraints and maintaining the quality of the resulting solution. § ACKNOWLEDGEMENT Thanks to Dr. Erik Blasch (Fellow member) for concept discussion and co-authoring the paper. | http://arxiv.org/abs/2310.17882v1 | {
"authors": [
"Meiyi Li",
"Javad Mohammadi"
],
"categories": [
"cs.LG",
"cs.SY",
"eess.SY"
],
"primary_category": "cs.LG",
"published": "20231027041113",
"title": "Machine Learning Infused Distributed Optimization for Coordinating Virtual Power Plant Assets"
} |
Institute of Mathematics, Faculty of Mechanical Engineering,Brno University of Technology, Technická 2896/2, 616 69 Brno, Czech [email protected], {hrdina,a.navrat}@fme.vutbr.cz In this paper, a novel quantization scheme for cooperative games is proposed. The considered circuit is inspired by the Eisert-Wilkens-Lewenstein protocol modified to represent cooperation between players and extended to 3–qubit states. The framework of Clifford algebra is used to perform necessary computations. In particular, we use a direct analogy between Dirac formalism and Quantum Register Algebra to represent circuits. This analogy enables us to perform automated proofs of the circuit equivalence in a simple fashion. To distribute players' payoffs after the measurement, the expected value of the Shapley value with respect to quantum probabilities is employed. We study how entanglement, representing the level of pre-agreement between players, affects the final distribution of utility. The paper also demonstrates how all necessary calculations can be automatized using the Quantum Register Algebra and GAALOP software.Quantization of two- and three-player cooperative games based on QRA Ivan Eryganov, Jaroslav Hrdina and Aleš Návrat Accepted: 8 August 2023 ====================================================================§ INTRODUCTION Game theory is a branch of applied mathematics that studies optimal decisions of entities between which conflict of interests has occurred <cit.>. This is done via formalization of the conflict into a game, where these entities take the role of players with clearly defined strategies (set of possible decisions) and payoff functions, describing utilities that can be obtained by the players <cit.>. Cooperative game theory enables to study situations, where players are able to form coalitions in order to cooperate and improve their utilities <cit.>. In classical (superadditive) cooperative games, the main interest dwells in finding a fair allocation of the utility, produced by the grand coalition, with respect to players' contributions <cit.>. Recently, game theory has been enriched with a new class of quantum games <cit.>. Originally, quantum game theory studied the quantization of non-cooperative matrix-form games, such as the well-known Prisoner's Dilemma <cit.>. One of the most pioneering works on this topic <cit.> has studied new non-trivial equilibrium, which occurs due to the quantization of strategies and full entanglement between players' decision states. Though it has been an object of criticism <cit.> (the considered strategy sets have no physical interpretation), this work has established the fundamental basis of contemporary research in quantum games <cit.>. General quantum games can be characterized as games that include superposition of players' strategies and/or entanglement of players' decisions <cit.>.In Eisert-Wilkens-Lewenstein (EWL) quantization protocol <cit.>, the decisions of the player to cooperate or deflect cooperation have been assigned to basis states of the qubit (quantum analogy of the classical bit <cit.>). In this work, we extend this idea to describe the cooperative game of n-players by assigning basis states of the n–qubit to coalitions, where 1 on i-th position means, that i-th player participates in this coalition, while 0 on the same position identifies, that player is absent in the coalition. 
Whereas the proposed idea has the potential to be generalized into the domain of all cooperative games,in this work, we study quantization protocols only for two- and three-player games. The main interest is to assess the potential of our quantization approach to cooperative games and to find out if it will allow for non-trivial results due to quantum phenomena. All the necessary calculations will be performed using geometric algebra <cit.>. The recent papers <cit.> demonstrate an increasing number of applications of geometric algebras to quantum computing. In particular, we represent all the underlying quantum circuits using the language of Quantum Register Algebra (QRA) <cit.>, a real form of complex Clifford algebra, <cit.>. It should be emphasized, that complex Clifford algebra can be seen as a viable and intuitive symbolic alternative to the well-known fermionic quantum computation model <cit.>. This enables us to study the properties of the proposed quantization schemes and to perform the automated proof of their equivalence. To validate the proposed approach, we "solve" instances of two- and three-player weighted majority cooperative games. The QRA representations of the corresponding 2– and 3–qubit decision states and quantum gates on them will be programmed in GAALOP <cit.>, which makes it possible to directly obtain the final state of the game, from which the probabilities will be extracted.The rest of the paper is structured as follows. In Section 2, we briefly discuss cooperative game theory and its solution concept of the Shapley value <cit.>. Then, Section 3 is devoted to the introduction of the original quantization protocol for 2- and 3-player cooperative games. After that, in Section 4, we demonstrate how QRA can be applied to conveniently represent quantum computing and study the properties of quantum circuits. Section 5 presents a detailed description of the QRA representations of the considered quantum games. In the end, we demonstrate how quantum cooperative games can be solved using GAALOP and discuss the obtained results.§ COOPERATIVE GAMES AND SHAPLEY VALUE In this section, we will define cooperative games, their subclass of weighted majority games, and the underlying solution concept of the Shapley value according to <cit.>. Throuhout the paper, we focus solely on cooperative TU-games, where the value function assigns to coalitions a real number with a monetary equivalent, that can be freely distributed in any feasible way. More precisely, a general cooperative TU-game is conventionally defined by a pair (N,v), where N is a set of players and v:2^N→ℝ is the value function, which assigns to each non-empty subset S⊆ N, S≠∅, called coalition, its utility represented by a real number v(S) <cit.>.Additionally, it is assumed that the so-called empty coalition ∅ produces no value, i.e. v(∅)=0. One of the distinguished families of cooperative games are simple games <cit.>. A simple game (N,𝒲) is defined by a set 𝒲, which is a "winning" set of subsets of players' set N, i.e. 𝒲⊂ 2^N. The set 𝒲 fulfills three main properties: * N∈𝒲, * ∅∉𝒲, * (S⊆ T⊆ NandS∈𝒲)⇒ T∈𝒲.Clearly, this game can be represented as a standard cooperative game via simple identification (N,𝒲)∼(N,v), where v(S)=1, if S∈𝒲0,otherwise.For the purpose of this paper, we work with the class of the weighted majority games <cit.>, where there exist quota q and weights w_i associated with each player i∈ N. 
Then, we can identify weighted majority game (N,q,(w_i)_i∈ N) with the simple game (N,𝒲) using the following relation S∈𝒲⇔∑_i∈ S w_i≥ q.To "solve" a game often means to compute its solution. In terms of cooperative game theory, a solution is defined as a function σ that assigns each game (N,v) a subset (or just one) of allocations σ(N,v) from the set X^∗(N,v) X^∗(N,v)={(x_i)_i∈ N | ∑_i∈ Nx_i≤ v(N)},of all feasible allocations of the v(N), where player's i payoff is denoted as x_i <cit.>. One of the main one-point solution concepts known in game theory is the Shapley value. The Shapley value for player i∈ N can be defined as ϕ_i (N,v)=∑_S⊆ N∖{i}|S|!(|N|-|S|-1)!/|N|! (v(S∪{i})-v(S)).The shapley value assigns to each player his or her expected payoff in the following situation. Assume players arrive randomly and each order has an equal probability of 1/|N|!. Then, when a player arrives, he or she obtains the marginal contribution to the coalition of the already arrived players <cit.>.Before the Shapley value concept for simple games is introduced, the property of a player to be pivotal to the coalition has to be explained. In a simple game (N,𝒲), a player i∈ N is pivotal to some coalition S, iff S∉𝒲, but S∪{i}∈𝒲. The analogy of the Shapley value for simple games is called the Shapley-Shubik power index <cit.>. It assigns to the player's percentual share of how many times the player is pivotal to a coalition of already arrived players in each sequential coalition (possible ordering of players). This power index reflects the player's power or pretension. Now, two particular instances of the weighted majority games will be presented and solved. Consider weighted majority game (N,q,(w_i)_i∈ N), where N={1,2}, w_1=1,w_2=1, and q=1. Table 1 demonstrates players' property of being pivotal (indicated by 1) for each possible order of arrival. Thus, if we denote the Shapley-Shubik power index of player i as ϕ_i, we have the following results: ϕ_1(N,q,(w_i)_i∈ N)=50%, ϕ_2(N,q,(w_i)_i∈ N)=50%. In this game, players obtain the same distribution from the creation of the joint utility, though each of them exceeds the quota on one's own. Consider weighted majority game (N,q,(w_i)_i∈ N), where N={1,2,3}, w_1=1,w_2=2,w_3=1, and q=2. Table 2 demonstrates players' property of being pivotal for each possible order of arrival. Thus, we have the following results:ϕ_1(N,q,(w_i)_i∈ N)=16.66%,ϕ_2(N,q,(w_i)_i∈ N)=66.66%,ϕ_3(N,q,(w_i)_i∈ N)=16.66%.Clearly, the second player dominates since he is pivotal 4 times out of 6 possible.However, the results of both games are true only for the ideal deterministic case, when a grand coalition N will always be formed and all players want to cooperate and are able to come to the agreement. On the other side, what if we allow players to form some bonds and pre-agreements or have doubts about cooperation?Moreover, what if this information will affect the probabilities of coalitions' occurrence and we will distribute players' expected utilities according to the restricted Shapley values? Will it lead to the redistribution of power indices and change the situation? To answer all these questions, we propose to quantize the considered weighted majority games. This will enable us to fairly distribute payoffs on the basis of additional information about players' relationships and intentions. 
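For reference, the power indices reported in Example 1 and Example 2 can be reproduced by direct enumeration of arrival orders, as in the following Python sketch (players are indexed from 0 here):

from itertools import permutations
from math import factorial

def shapley_shubik(weights, quota):
    """Shapley-Shubik power index of each player in the weighted majority game (N, q, w)."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            if total < quota and total + weights[player] >= quota:
                pivots[player] += 1        # player is pivotal for the already arrived coalition
            total += weights[player]
    return [p / factorial(n) for p in pivots]

print(shapley_shubik([1, 1], quota=1))      # [0.5, 0.5]             (Example 1)
print(shapley_shubik([1, 2, 1], quota=2))   # ~[0.167, 0.667, 0.167] (Example 2)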
§ COOPERATIVE GAMES QUANTIZATIONThe main principle of the proposed quantization of cooperative games is the idea that a number of basis states representing n–qubit exactly corresponds to a cardinality of power set 2^N of players' set N={1,...,n}. Thus, we can easily identify each basis state with the particular coalition S⊆ N in the following way:S∼|a_1...a_i...a_n⟩⇔ a_i= 1, i∈ S0, i∉STherefore, it is possible to associate the probability of occurrence of each basis state with the probability of occurrence of a particular coalition. Then, we propose to distribute players' payoffs (power indices) as an expected value of their Shapley values computed for thevalue functions restricted on the coalitions S⊆ N with respect to new quantum probabilities. Our main objective is to study how information about players' initial agreement and their satisfaction with this pre-agreement can redistribute the resulting payoffs through this quantum value. In the following subsection, we will introduce new parameters reflecting the additional information about the game setting and demonstrate how they affect the probabilities of the basis states in two-player games. §.§ Quantization of two-player gameAt first, we focus on two-player cooperative games to better demonstrate the principles of the quantum cooperative games. Moreover, this simple instance will enable us to show that the proposed approach preserves and extends classical cooperative games. The proposed game quantization scheme is depicted in Figure <ref>. The scheme almost fully corresponds to the quantization of EWL protocol <cit.>, but does not end with the disentanglement operator before the measuring device. The consideration that bonds, created by the entanglement, have to be preserved in order to take them into consideration during the payoff distribution explains the absence of the disentanglement operator. Moreover, for the certain choices of operators Û_1(p_1) and Û_2(p_2), entanglement may not play a role in the |ψ_f⟩, if the disentanglement operator Ĵ^† is applied. The presence of each player from N={1,2} in the game is initially represented by the basis state |0⟩. It describes the fact that the player does not cooperate. Thus, the initial 2–qubit state |00⟩ corresponds to the empty coalition ∅, whereas for the remaining basis states the identification |01⟩∼{2}, |10⟩∼{1}, |11⟩∼{1,2},holds. Then, the entanglement <cit.> operator Ĵ is applied on the 2–qubit state |00⟩. In particular, we assume an entanglement operatorĴ(γ)= [ cosγ/200isinγ/2;0 cosγ/2 -isinγ/20;0 -isinγ/2 cosγ/20;isinγ/200 cosγ/2;],presented in the EWL protocol <cit.>. The operatorĴ(γ) depends on the entanglement measure γ∈ [0,π/2], where Ĵ(0) is an identity operator and Ĵ(π/2) creates one of maximally entangled two-qubit states (|00⟩+|11⟩)/√(2), called the Bell's state. In this work, the entanglement parameter γ is interpreted as a measure of the initial agreement between players. When players are maximally entangled, the Bell's state is created andprobability of occurrence of coalition N is 1/2 and the same is valid for∅. Thus, the full pre-agreement between players affects the game's possible outcome such that both of them are either together or do not participate in the game at all.After that, the information about players' desire to change the initial state is incorporated into the game using tensor products of unitary operators Û_i. 
We interpret the operator Û_i(p_i) as player i∈ N's satisfaction with the outcome after the initial agreement; it can be defined as follows Û_i(p_i)=[cos(p_i)sin(p_i); -sin(p_i)cos(p_i) ], where p_i∈[0,π/2], ∀ i∈ N. This strategy operator corresponds to the one considered in <cit.>. A greater p_i indicates a greater will to intervene in the game process and change the initial state. The final state of the game is then |ψ_f⟩=(Û_1(p_1)⊗Û_2(p_2))Ĵ(γ)|00⟩. Thus, a two-player quantum cooperative game associated with the classical cooperative game (N,v), |N|=2, can be defined as (N,v,γ,p_1,p_2), where γ∈ [0,π/2] and p_i∈[0,π/2], ∀ i∈ N. Now, we can proceed to the definition of the quantum Shapley value. §.§.§ The quantum Shapley value of a two-player quantum cooperative game We define the quantum Shapley value (ϕ̃_i)_i∈ N of the two-player quantum cooperative game (N,v,γ,p_1,p_2) as ϕ̃_i(N,v,γ,p_1,p_2)=∑_S⊆ N: i∈ Sp(S)ϕ_i(S,v_S), where v_S is the restriction of the initial value function to the coalition S and p(S) is the probability of occurrence of coalition S defined as p(S)=|⟨ a_1a_2|ψ_f⟩|^2, where S∼|a_1a_2⟩ and ψ_f is calculated according to (<ref>). Table <ref>, Table <ref>, and Table <ref> demonstrate how different boundary values of the newly introduced parameters affect the final distribution provided by the quantum Shapley value for player i=1. All numbers were rounded to three decimal places. As we can see, the proposed approach recreates the original solution ϕ(N,v) for the choice γ=0, p_1=p_2=π/2. Thus, it extends the classical Shapley value but also brings other, non-trivial outcomes. Depending on the definition of the underlying value function v, a player might achieve a greater or lesser payoff in the quantum setting compared to the canonical one. For example, in case v(N)>v({1})+v({2}), every deviation from the original classical scenario will penalize the player. Thus, the pre-agreement bonds only worsen the position of the player, making the player more vulnerable to a possible betrayal. Alternatively, if v(N)≤ v({1})+v({2}) holds, the player is able to gain an advantage and raise greater claims. In Example <ref>, for γ=π/4, p_1=π/2, p_2=π/4, player i=1 achieves ϕ̃_1(N,v,γ,p_1,p_2)=64% instead of the original ϕ_1(N,v)=50%. Now, three-player games will be quantized in the next subsection. §.§ Quantization of three-player game Three-player cooperative games can be quantized analogously to the two-player games. The proposed scheme is depicted in Figure 2. The main novelty, compared to the previous scheme from Figure 1, is that there are two gates describing the entanglement: Ĵ_2 and Ĵ_3. The first entanglement gate creates entanglement between pairs of qubits and can be represented as Ĵ_2(γ_12,γ_13,γ_23)=(SWAP⊗Id)(Id⊗Ĵ(γ_13) )(SWAP⊗Id) ( Id⊗Ĵ(γ_23)) (Ĵ(γ_12)⊗Id), where the parameter γ_ij∈ [0,π/2] describes the entanglement between the i-th and j-th player, Id=[ 1 0; 0 1 ] is the identity operator, and SWAP=[ 1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1 ] interchanges the input states. This entanglement gate can be interpreted as the one that takes into consideration the pre-agreement between pairs of players but does not consider a potential bond that can be created between all three of them at once. For exactly this reason, we introduce a second entanglement gate. The gate Ĵ_3(γ_123) <cit.> is assumed to create the so-called GHZ state <cit.> when full entanglement between the players is considered. To avoid the parametrization of this gate, we consider only discrete possibilities for the entanglement measure: γ_123∈{0,1}.
In case of zero entanglement, Ĵ_̂3̂(γ_123)should act as an identity.Then, we define Ĵ_̂3̂(γ_123) as follows:Ĵ_̂3̂(γ_123)=Id⊗Id⊗Id,for γ_123=0,(Id⊗CNOT) (CNOT⊗Id) (H⊗Id⊗Id),for γ_123=1, whereH=1/√(2)[11;1 -1 ]is a Hadamard gate and CNOT=[ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ]is a controlled NOT gate.Thus, Ĵ_̂3̂(1) produces the GHZ state(Id⊗CNOT) (CNOT⊗Id) (H⊗Id⊗Id)|000⟩=|000⟩+|111⟩/√(2). Then, the final state of the three-player game is given as follows: |ψ_f⟩=(Û_1(p_1)⊗Û_2(p_2) ⊗Û_3(p_3))Ĵ_̂3̂(γ_123)Ĵ_̂2̂(γ_12,γ_13,γ_23)|000⟩. Thus, analogically to a two-player game, a three-player quantum cooperative game associated with the classical cooperative game (N,v), |N|=3, can be defined as (N,v,γ_123, γ_12, γ_13,γ_23 ,p_1,p_2, p_3), where γ_123∈{0,1}, γ_i,j∈ [0,π/2] and p_i∈[0,π/2], ∀ i, j∈ N. Then, the corresponding quantum Shapley value isϕ̃_i(N,v,γ_123, γ_12, γ_13,γ_23 ,p_1,p_2, p_3)=∑_S⊆ N: i∈ Sp(S)ϕ_i(S,v_S), with p(S)=|⟨a_1a_2a_3|ψ_f||⟩^2, S∼|a_1a_2a_3⟩ and ψ_f from (<ref>). The Shapley value, is symmetric by its axiomatic definition. Therfore, in order to demonstrate that the proposed approach is reasonable, it is necessary to prove that the resulting entangled state Ĵ_̂2̂(γ_12,γ_13,γ_23)|000⟩ does not depend on the ordering of the entanglement gates within Ĵ_̂2̂(γ_12,γ_13,γ_23). However, in the three-player game direct calculations become rather extensive and complex. Therefore, in the next section, we will demonstrate how the formal language of QRA will help us to perform quantum computing and study the properties of the considered entanglement gate.§ AUTOMATED PROOFS OF CIRCUIT EQUIVALENCE BASED ON QUANTUM REGISTER ALGEBRA In this section, it is demonstrated that the ordering of the entanglements within Ĵ_̂2̂ (γ_12,γ_13,γ_23) is insignificant. We also indicate how the QRA apparatus can be used to perform the automated proofs of this and other similar properties.§.§ Quantum Register AlgebraConsider ageometric algebra𝔾_2n with the set of basis elements{ e_1, … , e_2n}. This algebra can be seen as a subalgebra of geometric algebra 𝔾_2n+2 with the set of basis elements { e_1, … , e_2n, r_1,r_2 }. Then, we define QRA(n) <cit.> as a geometric algebra 𝔾_2n with the coefficients from ℂ̃ = { a+b ι | a,b ∈ℝ, ι = r_1 r_2 }, i.e.QRA(n) = { a_1 g_1 + ⋯ + a_2n g_2n| a_i ∈ℂ̃, g_i ∈𝔾_2n} .An important part of this construction is the definition of QRA conjugation as a Hermitean-linear anti-automorphism that extends identity on vectors <cit.>, i.e.(a_1 g_1 + ⋯ + a_2n g_2n)^† =a̅_1 g_1^† + ⋯ + a̅_2n g_2n^†,where a̅_i= a+b ι = a-b ι∈ℂ̃,(ab)^†=b^†a^† and e_i^†=e_i. To use QRA to model quantum computing, we choose a different basis. This basis is called Witt basis and is formed by elementsf_i= 1/2 (e_i + ι e_i+n), f_i^† =1/2 (e_i - ι e_i+n), i=1,…,n, where the rules for computation with the Witt basis are given as follows:(f_i)^2 =(f_i^†)^2=0 , f_i f_j = -f_j f_i , f_i^† f_j^† = -f_j^† f_i^†, f_i f_i^† f_i =f_i,f_i^† f_i f_i^†=f_i^†,f_i^† f_j = - f_j f_i^†. There is the following straightforward identificationof bra and ket vectors of Dirac formalism with elements of QRA:⟨ a_1 … a_n| ⟷I(f_n)^a_n… (f_1)^a_1,where a_i ∈{ 0,1},| a_1 … a_n ⟩⟷(f_1^†)^a_1… (f_n^†)^a_n I,where a_i ∈{ 0,1},whereI=f_1f_1^†⋯ f_nf_n^†. To describe a three-player game, the space of 3–qubit states should be considered, i.e. 
we will work with the identification | 000 ⟩ ⟷(f_1^†)^0 (f_2^†)^0 (f_3^†)^0 I= I, | 001 ⟩ ⟷ (f_1^†)^0 (f_2^†)^0 (f_3^†)^1 I = f_3^†I, | 010 ⟩ ⟷ (f_1^†)^0 (f_2^†)^1 (f_3^†)^0 I=f_2^†I,| 011 ⟩ ⟷ (f_1^†)^0 (f_2^†)^1 (f_3^†)^1I = f_2^† f_3^† I,| 100 ⟩ ⟷ (f_1^†)^1 (f_2^†)^0 (f_3^†)^0 I= f_1^† I,| 101 ⟩ ⟷(f_1^†)^1 (f_2^†)^0 (f_3^†)^1I =f_1^† f_3^† I, | 110 ⟩ ⟷(f_1^†)^1 (f_2^†)^1 (f_3^†)^0 I= f_1^† f_2^† I, | 111 ⟩ ⟷(f_1^†)^1 (f_2^†)^1(f_3^†)^1 I = f_1^† f_2^† f_3^† I.The space of 3–qubit bra vectors can be also described by QRA conjugation as ⟨ a_1a_2a_3 | =| a_1a_2a_3 ⟩^†, for example⟨ 111 |= | 111 ⟩^†⟷ (f_1^† f_2^† f_3^†)^† I = f_3 f_2 f_1 I.The analogical identification on two-qubit states has been already presented in <cit.>. We conclude this subsection by answering the question about how a circuit can be composed of individual blocks.According to <cit.>, a serial circuit, formed by sequential application of the gates f_A and f_B on the same qubit, can be represented in QRA as f_Bf_A. A parallel circuit <cit.>, consisting of the gates f_A and f_B, acting on different qubits, can be represented as f_Af_B up to a sign of individual monomials. When working with the two gates in a parallel circuit (in fact, their tensor product), it is sufficient to perform the following procedure, that has been originally presented in <cit.>.* On the right side of multiplication, we assign artificial coefficient b to the monomials with the odd number of terms. * On the left side of multiplication,a is assigned to monomials with the odd number of occurrences of elements of type f_i^† f_i or f_i. * Then, after the multiplication, we perform simple reassignment: ab→-1, a,b→1.Since multiplication in QRA is an associative operation, it is sufficient to apply the above-defined rule subsequently on pairs of quantum gates. This technical step is necessary due to the nature of the problem. On the other hand, it allows us for rather straightforward implementation. Examples of the serial and parallel circuits constructed via QRA can be found in <cit.>. §.§ Circuit identitiesAt first, to highlight the convenience of QRA notation, we use this algebra to find general representatives of SWAP(s,t) gates, describing the interchange of qubits s and t. At the end of this subsection, we apply QRA and GAALOP to perform the automated proof of the irrelevance of the entanglement gates ordering within Ĵ_̂2̂(γ_12,γ_13,γ_23). Let |ψ⟩ be an n–qubit and 1≤ s < n, s ∈ℤ. Then, the element SWAP(s,s+1)=f_sf_s^† f_s+1f_s+1^† - f_sf_s+1^†+ f_s^†f_s+1 + f_s^† f_s f_s+1^†f_s+1 acts on |ψ⟩ as a SWAP between qubits s and s+1. Let us note that the elementsf_sf_s^† f_s+1f_s+1^† and f_s^† f_s f_s+1^†f_s+1 act as identities, if they act nontrivially (result is not zero). Theelements f_sf_s+1^† and f_s^†f_s+1 interchange two adjacent elements. We will use this property frequently throughout this section. The following direct computations are based on the fact that all elements of (<ref>) have an even number of monomials.SWAP(s,s+1) (f_1^†)^a_1⋯ (f_n^†)^a_nI =(f_1^†)^a_1⋯ (f_s-1^†)^a_s-1SWAP(s,s+1) (f_s^†)^a_s⋯ (f_n^†)^a_n I =(f_1^†)^a_1⋯ (f_s-1^†)^a_s-1 [ SWAP(s,s+1) (f_s^†)^a_s(f_s+1^†)^a_s+1 ] (f_s+2^†)^a_s+2⋯ (f_n^†)^a_nIandSWAP(s,s+1) acts on (f_s^†)^a_s(f_s+1^†)^a_s+1as SWAP which completes the proof. Let |ψ⟩ be an n–qubit. 
Then, the element SWAP(s,t) =(f_sf_s^† f_tf_t^†- f_s^†f_t- f_sf_t^†- f_s^†f_sf_t^†f_t ) (∑_∑ (a_i)is odd (f_s+1^†f_s+1)^a_s+1 (f_s+1 f_s+1^†)^b_s+1⋯ (f_t-1^†f_t-1)^a_t-1 (f_t-1 f_t-1^†)^b_t-1) +(f_sf_s^† f_tf_t^† + f_s^†f_t- f_s f_t^†+ f_s^†f_sf_t^†f_t) (∑_∑ (a_i)is even(f_s+1^†f_s+1)^a_s+1 (f_s+1 f_s+1^†)^b_s+1⋯(f_t-1^†f_t-1)^a_t-1 (f_t-1 f_t-1^†)^b_t-1)acts as a SWAP gate between s^th and t^th qubit (s<t).Because each part of the expression (<ref>) has an even number of elements, it is easy to show that SWAP(s,t) ( f_1^†)^a_1⋯(f_n^†)^a_n I = ( f_1^†)^a_1⋯(f_s-1^†)^a_s-1SWAP(s,t) ( f_s^†)^a_s⋯(f_n^†)^a_nIand, because of associativity, we have an expressionSWAP(s,t) ( f_s^†)^a_s⋯(f_n^†)^a_n I = [SWAP(s,t) ( f_s^†)^a_s⋯(f_t^†)^a_t]( f_t+1^†)^a_t+1⋯(f_n^†)^a_n I.Thus, without loss of generality, we can only discuss the gate SWAP(1,n) = (f_1f_1^† f_nf_n^† - f_1^†f_n-f_1f_n^† - f_1^†f_1f_n^†f_n) (∑_∑ (a_i)is odd (f_2^†f_2)^a_2 (f_2 f_2^†)^b_2⋯(f_n-1^†f_n-1)^a_n-1 (f_n-1 f_n-1^†)^b_n-1) +(f_1f_1^† f_nf_n^† + f_1^†f_n-f_1f_n^† + f_1^†f_1f_n^†f_n) (∑_∑ (a_i)is even(f_2^†f_2)^a_2 (f_2 f_2^†)^b_2⋯(f_n-1^†f_n-1)^a_n-1 (f_n-1 f_n-1^†)^b_n-1) Let |ψ⟩ be an n–qubit, if a_i=b_i =1thenf_i^† f_i f_i f_i^† =0. Therefore, in the expressions (<ref>) and(<ref>), there are only such elements that a_i+b_i ∈{0,1}. The gate(f_2^†f_2)^a_2 (f_2 f_2^†)^b_2⋯(f_n-1^†f_n-1)^a_n-1 (f_n-1 f_n-1^†)^b_n-1acts as the projection to the state ∑_a_1,a_n ∈{0,1}ψ_a_1… a_n (f_1^†)^a_1 ( f_2^†)^a_2⋯(f_n-1^†)^a_n-1(f_n^†)^a_nI = ψ_0a_2… a_n-10( f_2^†)^a_2⋯(f_n-1^†)^a_n-1I+ψ_0a_2… a_n-11 ( f_2^†)^a_2⋯(f_n-1^†)^a_n-1f_n^†I+ψ_1a_2… a_n-10 f_1^† ( f_2^†)^a_2⋯(f_n-1^†)^a_n-1I+ψ_1a_2… a_n-11 f_1^† ( f_2^†)^a_2⋯(f_n-1^†)^a_n-1f_n^†I=ψ_0a_2… a_n-10( f_2^†)^a_2⋯(f_n-1^†)^a_n-1I+(-1)^∑ a_iψ_0a_2… a_n-11 f_n^† ( f_2^†)^a_2⋯(f_n-1^†)^a_n-1I+ψ_1a_2… a_n-10 f_1^† ( f_2^†)^a_2⋯(f_n-1^†)^a_n-1I+ (-1)^∑ a_iψ_1a_2… a_n-11 f_1^† f_n^†( f_2^†)^a_2⋯(f_n-1^†)^a_n-1IThus, if ∑ a_i is odd, the middle elements of the SWAP gate must have a different sign with respect to the corresponding projections. Then, because of Lemma <ref>, the element acts as a SWAP between the first and n^th qubit which completes the proof.Finally, with the help of GAALOP <cit.>, we demonstrate that the order of the entanglement operators within Ĵ_̂2̂(γ_12,γ_13,γ_23) does not affect the outcome of the game. The following code[caption=Code for the automated proof of the entanglement symmetry.] i = er1 * er2 ; f1 = 0.5*( e1 + i * e4 ); f1T = 0.5*( e1 - i * e4 ); f2 = 0.5*( e2 + i * e5 ); f2T = 0.5*( e2 - i * e5 ); f3 = 0.5*( e3 + i * e6 ); f3T = 0.5*( e3 - i * e6 ); I = f1 * f1T * f2 * f2T* f3 * f3T ; psi=I; J12 = cos(gamma12/2)*(f1*f1T*f2*f2T+f1*f1T*f2T*f2+f1T*f1*f2*f2T +f1T*f1*f2T*f2) +i*sin(gamma12/2)*(-f1*f2+f1*f2T-f1T*f2+f1T*f2T); Id3 = f3*f3T + f3T*f3; J23 = cos(gamma23/2)(f2*f2T*f3*f3T + f2*f2T*f3T*f3+f2T*f2*f3*f3T +f2T*f2*f3T*f3) +i*sin(gamma23/2)*(-f2*f3+f2*f3T-f2T*f3+f2T*f3T); J13 = cos(gamma13/2)(f2*f2T*f3*f3T + f2*f2T*f3T*f3+f2T*f2*f3*f3T +f2T*f2*f3T*f3) +i*sin(gamma23/2)*(-f2*f3+f2*f3T-f2T*f3+f2T*f3T); Id1 = f1*f1T + f1T*f1; SWAP12 =( f1 * f1T * f2 * f2T )+( f1T * f2 )-( f1 * f2T )+( f1T * f1 * f2T * f2 ); J1= J12 * Id3; J2= Id1 * J23; J33=Id1*J13; J3= SWAP12 * J33 *SWAP12; S1=J1*J2*J3; S2=J2*J3*J1; S3=J2*J3*J1; ?X1=S1-S2; ?X2=S1-S3; ?X3=S2-S3;has the output[caption=Output for the automated proof of entanglement symmetry.] 
function [X1, X2, X3] = script() endwhich proves that, under every possible ordering of entanglement gates, Ĵ_̂2̂(γ_12,γ_13,γ_23) is represented by the same element of QRA. Thus, the implementation of QRA in GAALOP can serve as an instrument to check the equivalence of the circuits. It can be seen as a viable alternative to symbolic calculations in other available languages. Moreover, the found QRA representation of SWAP(s,t) can be used to generalize the proposed approach into the domain of n-player quantum cooperative games in the future. In the next section, we will demonstrate another possible application of QRA to quantum cooperative game theory.§ THE QRA REPRESENTATION OF TWO- AND THREE-PLAYER QUANTUM COOPERATIVE GAMES In this section, we establish the parametric expressions describing the quantum Shapley values of the two- and three-player quantum cooperative games using QRA. The detailed deduction of QRA representations of 1– and 2–qubit gates can be found in <cit.>. §.§ Two-player game quantum Shapley valueThe considered scheme is analogous to the two-player quantum non-cooperative game (except for the disentanglement operator) presented in <cit.>. Therefore, using the considerations established in <cit.>, we can obtain the final 2–qubit state |ψ_f⟩=(Û_1(p_1)⊗Û_2(p_2)) Ĵ(γ)|00⟩ =cosγ/2(cos(p_1)cos(p_2)f_1f_1^† f_2f_2^†-cos(p_1)sin(p_2)f_1f_1^† f_2^†-sin(p_1)cos(p_2)f_1^† f_2f_2^†+sin(p_1)sin(p_2)f_1^† f_2^†)+isinγ/2(sin(p_1)sin(p_2)f_1f_1^† f_2 f_2^†+sin(p_1)cos(p_2)f_1f_1^† f_2^†+cos(p_1)sin(p_2) f_1^† f_2f_2^†+cos(p_1)cos(p_2) f_1^† f_2^†).Thus, the quantum Shapley value of the two-player game can be represented as ϕ̃_1(N,v,γ,p_1,p_2) =(cos^2γ/2sin^2(p_1) cos^2(p_2)+sin^2γ/2cos^2(p_1)sin^2(p_2))ϕ_i({1},v_{1})+(cos^2γ/2sin^2(p_1)sin^2(p_2)+sin^2γ/2cos^2(p_1) cos^2(p_2))ϕ_i(N,v).ϕ̃_2(N,v,γ,p_1,p_2) =(cos^2γ/2cos^2(p_1)sin^2(p_2)+sin^2γ/2sin^2(p_1) cos^2(p_2))ϕ_i({2},v_{2})+(cos^2γ/2sin^2(p_1)sin^2(p_2)+sin^2γ/2cos^2(p_1) cos^2(p_2))ϕ_i(N,v).Then, according to the definition of Shapley value, we haveϕ_i({i},v_{i})=v({i}), ϕ_i(N,v)=v({i})/2+v(N)-v(N∖{i})/2.Thus, we can obtain the following expression: ϕ̃_i(N,v) =(cos^2γ/2sin^2(p_i)cos^2(p_N∖{i})+sin^2γ/2cos^2(p_i)sin^2(p_N∖{i})v({i})+(cos^2γ/2sin^2(p_1)sin^2(p_2)+sin^2γ/2cos^2(p_1) cos^2(p_2))(v({i})/2+v(N)-v(N∖{i})/2).It is easy to verify, that the obtained expression is in full accordance with the results presented in Tables <ref>, <ref>, and <ref>. Now, the QRA representation of the three-player quantum cooperative game will be described in detail.§.§ Three-player game quantum Shapley value Three-player game starts with the qubit |000⟩=f_1f_1^† f_2f_2^† f_3f_3^†.Then, the first entanglement gate Ĵ_̂2̂(γ_12,γ_13,γ_23) is applied. The gate Ĵ_̂2̂(γ_12,γ_13,γ_23) represents a series of gates (SWAP⊗Id)(Id⊗Ĵ(γ_13))(SWAP⊗Id) ( Id⊗Ĵ(γ_23)) (Ĵ(γ_12)⊗Id), with each of them being tensor product of at least two gates. Further, we will use the notation (SWAP(1,2)⊗Id(3))(Id(1)⊗Ĵ(γ_13))(SWAP(1,2)⊗Id(3)) ( Id(1)⊗Ĵ(γ_23)) (Ĵ(γ_12)⊗Id(3)) to prevent possible ambiguity and specify on which qubits the gates are applied.The first gate in the series is(Ĵ(γ_12)⊗Id(3)), whereId(3)=f_3f_3^†+f_3^† f_3.This is a tensor product of one 2–qubit gate and one 1–qubit gate. However, when the identity operator is on the right side of the tensor product, no change in sign can occur and it is sufficient to directly rewrite such gate as a multiplication. 
Thus, we directly obtain the expressionĴ(γ_12)⊗Id = cosγ_12/2(f_1 f_1^† f_2 f_2 ^† +f_1^† f_1 f_2 f_2^†+ f_1 f_1^† f_2^† f_2 +f_1^† f_1f_2^† f_2)(f_3f_3^†+f_3^† f_3)+isinγ_12/2(-f_1f_2+f_1f_2^†-f_1^† f_2+f_1^† f_2^†)(f_3f_3^†+f_3^† f_3).The next gate is Id(1)⊗Ĵ(γ_23). However, the gate Ĵ(γ_23) cannot affect signs of monomials. Thus, the following representation can be obtained:Id(1)⊗Ĵ(γ_23) = cosγ_23/2(f_1f_1^†+f_1^† f_1)(f_2 f_2^† f_3 f_3 ^† +f_2^† f_2 f_3 f_3^†+ f_2 f_2^† f_3^† f_3 +f_2^† f_2f_3^† f_3)+isinγ_23/2(f_1f_1^†+f_1^† f_1)(-f_2f_3+f_2f_3^†-f_2^† f_3+f_2^† f_3^†).Then, the input states have to be interchanged using SWAP(1,2)⊗Id(3) to entangle the remaining pair of qubits. The SWAP(1,2) gate can be represented as SWAP(1,2)=f_1 f_1^†f_2 f_2^†+ f_1^† f_2 - f_1 f_2^† + f_1^† f_1 f_2^† f_2,and, again, by straightforward multiplication we obtain SWAP(1,2)⊗Id(3) =f_1 f_1^†f_2 f_2^†f_3f_3^†+ f_1^† f_2f_3f_3^† - f_1 f_2^†f_3f_3^† + f_1^† f_1 f_2^† f_2f_3f_3^†+f_1 f_1^†f_2 f_2^†f_3^† f_3+ f_1^† f_2f_3^† f_3 - f_1 f_2^†f_3^† f_3 + f_1^† f_1 f_2^† f_2f_3^† f_3.After that, the gate Id(1)⊗Ĵ(γ_13) is applied, which completely copies the gate Id(1)⊗Ĵ(γ_23) with changed entanglement parameter. At last, to preserve the initial identification of qubits with coalitions, we interchange the states back using SWAP(1,2)⊗Id(3) once more time. Thus, the whole effect of the Ĵ_̂2̂(γ_12,γ_13,γ_23) on the initial state |000⟩ can be described by the multiplication of the basis state |000⟩ by the above-described gates in the corresponding order. Now, the entanglement gate Ĵ_̂3̂(γ_123) has to be applied. Due to the discrete nature of the entanglement parameter γ_123, we have divided this section into smaller subsections describing each possible choice of the parameter γ_123 separately.§.§.§ Case γ_123=0 and action of 1–qubit gatesIn case γ_123=0, the operator Ĵ_̂3̂(γ_123) collapses intoĴ_̂3̂(0)= Id(1)⊗Id(2)⊗Id(3). Thus, it can be described by the following element of QRAĴ_̂3̂(0) =f_1f_1^† f_2f_2^† f_3f_3^†+ f_1f_1^† f_2f_2^† f_3^† f_3+f_1f_1^† f_2^† f_2 f_3f_3^†+ f_1f_1^† f_2^† f_2f_3^† f_3+f_1^† f_1 f_2f_2^† f_3f_3^†+ f_1^† f_1 f_2f_2^† f_3^† f_3+f_1^† f_1 f_2^† f_2 f_3f_3^†+ f_1^† f_1 f_2^† f_2f_3^† f_3,which does not affect the state Ĵ_̂2̂(γ_12,γ_13,γ_23)|000⟩.Therefore, before the application of unitary operators, we obtain the state Ĵ_̂3̂(0)Ĵ_̂2̂(γ_12,γ_13,γ_23)|000⟩ =(cosγ_12/2cosγ_13/2cosγ_23/2+isinγ_12/2sinγ_13/2sinγ_23/2)f_1f_1^† f_2f_2^† f_3f_3^†+ (sinγ_12/2sinγ_13/2cosγ_23/2+icosγ_12/2cosγ_13/2sinγ_23/2)f_1f_1^† f_2^† f_3^†+(sinγ_12/2cosγ_13/2sinγ_23/2+icosγ_12/2sinγ_13/2cosγ_23/2)f_1^† f_2f_2^† f_3^†+(cosγ_12/2sinγ_13/2sinγ_23/2+isinγ_12/2cosγ_13/2cosγ_23/2)f_1^† f_2^† f_3f_3^†. Now, the tensor product of three gates Û_i, i=1,2,3, is applied to the state described above. This tensor product can be represented asÛ_1(p_1)⊗Û_2(p_2) ⊗Û_3(p_3)=(sin(p_1)(f_1-f_1^†)+cos(p_1)(f_1f_1^†+f_1^† f_1))⊗(sin(p_2)(f_2-f_2^†)+cos(p_2)(f_2f_2^†+f_2^† f_2))⊗(sin(p_3)(f_3-f_3^†)+cos(p_3)(f_3f_3^†+f_3^† f_3)), Whereas the serial circuit of more than two gates can be represented directly via multiplication, the sign-changing rule for the tensor product of two quantum gates cannot be simply generalized for the three gates. Thus, for the three-player cooperative game, it is necessary to establish the sign-changing rule for the general parallel circuit of three gates. When working with three gates, interactions between monomials become more complex and will have more possible effects. 
To handle this situation, instead of two artificial parameters, five parameters will be needed to define sign changing procedure. * At first, we assign a to monomials from the left side of multiplication with an odd number of terms of type f_j and f_j^† f_j.* Then, in the middle term of multiplication, we assign b to monomials with odd number of terms of type f_j and f_j^†, c to monomials, which have odd number of occurrences of terms of type f_j^† and f_j and, at the same time, odd number of occurrences of terms of type f_j and f^†_j f_j, and d to monomials, which have odd number of occurrences of terms of type f_j and f^†_j f_j.* At last, we assign e to monomials from the right side of multiplication with an odd number of occurrences of terms of type f_j^† and f_j. * Then, after performing the multiplication, we perform the reassignment: a,b,c,d,e,ad,be,abe,ade→ 1, ab,ac,ae,ce,de,ace→ -1.Thus, before the multiplication and the reassignment, the tensor product can be represented as follows:Û_1(p_1)⊗Û_2(p_2) ⊗Û_3(p_3) = (sin(p_1)(af_1-f_1^†)+cos(p_1)(f_1f_1^†+af_1^† f_1))(sin(p_2)(cf_2-bf_2^†)+cos(p_2)(f_2f_2^†+df_2^† f_2))(sin(p_3)(ef_3-ef_3^†)+ cos(p_3)(f_3f_3^†+f_3^† f_3)).Alternatively, the representation of this gate in QRA can be obtained via subsequent application of the previously presented sign-changing rules for the pairs of gates. Since the considered tensor product has 64 non-zero elements, we omit its full representation and directly proceed to the case γ_123=1. §.§.§ Case γ_123=1In case γ_123=1, the QRA representation of the gateĴ_̂3̂(1)=( Id(1)⊗CNOT(2,3)) (CNOT(1,2)⊗Id(3)) (H(1)⊗Id(2)⊗Id(3))has to be found. The first part of the serial gate is (H(1)⊗Id(2)⊗Id(3)), where H(1)=1/√(2)(f_1f_1^†+f_1+f_1^†- f_1^† f_1). Since the Hadamard gate is in a tensor product with identity operators on the right side, it can be represented as a straightforward multiplicationH(1)⊗Id(2)⊗Id(3) =1/√(2)(f_1f_1^†+f_1+f_1^†- f_1^† f_1)(f_2f_2^† +f_2^† f_2)(f_3f_3^† +f_3^† f_3).Then, the CNOT(1,2) gate can be written down as CNOT(1,2)=f_1f_1^† f_2f_2^†+f_1f_1^† f_2^† f_2-f_1^† f_1 f_2-f_1^† f_1 f_2^†. Thus, once more, we haveCNOT(1,2)⊗Id(3) =(f_1f_1^† f_2f_2^†+f_1f_1^† f_2^† f_2-f_1^† f_1 f_2-f_1^† f_1 f_2^†)(f_3f_3^† +f_3^† f_3).At last, according to the sign-changing procedure for the parallel circuit of two gates, we have Id(1)⊗CNOT(2,3) =(f_1f_1^† +f_1^† f_1)⊗ (f_2f_2^† f_3f_3^†+f_2f_2^† f_3^† f_3-f_2^† f_2 f_3-f_2^† f_2 f_3^†)=(f_1f_1^† +af_1^† f_1)(f_2f_2^† f_3f_3^†+f_2f_2^† f_3^† f_3-bf_2^† f_2 f_3-bf_2^† f_2 f_3^†)=f_1f_1^† f_2f_2^† f_3f_3^†+f_1f_1^† f_2f_2^† f_3^† f_3-bf_1f_1^† f_2^† f_2 f_3-bf_1f_1^† f_2^† f_2 f_3^†+af_1^† f_1 f_2f_2^† f_3f_3^†+af_1^† f_1 f_2f_2^† f_3^† f_3-abf_1^† f_1f_2^† f_2 f_3-abf_1^† f_1f_2^† f_2 f_3^†=f_1f_1^† f_2f_2^† f_3f_3^†+f_1f_1^† f_2f_2^† f_3^† f_3-f_1f_1^† f_2^† f_2 f_3-f_1f_1^† f_2^† f_2 f_3^†+f_1^† f_1 f_2f_2^† f_3f_3^†+f_1^† f_1 f_2f_2^† f_3^† f_3+f_1^† f_1f_2^† f_2 f_3+f_1^† f_1f_2^† f_2 f_3^†.Thus, we are able to obtain the complete representation of the gate Ĵ_̂3̂(1) by multiplication of the above-presented terms in the corresponding order. Due to the extensive size of the gate Ĵ_̂3̂(1) and of the gates established in the previous subsections, the representation of the final state or the full parametric expression representing the quantum Shapley value can be non-informative and confusing. Therefore, we have decided to omit them. However, as it has been already demonstrated, we are able to perform quantum computing using the GAALOP. 
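As an independent numerical cross-check of the case γ_123=1 (a small illustration using the standard matrix forms of H and CNOT in the computational basis, outside the QRA encoding), one can verify that the circuit (Id⊗CNOT)(CNOT⊗Id)(H⊗Id⊗Id) maps |000⟩ to the GHZ state (|000⟩+|111⟩)/√2, while the branch γ_123=0 acts as the identity, in agreement with the definition of Ĵ_3(γ_123).
[caption=Numerical check of the gate Ĵ_3(γ_123).]
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def J3(gamma123):
    """Three-qubit entanglement gate: identity for gamma123=0, GHZ-creating circuit for gamma123=1."""
    if gamma123 == 0:
        return np.eye(8)
    return np.kron(I2, CNOT) @ np.kron(CNOT, I2) @ np.kron(H, np.kron(I2, I2))

ket000 = np.zeros(8)
ket000[0] = 1.0
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)          # (|000> + |111>)/sqrt(2)

print(np.allclose(J3(1) @ ket000, ghz))   # True: full entanglement produces the GHZ state
print(np.allclose(J3(0), np.eye(8)))      # True: zero entanglement acts as the identity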
In the next section, it will be described how to measure the quantum states using GAALOP, in order to assign the resulting probabilities to the quantum Shapley value.§ OUTCOMES OF THE GAMES AND DISCUSSIONAt first, we calculate the resulting probabilities for the two-player quantum cooperative game using GAALOP to demonstrate the quantization code of the most simple instance. The following QRA code was compiled as a MATLAB script using GAALOPWeb. [caption=Two-player cooperative game.] i = er1 * er2 ; f1 = 0.5*( e1 + i * e3 ); f1T = 0.5*( e1 - i * e3 ); f2 = 0.5*( e2 + i * e4 ); f2T = 0.5*( e2 - i * e4 ); ket00 = f1 * f1T * f2 * f2T ; J=cos(gamma/2)*(f1 * f1T * f2 * f2T+f1T * f1 * f2 * f2T+ f1 * f1T * f2T * f2+ f1T * f1 * f2T * f2)+ i*sin(gamma/2)*(-f1 * f2 -f1T * f2 +f1*f2T+f1T * f2T); U1tensorU2=(sin(p1)*(a*f1-f1T)+cos(p1)*(f1*f1T+a*f1T*f1))* (sin(p2)*(b*f2- b*f2T)+cos(p2)*(f2*f2T+f2T*f2)); psi_final=U1tensorU2*J*ket00; ?probability_final_0=4*abs(ket00*psi_final)* abs(ket00*psi_final); ?probability_final_2=4*abs(ket00*f2*psi_final)* abs(ket00*f2*psi_final); ?probability_final_1=4*abs(ket00*f1*psi_final)* abs(ket00*f1*psi_final); ?probability_final_12=4*abs(ket00*f2*f1*psi_final)* abs(ket00*f2*f1*psi_final);It provides a script describing the measurement of the resulting quantum states. After the symbolic substitution of ab with -1 and a,b with 1, the final probabilities can be easily obtained for any given γ, p_1, and p_2. For example, for the choice γ=0, p_1=3π/8, p_2=π/8, we obtain [caption=Two-player cooperative game: output of the resulting function.] probability_final_0 =0.1250 probability_final_1 = 0.7286 probability_final_12 =0.1250 probability_final_2 = 0.0214The correctness of this result can be easily verified using (<ref>). The obtained probabilities can be straightforwardly substituted into (<ref>) to compute the quantum Shapley values of players. The previous code can be generalized for the case of a 3-qubit system describing the three-player quantum cooperative game.[caption=Three-player cooperative game.] 
i = er1 * er2 ; f1 = 0.5*(e1 + i * e4); f1T = 0.5*(e1 - i * e4); f2 = 0.5*(e2 + i * e5); f2T = 0.5*(e2 - i * e5); f3 = 0.5*(e3 + i * e6); f3T = 0.5*(e3 - i * e6); ket000 = f1 * f1T * f2 * f2T*f3*f3T ; Id1=f1*f1T+f1T*f1; Id2=f2*f2T+f2T*f2; Id3=f3*f3T+f3T*f3; SWAP=f1 * f1T * f2 * f2T+f1T * f2-f1 * f2T +f1T * f1 * f2T * f2; CNOT12=f1*f1T*f2*f2T+f1*f1T*f2T*f2-f1T*f1*f2-f1T*f1*f2T; H1=1/sqrt(2)*(f1*f1T+f1+f1T-f1T*f1); Id1tensorCNOT23=f1*f1T*f2*f2T*f3*f3T+f1*f1T*f2*f2T*f3T*f3- f1*f1T*f2T*f2*f3-f1*f1T*f2T*f2*f3T+f1T*f1*f2*f2T*f3*f3T+ f1T*f1*f2*f2T*f3T*f3+f1T*f1*f2T*f2*f3+f1T*f1*f2T*f2*f3T; J12=cos(gamma12/2)*(f1 * f1T * f2 * f2T+f1T * f1 * f2 * f2T+ f1 * f1T * f2T * f2+f1T * f1 * f2T * f2)+i*sin(gamma12/2)* (-f1 * f2 -f1T * f2 + f1*f2T+f1T * f2T); J13=cos(gamma13/2)*(f2 * f2T * f3 * f3T+f2T * f2 * f3 * f3T+ f2 * f2T * f3T * f3+f2T * f2 * f3T * f3)+i*sin(gamma13/2)* (-f2 * f3 -f2T * f3 + f2*f3T+f2T * f3T); J23=cos(gamma23/2)*(f2 * f2T * f3 * f3T+f2T * f2 * f3 * f3T+ f2 * f2T * f3T * f3+f2T * f2 * f3T * f3)+i*sin(gamma23/2)* (-f2 * f3 -f2T * f3 + f2*f3T+f2T * f3T); J2=SWAP*Id3*Id1*J13*SWAP*Id3*Id1*J23*J12*Id3; J3=(1-gamma123)*Id1*Id2*Id3+gamma123* Id1tensorCNOT23*CNOT12*Id3*H1*Id2*Id3; U1tensorU2tensorU3=(-sin(p1)*sin(p2)*f1*f2+sin(p1)*sin(p2)*f1*f2T +sin(p1)*cos(p2)*f1*f2T*f2+sin(p1)*cos(p2)*f1*f2*f2T- sin(p1)*sin(p2)*f1T*f2+sin(p1)*sin(p2)*f1T*f2T -sin(p1)*cos(p2)*f1T*f2T*f2-sin(p1)*cos(p2)*f1T*f2*f2T+ cos(p1)*sin(p2)*f1*f1T*f2-cos(p1)*sin(p2)*f1*f1T*f2T+ cos(p1)*cos(p2)*f1*f1T*f2T*f2+cos(p1)*cos(p2)*f1*f1T*f2*f2T- cos(p1)*sin(p2)*f1T*f1*f2+cos(p1)*sin(p2)*f1T*f1*f2T +cos(p1)*cos(p2)*f1T*f1*f2T*f2+cos(p1)*cos(p2)*f1T*f1*f2*f2T)* cos(p3)*(f3*f3T+f3T*f3)+(-sin(p1)*sin(p2)*f1*f2+ sin(p1)*cos(p2)*f1*f2T*f2+sin(p1)*sin(p2)*f1T*f2T- sin(p1)*cos(p2)*f1T*f2*f2T-cos(p1)*sin(p2)*f1*f1T*f2T+ cos(p1)*cos(p2)*f1*f1T*f2*f2T-cos(p1)*sin(p2)*f1T*f1*f2+ cos(p1)*cos(p2)*f1T*f1*f2T*f2)*sin(p3)*(f3-f3T)+ (-sin(p1)*sin(p2)*f1*f2T-sin(p1)*cos(p2)*f1*f2*f2T+ sin(p1)*sin(p2)*f1T*f2+sin(p1)*cos(p2)*f1T*f2T*f2- cos(p1)*sin(p2)*f1*f1T*f2-cos(p1)*cos(p2)*f1*f1T*f2T*f2- cos(p1)*sin(p2)*f1T*f1*f2T-cos(p1)*cos(p2)*f1T*f1*f2*f2T)* sin(p3)*(f3-f3T); psi_final=U1tensorU2tensorU3*J3*J2*ket000; ?probability_final_0=8*abs(ket000*psi_final)* abs(ket000*psi_final); ?probability_final_1=8*abs(ket000*f1*psi_final)* abs(ket000*f1*psi_final); ?probability_final_2=8*abs(ket000*f2*psi_final)* abs(ket000*f2*psi_final); ?probability_final_3=8*abs(ket000*f3*psi_final)* abs(ket000*f3*psi_final); ?probability_final_12=8*abs(ket000*f2*f1*psi_final)* abs(ket000*f2*f1*psi_final); ?probability_final_13=8*abs(ket000*f3*f1*psi_final)* abs(ket000*f3*f1*psi_final); ?probability_final_23=8*abs(ket000*f3*f2*psi_final)* abs(ket000*f3*f2*psi_final); ?probability_final_123=8*abs(ket000*f3*f2*f1*psi_final)* abs(ket000*f3*f2*f1*psi_final); To demonstrate the functionality of our approach, we have computed the quantized version of the game from Example <ref> under different settings. The first instance is depicted in Figure <ref>. When there is no bond between players and the player with the greatest weight is indifferent to a game process, player 1 might benefit from cooperation with player 3 (3 benefits as well due to the symmetric setting). The instance with the "stronger" bond between players 1 and 3 is depicted in Figure <ref>. The maximal change of the intitial state remains the best possible option for the players 1 and 3. However, whereas the payoff in case p_1=p_2=π/2 has decreased, players begin to benefit from not changing state at all. 
The instance with the maximal bond between players 1 and 3 is depicted in Figure <ref>. It can be seen that under γ_13=π/2 players' payoffs have decreased and now they maximally benefit from cooperation in a new sense: their actions have to be equivalent. Thus, both of them should completely change the initial state or not operate with it at all. The case when all players are entangled via a 3-qubit gate is depicted in Figure <ref>. This setting has an analogical effect as the case depicted in Figure <ref>. The last case that was considered is presented in Figure <ref>. Compared to all previously considered instances, this last setting demonstrates that the maximal possible payoff obtained by the player does not have to correspond to boundary decisions p_1=p_2=π/2 or p_1=p_2=0, but can be found inside of the considered intervals.Figures <ref>-<ref> have demonstrated that the quantum Shapley value redistributes the payoffs according to the pre-agreement between players and takes into consideration their "acceptance" of the initial state. Indeed, players with smaller weights can benefit in situations when the player with the greatest payoff is indifferent to a game process and there is no strong bond between them. However, when equally strong players are maximally related (entangled), their distribution can only decrease, once again implying that pre-agreements only damage their prosperity. It can be concluded, that the proposed quantization scheme for two-player and three-player cooperative games has allowed for non-trivial results. In particular, we have demonstrated, that, depending on the properties of v, the strong bond between equally strong players, that are not created within the negotiation process, may decrease their payoffs (in case cooperation is beneficial). This peculiar outcome can be explained by an additional risk taken by the participants due to the existence of the pre-agreement, which can be interpreted as the initial probabilistic coalition structure. Alternatively, when cooperation in the classical cooperative games is not the best option due to the definition of v, then the existence of pre-agreement may improve players' payoffs. These results indicate the potential of the proposed quantization of the cooperative games. Our study has also demonstrated that QRA allows for efficient computation within the quantum game theory framework. Moreover, QRA can be even used to perform automated proofs using the geometric algebra calculator GAALOPWeb. Thus, the language of QRA and its implementation in the GAALOP have provided us with a convenient tool to perform quantum computing and to study quantum cooperative games, in particular. The future implementation of the tensor product sign-changing rule within the GAALOP shall further simplify computations with the multiple qubit states using QRA.1 Alves Alves R., Hildenbrand, D., Hrdina, J. et al.: An Online Calculator for Quantum Computing Operations Based on Geometric Algebra, Advances in Applied Clifford Algebras. 32, (2022)alves20mathematica Alves, R., Hildenbrand, D., Steinmetz, C., Uftring, P.: Efficient Development of Competitive Mathematica Solutions Based on Geometric Algebra with GAALOPWeb. Advances in Applied Clifford Algebras. 30, (2020). Ben Benjamin S.C., Hayden, P.M.: Comment on "Quantum Games and Quantum Strategies". Physical Review Letters. 87, (2001)bosc Bostanci, J., and Watrous, J.: Quantum Game Theory and the Complexity of Approximating Quantum Nash Equilibria. Quantum. 
6, (2022) Bravyi Bravyi, S.B., Kitaev, A.Y.: Fermionic Quantum Computation. Annals of Physics. 298, (2002).Cafaro Cafaro, C., Mancini, S.:A Geometric Algebra Perspective on Quantum Computational Gates and Universality in Quantum Computing. Advances in Applied Clifford Algebras. 21, (2011) Lima de Lima Marquezino F., Portugal R., Lavor C.:A Primer on Quantum Computing. Springer, (2019) dlDoran C.,Lasenby A.: Geometric Algebra for Physicists. Cambridge University Press, (2003)dhsa Doran, C., Hestenes, D., Sommen, F., Van Acker, N.: Lie Groups as Spin Groups. Journal of Mathematical Physics. 34, (1993) W Dür W., Vidal G., Cirac J. I.: Three Qubits can be Entangled in Two Inequivalent Ways. Physical Review A. 62, (2000) Eisert Eisert J., Wilkens M., Lewenstein M.:Quantum Games and Quantum Strategies. Physical Review Letters. 83, (1999)Elgazzar Elgazzar A.S.: Quantum Prisoner’s Dilemma in a Restricted One-parameter Strategic Space. Applied Mathematics and Computation. 370, (2020) Eryganov Eryganov I., Hrdina, J.: Complex Clifford Algebra in Repeated Quantum Prisoner's Dilemma. Mathematical Methods in the Applied Sciences, (2022) Flitney Flitney A. P., Abbott D.: An Introduction to Quantum Game Theory,Fluctuation and Noise Letters. 2, (2002) Green Greenberger D.M., Horne M.A., Zeilinger A.: Going beyond Bell's Theorem, in Bell's Theorem, Quantum Theory and Conceptions of the Universe. Springer Dordrecht, (1989) hil3 Hildenbrand D.: The Power of Geometric Algebra Computing. CRC Press, Taylor & Francis Group, (2022) hil1 Hildenbrand, D.: Foundations of Geometric Algebra Computing. Springer Berlin, Heidelberg, (2013) hit Hitzer, E., Lavor, C., Hildenbrand, D.: Current Survey of Clifford Geometric Algebra Applications. Mathematical Methods in the Applied Sciences. (2023) hrdina2022quantum Hrdina J., Navrat A., Vasik P.: Quantum Computing based on Complex Clifford Algebras, Quantum Information Processing. 21, (2022).Hrdina2023 Hrdina, J., Hildenbrand, D., Návrat, A., et. al.: Quantum Register Algebra: the Mathematical Language for Quantum Computing. Quantum Information Processing. 22, (2023)Isbell Isbell, J. R.: A Class of Simple Games. Duke Mathematical Journal. 25, (1958) Lounesto Lounesto, P.: Clifford Algebra and Spinors. CUP, Cambridge, (2006) Neyman Neyman, A.: Weighted Majority Games Have Asymptotic Value. Mathematics of Operations Research. 13, (1988) Owen Owen G.: Game theory. Bingley, Emerald (2013) Peleg Peleg B., Sudhölter P.: Introduction to the Theory of Cooperative Games. Springer, (2007) per Perwass Ch.: Geometric Algebra with Applications in Engineering. Springer Verlag, (2009) Poundstone Poundstone, W.: Prisoner's Dilemma: John von Neumann, Game Theory, and the Puzzle of the Bomb. Anchor, (1993) Saad Saad, W., Han, Z., Debbah, M., et. al.: Coalitional Game Theory for Communication Networks. IEEE Signal Processing Magazine. 26, (2009) ShapleyShubik Shapley, L., and Shubik, M.: A Method for Evaluating the Distribution of Power in a Committee System. American Political Science Review. 48, (1954) Shapley Shapley, L.: A value for n-person games. In: Contributions to the theory of games. Princeton University Press (1953) Tilly Tilly, J., Chen, H., Cao, S., et. al.: The Variational Quantum Eigensolver: A review of Methods and Best Practices. Physics Reports. 986, (2022) Neumann Von Neumann, J., Morgenstern, O.: Theory of Games and Economic Behavior. Princeton University Press, NJ, USA (2007) | http://arxiv.org/abs/2310.18067v1 | {
"authors": [
"Ivan Eryganov",
"Jaroslav Hrdina",
"Aleš Návrat"
],
"categories": [
"math.QA",
"quant-ph"
],
"primary_category": "math.QA",
"published": "20231027113053",
"title": "Quantization of Two- and Three-player Cooperative Games Based on QRA"
} |
Paley-Wiener Theorem for Probabilistic Frames]Paley–Wiener Theorem for Probabilistic Frames School of Mathematical and Statistical Sciences, Clemson University, 105 Sikes Hall, Clemson, SC, USA, 29634. [email protected];The Paley–Wiener Theorem is a classical result about the stability of basis in Banach spaces claiming that if a sequence is close to a basis, then this sequence is a basis.Similar results are also extended to frames in Hilbert spaces.As the extension of finite frames for ℝ^d, probabilistic frames are probability measures on ℝ^d with finite second moments and the support of which span ℝ^d. This paper generalizes the Paley–Wiener theorem to the probabilistic frame setting. We claim that if a probability measure is close to a probabilistic frame, then this probability measure is also a probabilistic frame.[ Dongwei Chen================§ INTRODUCTION First proposed by Paley and Wiener in <cit.>, the Paley–Wiener theorem is a classical result about the stability and perturbation analysis of basis in a Hilbert space, which claims that if a sequence is close to an orthonormal basis in a Hilbert space, then this sequence also forms a basis. However, Boas in <cit.> noticed that Paley and Wiener's proof still holds in a Banach space:Let {x_i}_i=1^∞ be a basis for a Banach space 𝒳 and suppose {y_i}_i=1^∞ is a sequence of elements of 𝒳 such that ‖∑_i=1^n c_i (x_i-y_i)‖≤λ‖∑_i=1^n c_i x_i ‖for some constant 0 ≤λ <1, and all choices of scalars c_1, …, c_n (n=1, 2,3, …). Then {y_i}_i=1^∞ is a basis for the Banach space 𝒳 and is equivalent to {x_i}_i=1^∞ [Equivalence of basis {x_i}_i=1^∞ and {y_i}_i=1^∞for the Banach space 𝒳 means that there exists a bounded and invertible operator T on 𝒳 such that Tx_i=y_i, for any i.]. Since then, many variations of this stability theorem have been generalized to study the perturbation theory of basis in a Banach space<cit.>, entire functions of exponential type <cit.>, and frames<cit.>. For a more complete treatment of frame perturbation theory, see <cit.> for more information. As the extension of orthonormal basis, frames were first introduced by Duffin and Schaeffer in the context of nonharmonic analysis <cit.> and have been applied in pure and applied mathematics, for instance, the Kadison-Singer problem <cit.>,time-frequency analysis<cit.>,wavelet analysis<cit.>,coding theory<cit.>, and sampling theory<cit.>. Recall that a sequence {f_i}_i=1^∞ in a separable Hilbert space ℋ is said to be a frame for ℋ ifthere exist 0<A ≤ B < ∞ such that for any f ∈ℋ, A ‖ f‖^2 ≤∑_i =1^∞|⟨ f,f_i ⟩| ^2≤ B‖ f ‖^2.A and B are called lower and upper frame bounds. Furthermore, {f_i}_i=1^∞ is said to be a tight frame if A = B and Parseval if A = B=1. Suppose {f_i}_i=1^∞ is a frame for ℋ with bounds 0<A ≤ B < ∞.A frame {g_i}_i=1^∞ for ℋ is said to be a dual frame of {f_i}_i=1^∞ if for any f ∈ℋ, f = ∑_i =1^∞⟨ f,g_i ⟩f_i = ∑_i =1^∞⟨ f,f_i ⟩g_i.An example of dual frames for {f_i}_i=1^∞ is its canonical dual frame {S^-1f_i}_i=1^∞ with frame bounds 0< 1/B≤1/A < ∞, where S is the frame operator of {f_i}_i=1^∞:S: ℋ→ℋ,S(f) = ∑_i =1^∞⟨ f,f_i ⟩f_i.For interested readers, we refer to <cit.> for more details about frames.Christensen first generalized <ref> to study the stability of frames in Hilbert spaces by the following theorem:Let {f_i}_i=1^∞ be a frame for a Hilbert space ℋ with bounds 0<A ≤ B< ∞. 
Let {g_i}_i=1^∞ be a sequence in ℋ and assume that there exist constants λ,δ≥ 0 such that λ + δ/√(A) <1 and ‖∑_i=1^n c_i (f_i-g_i)‖≤λ‖∑_i=1^n c_i f_i ‖ + δ [ ∑_i=1^n |c_i|^2]^1/2forall scalars c_1, …, c_n (n=1, 2,3, …). Then {g_i}_i=1^∞ is a frame for ℋ with boundsA(1-(λ + δ/√(A)) )^2andB(1+λ + δ/√(B) )^2.Later, Casazza and Christensen improved <ref> by adding one more term related to the sequence {g_i}_i=1^∞ on the right-hand side of the inequality: Let {f_i}_i=1^∞ be a frame for a Hilbert space ℋ with bounds 0<A ≤ B< ∞. Let {g_i}_i=1^∞ be a sequence in ℋ and assume that there exist constants λ_1,λ_2, δ≥ 0 such that max (λ_1 + δ/√(A), λ_2) <1 and ‖∑_i=1^n c_i (f_i-g_i)‖≤λ_1‖∑_i=1^n c_i f_i ‖ + λ_2‖∑_i=1^n c_i g_i ‖ + δ [ ∑_i=1^n |c_i|^2]^1/2forall scalars c_1, …, c_n (n=1, 2,3, …). Then {g_i}_i=1^∞ is a frame for ℋ with boundsA(1-λ_1+λ_2 + δ/√(A)/1+λ_2 )^2and B(1+λ_1+λ_2 + δ/√(B)/1-λ_2 )^2.From then on, Paley-Wiener type theorems have been studied for many mathematical objects, such as Banach frames<cit.>, frames containing Riesz basis <cit.>, frame sequence <cit.>, sequences with reconstruction properties in a Banach space <cit.>, von Neumann–Schatten dual frames, <cit.>, Operator represented frames <cit.>, g–frames<cit.>, continuous frames on quaternionic Hilbert spaces <cit.>, approximately dual frames<cit.>, frames for metric spaces<cit.>, Hilbert–Schmidt frames and sequences <cit.>. Especially in <cit.>,they introduced dual frames in the perturbation condition, which differs from previous conditions to preserve Hilbert frames:Let {f_i}_i=1^∞ be a frame for the Hilbert space ℋ with bounds 0<A ≤ B< ∞, and let {h_i}_i=1^∞ denote a dual frame of {f_i}_i=1^∞ with upper frame bound 0<D<∞. Suppose {g_i}_i=1^∞ is a sequence in ℋ such that α:= ∑_i=1^∞‖ f_i-g_i ‖^2 < ∞,β:=∑_i=1^∞‖ f_i-g_i ‖‖ h_i ‖ < 1.Then {g_i}_i=1^∞ is a frame in ℋ with bounds (1-β)^2/D and B(1+√(α/B)).In this paper, we generalize the frame perturbation theory to probabilistic frames for ℝ^d that are probability measures on ℝ^d satisfying frame-like condition: A probability measure μ on ℝ^d is said to be a probabilistic frame if there exist 0<A ≤ B < ∞ such that for any x ∈ℝ^d, A ‖ x‖^2 ≤∫_ℝ^d|⟨ x,y ⟩| ^2 dμ(y)≤ B‖ x‖^2.μ is said to be a tight probabilistic frame if A = B and Parseval if A = B=1. It is worthy to note that by taking μ_f := ∑_i=1^N1/Nδ_y_i∈𝒫(ℝ^d), the (finite) frame condition for ℝ^dis equivalent to A/N‖ x‖^2 ≤∫_ℝ^d|⟨ x,y ⟩| ^2 dμ_f(y)≤B/N‖ x‖^2,for any x ∈ℝ^d. Probabilistic frames for ℝ^d were first introduced by Martin Ehler in <cit.>, then studied in directional statistics<cit.>, minimization of p–frame potential<cit.>, and further reviewed in <cit.>. C.G. Wickman generalized the definition of dual frames, analysis operator, and synthesis operator to probabilistic frames<cit.>, and used gradient flows to study the probabilistic p-frame potential<cit.>. Minimizing problems about probabilistic frames under Wasserstein metric from optimal transport, like finding the closest Parseval probabilistic frame, are open and active topics <cit.>. Some equalities and inequalities for probabilistic frames are also studied in <cit.>. For interested readers, we refer to <ref> for detailed introductions to probabilistic frames, optimal transport (and invertibility of linear operators on Banach spaces).This paper is organized as follows. In <ref>, we describe the mathematical preliminaries.In <ref>, we generalize Paley-Wiener theorem to probabilistic frames. 
First, we generalize <ref> to <ref> by using integration and continuous functions with compact support as test functions. Then in<ref>, we consider a special case that is inspired by a particular case in the perturbation condition with λ=0, δ = √(R) in <ref>. Furthermore, we add one more term related to the probability measure on the right-hand side of the inequality, which generalizes <ref> and <ref> to <ref>.In <ref>, we give a sufficient perturbation condition by including the probabilistic dual frames, which generalizes <ref> to <ref>. Then in <ref>, we consider a particular situation where the probabilistic dual frame is given by the canonical probabilistic dual frame. Nevertheless, the perturbation condition is given by a product measure μ⊗η. As stated in <ref>, we claim that the perturbation conditionin <ref> is also true if we use any transport coupling γ∈Γ(μ, η). In the end, we use a different reconstruction formula given by the canonical probabilistic Parseval frame and further generalize <ref> and <ref> to <ref>. § MATHEMATICAL PRELIMINARIESThis section introduces probabilistic frames and the invertibility of (bounded) linear operators on Banach spaces. We also briefly introduce optimal transport and Wasserstein distance that is often used to quantify the distance between two probabilistic frames.§.§ Probabilistic Frames and Optimal TransportLet 𝒫(ℝ^d) denote the set of Borel probability measures on ℝ^d, ‖·‖ the Euclidean norm, and 𝒫_2(ℝ^d) the set of probability measure μ∈𝒫(ℝ^d) with finite second moments M_2(μ) , i.e., M_2(μ) := ∫_ℝ^d‖ x ‖ ^2 dμ(x) < + ∞.Let B_r(x) be the open ball centered at x with radius r>0.The support of μ∈𝒫(ℝ^d) is defined by supp(μ) = { x ∈ℝ^d: for any r>0, μ(B_r(x))>0}. Before going further, we need to introduce the pushforward of a probability measure by a measure map.Let M,N>0 and μ∈𝒫(ℝ^M). The pushforward of μ bya measurable map f: ℝ^M →ℝ^Nis a probability measure in ℝ^N, which is denotedby f_#μ and f_#μ (E) := (μ∘ f^-1) (E) = μ (f^-1(E)), for any Borel set E ⊂ℝ^N.Furthermore, we have the change-of-variables formula:∫_ℝ^N g(y) d (f_#μ)(y) = ∫_ℝ^M g(f(x)) d μ(x),where g is measurable such that g ∈ L^1(ℝ^N, f_#μ) and g ∘ f ∈ L^1(ℝ^M,μ). We have the following definitions for probabilistic frames and frame operators. μ∈𝒫(ℝ^d) is said to be a probabilistic frame if there exist 0<A ≤ B < ∞ such that for any x ∈ℝ^d, A ‖ x‖^2 ≤∫_ℝ^d⟨ x,y ⟩ ^2 dμ(y)≤ B‖ x‖^2.μ is said to be a tight probabilistic frame if A = B and Parseval if A = B=1. And μ is said to be a Bessel probability measure if only the upper bound holds.By Cauchy-Schwartz inequality, it is easy to show that if μ∈𝒫_2(ℝ^d), then μ is a Bessel probability measure with bound M_2(μ). Let μ be a probabilistic frame. The frame operator S_μ for μ is defined by S_μ := ∫_ℝ^dy y^T d μ(y).Note that S_μ is a d× d matrix and S_μ >0 means S_μ is positive definite. ‖ S_μ‖_2 is used to denote the 2-matrix norm of S_μ. Let the frame bounds of μ be 0<A ≤ B < ∞ and S_μ>0. SinceS_μ is symmetric and each eigenvalue of S_μ is within [A, B], thenA ≤‖ S_μ‖_2 ≤ B, 1/B≤‖ S_μ^-1‖_2 ≤1/A, 1/√(B)≤‖ S_μ^-1/2‖_2 ≤1/√(A). We also have the following characterization for probabilistic frames.Let μ∈𝒫(ℝ^d). Then the following holds: (1) μ is a probabilistic frame ⇔ S_μ >0 (positive definite) ⇔ μ∈𝒫_2(ℝ^d) and span{supp(μ)} = ℝ^d. (2)μ is a tight probabilistic frame with bound A > 0 ⇔ S_μ = A I_d × dwhere I_d× d is the d × d identity matrix. Furthermore, if μ is tight with bound A>0, f=1/A∫_ℝ^d⟨ f, x ⟩ x dμ(x),∀ f ∈ℝ^d. 
(3)μ is a Parseval probabilistic frame ⇔ S_μ = I_d × d. An obvious metric to quantify the distance between two probabilistic frames μ and νis 2-Wasserstein metric W_2(μ,ν) in optimal transport, which is given byW_2^2(μ,ν) = γ∈Γ(μ,ν)inf ∫_ℝ^d×ℝ^d‖ x-y ‖ ^2 dγ(x,y),where Γ(μ,ν) is the set of transport couplings on ℝ^d×ℝ^d with marginals μ and νΓ(μ,ν) ={γ∈𝒫(ℝ^d ×ℝ^d): P_1_#γ = μ,P_2_#γ = ν},and P_1 and P_2 are the projections on x and y, i.e., for any (x,y) ∈ℝ^d ×ℝ^d, P_1(x, y) = x, P_2(x,y) = y.Similarly, one could define the probabilistic dual frames for a given frame.Let μ be a probabilistic frame for ℝ^d. The set of transport duals for μ is defined asD_μ := {ν∈𝒫_2(ℝ^d): ∃ γ∈Γ(μ, ν)with ∫_ℝ^d ×ℝ^d xy^T dγ(x, y) = I_d × d}.Furthermore, D_μ is not empty and a compact subset of 𝒫_2(ℝ^d) with respect to the weak topology.Let μ be a probabilistic frame for ℝ^d and take ν∈ D_μ. Then ν is also a probabilistic frame.Therefore,the following definition is well-defined. Let μ be a probabilistic frame for ℝ^d. ν∈𝒫_2(ℝ^d) is called a probabilistic dual frame of μ with respect to γ∈Γ(μ, ν) if ∫_ℝ^d ×ℝ^d xy^T dγ(x, y) = I_d × d. Let μ be a probabilistic frame for ℝ^d with bounds 0<A≤ B<∞. S_μ^-1_#μ is said to be the canonical probabilistic dual frame of μ since ν:=S_μ^-1_#μ is the transport dual of μ with respect to γ:= (Id, S_μ^-1)_#μ∈Γ(μ, ν), and the frame bounds are 0<1/B≤1/A< ∞.If μ is a probabilistic frame, there are two canonical probabilistic frames related to μ: the canonical Parseval frame S_μ^-1/2_#μ and the canonical dual frameS_μ^-1_#μ. Therefore, we have the following reconstruction formulas: for any f ∈ℝ^d, f=∫_ℝ^d⟨ f, S_μ^-1/2 x ⟩ S_μ^-1/2 x dμ(x)= ∫_ℝ^d⟨ S_μ^-1/2 f, x ⟩ S_μ^-1/2 x dμ(x), f= ∫_ℝ^d⟨f, S_μ^-1x ⟩ xdμ(x) =∫_ℝ^d⟨S_μ^-1 f, x ⟩ xdμ(x).To show the perturbation theory that includes probabilistic dual frames, we need the following gluing lemma that "glues" two transport couplings together. Similarly, P_1, P_2, P_12, P_23 are projections, i.e.,for any (x,y,z) ∈ℝ^d ×ℝ^d ×ℝ^d, P_1(x, y, z) = x,P_2(x,y,z) = y,P_12(x, y, z)=(x,y),P_23(x, y, z)=(y,z). Let μ_1, μ_2, μ_3 ∈𝒫_2(ℝ^d). Suppose γ^12∈Γ(μ_1, μ_2) and γ^23∈Γ(μ_2, μ_3) such that P_2_#γ^12 = P_1_#γ^23 = μ_2. Then there exists γ^123∈𝒫(ℝ^d ×ℝ^d ×ℝ^d) such that P_12_#γ^123 = γ^12 and P_23_#γ^123 = γ^23. With Gluing Lemma, we claim the following proposition without proof. Letμ be a probabilistic frame for ℝ^d and ν a probabilistic dual frame of μ with respect to γ_12∈Γ(μ, ν). Suppose η∈𝒫_2(ℝ^d) and γ_23∈Γ(ν, η), then there exists π∈𝒫(ℝ^d ×ℝ^d ×ℝ^d) such that P_12_#π = γ_12,P_23_#π = γ_23. For a complete introduction to probabilistic frames and optimal transport, we refer to <cit.> and <cit.> for more details. §.§ Invertibility of Linear Operators on Banach SpacesThis subsection gives a brief introduction to the invertibility of linear operators on Banach spaces, which is used many times in this paper. It is well-known that a bound linear operator U on a Banach space 𝒳 is invertible if ‖ I-U ‖ <1 where Iis identity operator in 𝒳, and ‖ U^-1‖≤1/1-‖ I-U ‖. Casazza and Christensen further generalized this result in the following lemma:Let 𝒳, 𝒴 be Banach spaces andU: 𝒳→𝒳a linear operator on 𝒳. If there existλ_1, λ_2 ∈ [0,1) such that for any x ∈𝒳, ‖ Ux -x ‖≤λ_1 ‖ x ‖ + λ_2 ‖ Ux ‖.Then U is bounded invertible, and for any x ∈𝒳, 1-λ_1/1+λ_2‖ x ‖≤‖ Ux ‖≤1+λ_1/1-λ_2‖ x ‖,1-λ_2/1+λ_1‖ x ‖≤‖ U^-1x ‖≤1+λ_2/1-λ_1‖ x ‖.We also have the following extension result for linear operators on Banach spaces:Suppose 𝒳 and 𝒴 are Banach spaces. 
Let U:𝒳→𝒴 be a bounded linear operator, 𝒳_0 a dense subspace of 𝒳, and V:𝒳→𝒴 a linear mapping. If for any x ∈𝒳_0, ‖ Ux -Vx ‖≤λ_1 ‖ Ux | + λ_2 ‖ Vx ‖ + δ‖ x ‖,where λ_1, λ_2, δ∈ [0,1). Then V has a unique extension to a bounded linear operator (of the same norm) from 𝒳 to 𝒴, and the extension still satisfies the inequality. For the mathematical proof of the above lemma and corollary, we refer to Casazza and Christensen's paper <cit.> for more details. § PALEY-WIENER THEOREM FOR PROBABILISTIC FRAMES In this section, we generalize the Paley-Wiener theorem to probabilistic frames. We first generalize <ref> to <ref>. Then by using Casazza and Christensen's criteria for the invertibility of linear operators in <ref>, we generalize<ref> to <ref>. Recall that ν∈𝒫_2(ℝ^d) means that ν has finite second moment , i.e., M_2(ν) := ∫_ℝ^d‖ x ‖ ^2 dν(x) < + ∞.And if ν∈𝒫_2(ℝ^d), ν is a Bessel probability measure with bound M_2(ν). Let C_c(ℝ^d) be the set of continuous functions on ℝ^d with compact support. Then we have the first perturbation theorem about probabilistic frames.Let μ be a probabilistic frame for ℝ^d with bounds 0<A≤ B < ∞ and ν∈𝒫_2(ℝ^d). Suppose there exist λ,δ≥ 0 such that λ + δ/√(A) <1 and ‖∫_ℝ^d w(x)x dμ(x) - ∫_ℝ^d w(y)y dν(y) ‖≤λ‖∫_ℝ^d w(x)x dμ(x) ‖ + δ‖w ‖_L^2(μ)for all w ∈ C_c(ℝ^d). Then ν is a probabilistic frame for ℝ^d with boundsA^2(1-(λ + δ/√(A)) )^2/ M_2(ν) andM_2(ν). Since ν∈𝒫_2(ℝ^d), then ν is a Bessel probabilistic measure with bound M_2(ν) = ∫_ℝ^d‖ y ‖^2 dν(y). Now let us get the lower frame bound. Let U:L^2(μ) →ℝ^d be the (synthesis) operator for probabilistic frame μ, which is defined by U(w) := ∫_ℝ^d w(x) x dμ(x).Then U is bounded linear and ‖ U ‖≤ M_2(μ).Similarly, we define another linear operator T:L^2(μ) →ℝ^d by T(w) := ∫_ℝ^d w(y) y dν(y).Note that T is well-defined. By definition, for any w ∈ C_c(ℝ^d), we have‖U(w)- T(w) ‖≤λ‖ U(w) ‖ + δ‖w ‖_L^2(μ) .Since C_c(ℝ^d) is dense in L^2(μ), then by <ref>, we know that T could be extended uniquely to a bounded linear operator that is still denoted by T, and for any w ∈ L^2(μ),‖ U(w) - T(w) ‖≤λ‖ U(w) ‖ + δ‖w ‖_L^2(μ) .Therefore, for any w ∈ L^2(μ),‖ T(w)‖≤‖ U(w)‖ +‖ U(w) - T(w)‖≤ ((λ +1)‖ U‖ +δ) ‖w ‖_L^2(μ) < +∞.Thus T is well-defined and ‖ T‖≤(λ +1)‖ U‖ +δ < +∞.Now let us define U^+: ℝ^d → L^2(μ) by (U^+x)(·) := (U^*(UU^*)^-1x) (·) = (U^*(S_μ^-1x)) (·) = ⟨ S_μ^-1x, ·⟩∈ L^2(μ).where U^* is the adjoint operator of U and UU^* = S_μ. Then ‖ U^+x ‖_L^2(μ)^2 = ∫_ℝ^d⟨ S_μ^-1x, y ⟩ ^2 dμ(y) = ∫_ℝ^d⟨ x, S_μ^-1y ⟩ ^2 dμ(y),where the last equality is obtained since S_μ^-1 is self-adjoint. Since S_μ^-1_#μ isthe probabilistic dual frame with bounds 1/B and 1/A, then‖ U^+x ‖_L^2(μ)^2 = ∫_ℝ^d⟨ x,y ⟩ ^2 d(S_μ^-1_#μ)(y) ≤1/A‖x ‖^2. Replacing w in <ref> by U^+x leads to‖ x - T(U^+x) ‖≤λ‖ x‖ + δ‖U^+x ‖_L^2(μ)≤ (λ + δ/√(A)) ‖x ‖.Therefore, ‖ I - TU^+‖≤λ + δ/√(A) <1. Then TU^+ is invertible and ‖ (TU^+)^-1‖≤1/1- (λ + δ/√(A)).Note that any x ∈ℝ^d could be written as x = TU^+(TU^+)^-1x = ∫_ℝ^d⟨ S_μ^-1(TU^+)^-1x, y ⟩ y dν(y).Therefore, ‖x ‖^4=⟨ x, x ⟩^2 =|∫_ℝ^d⟨ S_μ^-1(TU^+)^-1x, y ⟩⟨ x, y ⟩ dν(y) |^2 = ⟨⟨ S_μ^-1(TU^+)^-1x, ·⟩,⟨ x, ·⟩⟩_L^2(ν)^2 ≤∫_ℝ^d⟨ S_μ^-1(TU^+)^-1x, y ⟩^2 dν(y) ∫_ℝ^d⟨ x, y ⟩^2 dν(y)≤‖ S_μ^-1(TU^+)^-1x ‖^2 ∫_ℝ^d‖ y ‖^2 dν(y) ∫_ℝ^d⟨ x, y ⟩^2 dν(y),where the last two inequalities come from Cauchy–Schwarz inequality and the second equality is obtained since ⟨ x, ·⟩∈ L^2(ν) and ⟨ S_μ^-1(TU^+)^-1x, ·⟩∈ L^2(ν). Let ‖ S_μ^-1‖_2 be the 2-matrix norm of S_μ^-1. Since S_μ^-1 is symmetric, then ‖ S_μ^-1‖_2 is the largest eigenvalue of S_μ^-1. 
Therefore, ‖ S_μ^-1‖_2 ≤1/A. Thus ‖x ‖^4 ≤‖ S_μ^-1‖_2^2‖(TU^+)^-1‖^2‖ x ‖^2∫_ℝ^d‖ y ‖^2 dν(y) ∫_ℝ^d⟨ x, y ⟩^2 dν(y)≤1/A^2 (1/1- (λ + δ/√(A)))^2 ‖ x ‖^2∫_ℝ^d‖ y ‖^2 dν(y) ∫_ℝ^d⟨ x, y ⟩^2 dν(y). Thus for any x ∈ℝ^d, A^2(1-(λ + δ/√(A)))^2/∫_ℝ^d‖ y ‖^2 dν(y)‖x ‖^2≤∫_ℝ^d⟨ x, y ⟩^2 dν(y) ≤∫_ℝ^d‖ y ‖^2 dν(y)‖x ‖^2. That is to say, ν is a probabilistic frame with bounds A^2(1-(λ + δ/√(A)) )^2/ M_2(ν) and M_2(ν), where M_2(ν):=∫_ℝ^d‖ y ‖^2 dν(y). The following lemma is inspired by a particular case of the condition in <ref> with λ=0 and δ = √(R), formulated in the "adjoint" form of the "synthesis" operator of the signed measure μ -ν. However, it is not accurate to define this "adjoint". An easier way to prove the result is to apply the definition of probabilistic frames directly. Let μ be a probabilistic frame for ℝ^d with bounds 0<A≤ B < ∞ and ν∈𝒫(ℝ^d). Suppose there exists a constant R with 0<R < A such that for any x ∈ℝ^d, |∫_ℝ^d⟨ x, y ⟩^2 dμ(y) - ∫_ℝ^d⟨ x, z ⟩^2dν(z) |≤ R‖x ‖^2, or equivalently, for any x ∈𝕊^d-1 (the unit sphere in ℝ^d), |∫_ℝ^d⟨ x, y ⟩^2 dμ(y) - ∫_ℝ^d⟨ x, z ⟩^2dν(z) |≤ R. Then ν is a probabilistic frame for ℝ^d with bounds A-R and B+R. For any x ∈ℝ^d, we have -R ‖x ‖^2≤∫_ℝ^d⟨ x, z ⟩^2 dν(z) - ∫_ℝ^d⟨ x, y ⟩^2dμ(y) ≤ R‖x ‖^2. Therefore, ∫_ℝ^d⟨ x, y ⟩^2dμ(y) -R ‖x ‖^2≤∫_ℝ^d⟨ x, z ⟩^2 dν(z)≤∫_ℝ^d⟨ x, y ⟩^2dμ(y)+ R‖x ‖^2. Since μ is a probabilistic frame for ℝ^d with bounds A and B, then A ‖x ‖^2≤∫_ℝ^d⟨ x, y ⟩^2dμ(y) ≤ B‖x ‖^2. Therefore, for any x ∈ℝ^d, (A-R) ‖x ‖^2 ≤∫_ℝ^d⟨ x, z ⟩^2 dν(z)≤(B + R)‖x ‖^2. That is to say, ν is a probabilistic frame for ℝ^d with bounds A-R and B+R. Furthermore, <ref> can be improved to any coupling γ∈Γ(μ, ν) with marginals μ and ν. Let μ be a probabilistic frame for ℝ^d with bounds 0<A ≤ B<∞, ν∈𝒫(ℝ^d), and γ∈Γ(μ, ν). Suppose there exists R with 0<R < A such that for any x ∈ℝ^d, |∫_ℝ^d ×ℝ^d⟨ x, y ⟩^2 - ⟨ x, z ⟩^2dγ(y, z) |≤ R‖x ‖^2, or ∫_ℝ^d ×ℝ^d|⟨ x, y ⟩^2 - ⟨ x, z ⟩^2 |dγ(y, z) ≤ R‖x ‖^2. Then ν is a probabilistic frame for ℝ^d with bounds A-R and B+R. Since γ∈Γ(μ, ν), then |∫_ℝ^d⟨ x, y ⟩^2 dμ(y) - ∫_ℝ^d⟨ x, z ⟩^2dν(z) | = |∫_ℝ^d ×ℝ^d⟨ x, y ⟩^2 - ⟨ x, z ⟩^2dγ(y, z) |≤∫_ℝ^d ×ℝ^d|⟨ x, y ⟩^2 - ⟨ x, z ⟩^2 |dγ(y, z). Indeed, the test functions in <ref> can be improved to continuous functions in C(ℝ^d). Suppose μ is a probabilistic frame for ℝ^d with bounds 0 < A ≤ B < ∞ and ν∈𝒫(ℝ^d). If there exists 0 ≤ R <A such that sup_w ∈ C(ℝ^d)|∫_ℝ^dw(y)dμ(y) - ∫_ℝ^dw(y)dν(y) |≤ R, then ν is a probabilistic frame for ℝ^d with bounds A-R and B+R. The proof is clear by taking the test functions to be w_x(y) = ⟨x/‖ x ‖, y ⟩^2 where x is nonzero. Recall that C_c(ℝ^d) is the set of continuous functions on ℝ^d with compact support and M_2(ν):=∫_ℝ^d‖ y ‖^2 dν(y) the second moment of the probability measure ν. By adding one more term related to the probability measure ν on the right-hand side of the inequality in <ref>, we get a more general perturbation result that corresponds to the Paley–Wiener theorem for frames in <ref>. Let μ be a probabilistic frame for ℝ^d with bounds 0<A≤ B < ∞ and ν∈𝒫_2(ℝ^d). If there exist λ_1,λ_2, δ≥ 0 such that max(λ_1 + δ/√(A), λ_2) <1 and ‖∫_ℝ^d w(x)x dμ(x) - ∫_ℝ^d w(y)y dν(y) ‖≤λ_1 ‖∫_ℝ^d w(x)x dμ(x) ‖ + λ_2 ‖∫_ℝ^d w(y)y dν(y) ‖ + δ‖w ‖_L^2(μ) for all w ∈ C_c(ℝ^d). Then ν is a probabilistic frame for ℝ^d with bounds A^2(1-(λ_1 + δ/√(A)))^2/(1+λ_2)^2 M_2(ν) and M_2(ν). Since ν∈𝒫_2(ℝ^d), ν is Bessel with bound M_2(ν):=∫_ℝ^d‖ y ‖^2 dν(y). Now let us get the lower frame bound.
Similarly,let us define U:L^2(μ) →ℝ^d and T:L^2(μ) →ℝ^d in the following way U(w) := ∫_ℝ^d w(x) x dμ(x), T(w) := ∫_ℝ^d w(y) y dν(y).Then U is bounded linear and ‖ U ‖≤ M_2(μ). Furthermore, T is well-defined. Since C_c(ℝ^d) is dense in L^2(μ), then by <ref>, we know that T could be extended uniquely to a bounded linear operator that is still denoted by T, and for any w ∈ L^2(μ),‖ U(w) - T(w) ‖≤λ_1‖ U(w) ‖ + λ_2‖ T(w) ‖ + δ‖w ‖_L^2(μ) .Therefore, for any w ∈ L^2(μ),‖ T(w)‖≤‖ U(w)‖ +‖ U(w) - T(w)‖≤ ((λ_1 +1)‖ U‖ +δ) ‖w ‖_L^2(μ) + λ_2‖ T(w)‖.Thus T is well-defined and ‖ T‖≤ (λ_1 +1)‖ U‖ +δ/1-λ_2 < +∞.Similarly,let us define U^+: ℝ^d → L^2(μ) by (U^+x)(·) := (U^*(UU^*)^-1x) (·) = (U^*(S_μ^-1x)) (·) = ⟨ S_μ^-1x, ·⟩∈ L^2(μ).where U^* is the adjoint operator of U and UU^* = S_μ. Then ‖ U^+x ‖_L^2(μ)^2= ∫_ℝ^d⟨ S_μ^-1x, y ⟩ ^2 dμ(y) = ∫_ℝ^d⟨ x, S_μ^-1y ⟩ ^2 dμ(y) =∫_ℝ^d⟨ x,y ⟩ ^2 d(S_μ^-1_#μ)(y) ≤1/A‖x ‖^2. Replacing w in <ref> by U^+x leads to‖ x - T(U^+x) ‖ ≤λ_1‖ x‖ + λ_2 ‖ T(U^+x) ‖ + δ‖U^+x ‖_L^2(μ)≤ (λ_1 + δ/√(A)) ‖x ‖ + λ_2 ‖ T(U^+x) ‖.Since max(λ_1 + δ/√(A), λ_2) <1, by <ref>, we know that TU^+ is invertible, and ‖ (TU^+)^-1‖≤1+λ_2/1- (λ_1 + δ/√(A)).Similarly, any x ∈ℝ^d could be written as x = TU^+(TU^+)^-1x = ∫_ℝ^d⟨ S_μ^-1(TU^+)^-1x, y ⟩ y dν(y).Therefore, ‖x ‖^4=⟨ x, x ⟩^2 = |∫_ℝ^d⟨ S_μ^-1(TU^+)^-1x, y ⟩⟨ x, y ⟩ dν(y) |^2 ≤∫_ℝ^d⟨ S_μ^-1(TU^+)^-1x, y ⟩^2 dν(y) ∫_ℝ^d⟨ x, y ⟩^2 dν(y)≤‖ S_μ^-1‖_2^2‖(TU^+)^-1‖^2‖ x ‖^2∫_ℝ^d‖ y ‖^2 dν(y) ∫_ℝ^d⟨ x, y ⟩^2 dν(y)≤1/A^2 (1+λ_2/1- (λ_1 + δ/√(A)))^2 ‖ x ‖^2∫_ℝ^d‖ y ‖^2 dν(y) ∫_ℝ^d⟨ x, y ⟩^2 dν(y),where ‖ S_μ^-1‖_2 is the 2-matrix norm of S_μ^-1 and ‖ S_μ^-1‖_2 ≤1/A. Thus for any x ∈ℝ^d, A^2(1-(λ_1 + δ/√(A)))^2/(1+λ_2)^2∫_ℝ^d‖ y ‖^2 dν(y)‖x ‖^2≤∫_ℝ^d⟨ x, y ⟩^2 dν(y) ≤∫_ℝ^d‖ y ‖^2 dν(y)‖x ‖^2.That is to say, ν is a probabilistic frame for ℝ^d with boundsA^2(1-(λ_1 + δ/√(A)))^2/(1+λ_2)^2 M_2(ν) and M_2(ν)where M_2(ν) := ∫_ℝ^d‖ y ‖^2 dν(y).Since δ/√(A)≤√(B)δ/A, the condition λ + δ/√(A) <1 in <ref> and max(λ_1 + δ/√(A), λ_2) <1 in <ref> could be replaced by λ + √(B)δ/A <1 and max(λ_1 + √(B)δ/A, λ_2) <1, respectively.In this case, the lower frame bounds for ν are A^2(1-(λ + √(B)δ/A) )^2/ M_2(ν) andA^2(1-(λ_1 + √(B)δ/A) )^2/(1+λ_2)^2M_2(ν)This is due to another way to get ‖ U^+x ‖_L^2(μ):‖ U^+x ‖_L^2(μ)^2 = ∫_ℝ^d⟨ S_μ^-1x, y ⟩ ^2 dμ(y)≤ B‖ S_μ^-1x ‖^2 ≤ B‖ S_μ^-1‖_2^2 ‖ x ‖^2 ≤B/A^2‖ x ‖^2. § PERTURBATIONS INCLUDING PROBABILISTIC DUAL FRAMESLet {f_i}_i=1^∞ be a frame for the Hilbert space ℋ with bounds 0<A ≤ B < ∞. Recall that a frame {h_i}_i=1^∞ for ℋ is a dual frame of{f_i}_i=1^∞ if for any f ∈ℋ, f = ∑_i =1^∞⟨ f,h_i ⟩f_i = ∑_i =1^∞⟨ f,f_i ⟩h_i .Suppose the upper frame bound for {h_i}_i=1^∞ is D and {g_i}_i=1^∞ is a sequence in ℋ such that α:= ∑_i=1^∞‖ f_i-g_i ‖^2 < ∞,β:=∑_i=1^∞‖ f_i-g_i ‖‖ h_i ‖ < 1.Then by <ref>, {g_i}_i=1^∞ is a frame in ℋ with bounds (1-β)^2/D, B(1+√(α/B)).In this section, we generalize the above result to the probabilistic frames setting: we give a sufficient perturbation condition where the probabilistic dual frames are used,which is similar to <ref> without the quadratic close condition α<∞.Let μ be a probabilistic frame for ℝ^d with bounds 0<A ≤ B < ∞. Recall that ν is said to be a probabilistic dual frame of μ with respect to γ_12∈Γ(μ, ν) if ∫_ℝ^d ×ℝ^d xy^T dγ_12(x, y) = I_d × d.Furthermore, suppose η∈𝒫_2(ℝ^d) and γ_23∈Γ(ν, η). Then by <ref>(gluing lemma) and <ref>,there exists π∈𝒫(ℝ^d ×ℝ^d ×ℝ^d) such that P_12_#π = γ_12,P_23_#π = γ_23. 
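As a concrete illustration of the objects recalled above (ours, not part of the original text), the following Python/numpy sketch builds a probabilistic frame μ with finite support, its frame operator S_μ, and the canonical probabilistic dual frame S_μ^-1_#μ coupled to μ by γ_12 = (Id, S_μ^-1)_#μ; it then checks numerically that ∫ x y^T dγ_12(x, y) = I_{d×d} and that the dual has frame bounds 1/B and 1/A.

import numpy as np

# A probabilistic frame with finite support: mu puts mass p_i on the vector x_i.
# Its frame operator is S_mu = sum_i p_i x_i x_i^T, and the optimal frame bounds
# A, B are the extreme eigenvalues of S_mu, since int <x,y>^2 dmu(y) = x^T S_mu x.
rng = np.random.default_rng(0)
d, n = 3, 8
X = rng.normal(size=(n, d))            # support points x_1, ..., x_n
p = rng.random(n); p /= p.sum()        # probability weights

S = sum(pi * np.outer(xi, xi) for pi, xi in zip(p, X))
A, B = np.linalg.eigvalsh(S)[[0, -1]]
assert A > 0                           # mu is a probabilistic frame with bounds A <= B

# Canonical probabilistic dual frame: the pushforward of mu under S^{-1}, coupled
# to mu by gamma_12 = (Id, S^{-1})_# mu.  The defining identity
# int x y^T d gamma_12(x, y) = I_d then reads sum_i p_i x_i (S^{-1} x_i)^T = I.
S_inv = np.linalg.inv(S)
M = sum(pi * np.outer(xi, S_inv @ xi) for pi, xi in zip(p, X))
assert np.allclose(M, np.eye(d))

# The frame operator of the dual is S^{-1}, so its frame bounds are 1/B and 1/A.
S_dual = sum(pi * np.outer(S_inv @ xi, S_inv @ xi) for pi, xi in zip(p, X))
assert np.allclose(S_dual, S_inv)
print("frame bounds:", A, B, "  dual frame bounds:", 1 / B, 1 / A)
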
Now we are ready to state the main perturbation theorem about probabilistic frames where the probabilistic dual frame is included. Let μ be a probabilistic frame for ℝ^d and ν the probabilistic dual frame of μ with respect to γ_12∈Γ(μ, ν). Let η∈𝒫_2(ℝ^d), γ_23∈Γ(ν, η), and π∈𝒫(ℝ^d ×ℝ^d ×ℝ^d) be the coupling with marginals γ_12 and γ_23 obtained by Gluing Lemma. Suppose σ:= ∫_ℝ^d ×ℝ^d ×ℝ^d‖ x-z ‖‖ y ‖ dπ(x,y,z) <1, then η is a probabilistic frame for ℝ^d with bounds (1- σ)^2/M_2(ν) andM_2(η). And if the upper frame bound for ν is 0<D<∞, then the frame bounds for η are(1- σ)^2/D andM_2(η). Since η∈𝒫_2(ℝ^d), then η is Bessel with boundM_2(η):=∫_ℝ^d‖ z ‖^2 d η(z) < ∞. Next let us show the lower frame bound. Define a linear operator L: ℝ^d →ℝ^d byL(f) = ∫_ℝ^d ×ℝ^d⟨ f, y ⟩ zd γ_23(y,z),for any f ∈ℝ^d.Since ν is the probabilistic dual frame of μ with respect to γ_12∈Γ(μ, ν), thenf = ∫_ℝ^d ×ℝ^d⟨ f, y ⟩ x dγ_12(x, y) ,for any f ∈ℝ^d.Therefore, ‖ f -L(f) ‖ =‖∫_ℝ^d ×ℝ^d⟨ f, y ⟩ x dγ_12(x, y) - ∫_ℝ^d ×ℝ^d⟨ f, y ⟩ zd γ_23(y,z) ‖= ‖∫_ℝ^d ×ℝ^d ×ℝ^d⟨ f, y ⟩ (x- z)d π(x, y,z) ‖≤∫_ℝ^d ×ℝ^d ×ℝ^d‖ y ‖‖ x- z‖ d π(x, y,z)‖ f‖ = σ‖ f‖.Thus L: ℝ^d →ℝ^d is invertible and ‖ L^-1‖≤1/1-σ. Note that for any f ∈ℝ^d, f = LL^-1(f) = ∫_ℝ^d ×ℝ^d⟨ L^-1f, y ⟩ zd γ_23(y,z).Therefore, ‖f ‖^4=⟨ f, f ⟩^2 = |∫_ℝ^d ×ℝ^d⟨ L^-1f, y ⟩⟨ f, z ⟩ d γ_23(y,z) |^2 ≤∫_ℝ^d ×ℝ^d⟨ L^-1f, y ⟩^2 dγ_23(y,z) ∫_ℝ^d ×ℝ^d⟨ f, z ⟩^2 dγ_23(y,z) = ∫_ℝ^d⟨ L^-1f, y ⟩^2 dν(y) ∫_ℝ^d⟨ f, z ⟩^2 dη(z)≤‖ L^-1‖^2‖ f ‖^2∫_ℝ^d‖ y ‖^2 dν(y) ∫_ℝ^d⟨ f, z ⟩^2 dη(z)≤1/(1- σ)^2∫_ℝ^d‖ y ‖^2 dν(y) ‖ f ‖^2 ∫_ℝ^d⟨ f, z ⟩^2 dη(z).where the first inequality is due to Cauchy Schwarz inequality and the second equality comes from γ_23∈Γ(ν, η). Thus for any f ∈ℝ^d, (1- σ)^2/∫_ℝ^d‖ y ‖^2 dν(y)‖f ‖^2≤ ∫_ℝ^d⟨ f, z ⟩^2 dη(z) ≤∫_ℝ^d‖ z ‖^2 d η(z)‖f ‖^2. Therefore, η is a probabilistic frame for ℝ^d with bounds (1- σ)^2/M_2(ν) and M_2(η). If the upper frame bound forthe probabilistic dual frame ν is 0<D<∞, then‖f ‖^4≤∫_ℝ^d⟨ L^-1f, y ⟩^2 dν(y) ∫_ℝ^d⟨ f, z ⟩^2 dη(z)≤ D ‖ L^-1f ‖^2∫_ℝ^d⟨ f, z ⟩^2 dη(z)≤D/(1- σ)^2 ‖ f ‖^2 ∫_ℝ^d⟨ f, z ⟩^2 dη(z). In this case, the frame bounds for ηare (1- σ)^2/D andM_2(η).If the probabilistic dual frame of μ is given by the canonical probabilistic dual frame S_μ^-1_#μ, we will have the following corollary. Let μ be a probabilistic frame for ℝ^d with bounds 0<A ≤ B < ∞,and η∈𝒫_2(ℝ^d). Ifσ̂:=∫_ℝ^d ×ℝ^d‖S_μ^-1 x‖ ‖ x-z ‖d μ(x) d η(z)<1,or ∫_ℝ^d ×ℝ^d‖ x‖ ‖ x-z ‖ d μ(x) d η(z)<A, then η is a probabilistic frame for ℝ^d with bounds A(1- σ̂)^2 and M_2(η).In the previous theorem, let S_μ^-1_#μ be the canonical probabilistic dual frame of μ with respect to γ_12:= (Id, S_μ^-1)_#μ∈Γ(μ, S_μ^-1_#μ). Let γ_23 be the product measure γ_23:= S_μ^-1_#μ⊗η∈Γ(S_μ^-1_#μ, η). Then by the disintegration theorem and gluing lemma,the transport coupling with marginalsγ_12 andγ_23 is given by π:= γ_12⊗η∈𝒫(ℝ^d ×ℝ^d ×ℝ^d). Thusσ̂:= ∫_ℝ^d ×ℝ^d‖ x-z ‖‖S_μ^-1 x‖ d μ(x) d η(z) = ∫_ℝ^d ×ℝ^d ×ℝ^d‖ x-z ‖‖ y ‖ d π(x,y,z).Since σ̂<1 andthe upper bound of S_μ^-1_#μ is 1/A, then by <ref>, η is a probabilistic frame for ℝ^d with bounds A(1- σ̂)^2 and M_2(η). Since ‖S_μ^-1 x‖≤‖S_μ^-1‖_2 ‖x ‖≤1/A‖x ‖ where ‖S_μ^-1‖_2 is the 2-matrix norm of S_μ^-1 and ‖S_μ^-1‖_2 ≤1/A , then ∫_ℝ^d ×ℝ^d‖ x-z ‖‖ x‖ d μ(x) d η(z)<A implies σ̂:=∫_ℝ^d ×ℝ^d‖ x-z ‖‖S_μ^-1 x‖ d μ(x) d η(z)<1.Indeed, the condition in <ref> could be generalized to any coupling γ∈Γ(μ, η) with marginal μ and η. Let μ be a probabilistic frame for ℝ^d with bounds 0<A ≤ B < ∞ and η∈𝒫_2(ℝ^d). Let γ∈Γ(μ, η) be any coupling with marginal μ and η. 
Suppose ϵ:= ∫_ℝ^d ×ℝ^d‖ x‖ ‖ x-z ‖d γ(x,z)<A,then η is a probabilistic frame for ℝ^d with bounds (A- ϵ)^2/B and M_2(η).Furthermore, if χ := ∫_ℝ^d ×ℝ^d‖ S_μ^-1 x‖ ‖ x-z ‖d γ(x,z)<1,then η is a probabilistic frame for ℝ^d with bounds A^2 (1-χ)^2/B and M_2(η). Since η∈𝒫_2(ℝ^d), then η is Bessel with boundM(η):=∫_ℝ^d‖ z ‖^2 d η(z) < ∞. Next let us show the lower frame bound.Since S_μ^-1_#μ is the canonical probabilistic dual frame of μ with respect to (Id, S_μ^-1)_#μ∈Γ(μ, S_μ^-1_#μ), then f = ∫_ℝ^d⟨ f, S_μ^-1x ⟩ x dμ(x)=∫_ℝ^d⟨ S_μ^-1 f, x ⟩ x dμ(x) ,for any f ∈ℝ^d.Define a linear operator L: ℝ^d →ℝ^d byL(f) = ∫_ℝ^d ×ℝ^d⟨ S_μ^-1f, x ⟩ zd γ(x,z),for any f ∈ℝ^d.Therefore, ‖ f -L(f) ‖ =‖∫_ℝ^d ×ℝ^d⟨ S_μ^-1 f, x ⟩ x dμ(x) - ∫_ℝ^d ×ℝ^d⟨ S_μ^-1f, x ⟩ zd γ(x,z) ‖= ‖∫_ℝ^d ×ℝ^d⟨ S_μ^-1f, x ⟩ (x-z)d γ(x, z) ‖≤∫_ℝ^d×ℝ^d‖ x ‖‖ x- z‖ d γ(x, z) ‖ S_μ^-1 f‖≤ϵ ‖ S_μ^-1‖_2‖ f‖≤ϵ/A‖ f‖.Thus L: ℝ^d →ℝ^d is invertible and ‖ L^-1‖≤1/1-ϵ/A .Then for any f ∈ℝ^d, f = LL^-1(f) =∫_ℝ^d ×ℝ^d⟨ S_μ^-1L^-1 f, x ⟩ zd γ(x,z).Therefore, ‖f ‖^4=⟨ f, f ⟩^2 = |∫_ℝ^d ×ℝ^d⟨ S_μ^-1L^-1 f, x ⟩⟨f, z ⟩ d γ(x,z) |^2 ≤∫_ℝ^d ×ℝ^d⟨ S_μ^-1 L^-1f, x ⟩^2 dγ(x,z) ∫_ℝ^d ×ℝ^d⟨ f, y ⟩^2 dγ(x,z) = ∫_ℝ^d⟨ S_μ^-1 L^-1f, x ⟩^2 dμ(x) ∫_ℝ^d⟨ f, z ⟩^2 dη(z)≤ B ‖S_μ^-1‖_2^2 ‖ L^-1‖^2 ‖ f ‖^2∫_ℝ^d⟨ f, z ⟩^2 dη(z)≤B/A^2 (1-ϵ/A)^2‖ f ‖^2 ∫_ℝ^d⟨ f, z ⟩^2 dη(z),where the first inequality is due to Cauchy-Schwarz inequality. Thus for any f ∈ℝ^d, A^2 (1-ϵ/A)^2/B‖f ‖^2≤ ∫_ℝ^d⟨ f, z ⟩^2 dη(z) ≤‖f ‖^2 ∫_ℝ^d‖ z ‖^2 d η(z). Therefore, η is a probabilistic frame for ℝ^d with bounds (A- ϵ)^2/B and M_2(η).Furthermore, if χ := ∫_ℝ^d ×ℝ^d‖ x-z ‖‖ S_μ^-1 x‖ d γ(x,z)<1,then‖ f -L(f) ‖ = ‖∫_ℝ^d ×ℝ^d⟨ f, S_μ^-1 x ⟩ (x-z)d γ(x, z) ‖≤χ ‖ f‖.Therefore, ‖ I-L ‖≤χ <1 impliesL is invertible and ‖ L^-1‖≤1/1-χ. Similarly, we conclude that η is a probabilistic frame for ℝ^d with bounds A^2 (1-χ)^2/B and M_2(η). (A- ϵ)^2/Bis a smaller lower frame bounds than A^2(1-χ)^2/B, since (A- ϵ)^2/B= A^2/B (1- ∫_ℝ^d ×ℝ^d‖ x-z ‖‖ x ‖/A d γ(x,z))^2 ≤A^2/B (1- ∫_ℝ^d ×ℝ^d‖ x-z ‖‖ S_μ^-1 x ‖ d γ(x,z))^2 = A^2(1-χ)^2/B.The key step in the proof of <ref> is to use the canonical probabilistic dual frame to give a constructive formula for f and show the invertibility of linear operator L. Another way to construct f is to use the canonical Parseval probabilistic frame S_μ^-1/2_#μ, i.e.,for any f ∈ℝ^d, f=∫_ℝ^d⟨f, S_μ^-1/2 x ⟩ S_μ^-1/2 x dμ(x)= ∫_ℝ^d⟨ S_μ^-1/2 f, x ⟩ S_μ^-1/2 x dμ(x).According to this reconstruction formula, we have the last proposition of this paper.Let μ be a probabilistic frame for ℝ^d with bounds 0<A ≤ B < ∞ and η∈𝒫_2(ℝ^d). Let γ∈Γ(μ, η) be any coupling with marginal μ and η. Suppose τ:= ∫_ℝ^d×ℝ^d‖ x ‖‖S_μ^-1/2 x- z‖ d γ(x, z) < √(A),then η is a probabilistic frame for ℝ^dwith bounds (√(A)- τ)^2/B and M_2(η).Since η∈𝒫_2(ℝ^d), then η is Bessel with boundM_2(η):=∫_ℝ^d‖ z ‖^2 d η(z) < ∞. 
Next let us show the lower frame bound.Since S_μ^-1/2_#μ is the canonical Parseval probabilistic frame of μ, then f=∫_ℝ^d⟨ S_μ^-1/2 f, x ⟩ S_μ^-1/2 x dμ(x),for any f ∈ℝ^d.Define a linear operator L: ℝ^d →ℝ^d byL(f) = ∫_ℝ^d ×ℝ^d⟨ S_μ^-1/2f, x ⟩ zd γ(x,z),for any f ∈ℝ^d.Therefore, ‖ f -L(f) ‖ =‖∫_ℝ^d⟨ S_μ^-1/2 f, x ⟩ S_μ^-1/2 xdμ(x) - ∫_ℝ^d ×ℝ^d⟨ S_μ^-1/2f, x ⟩ zd γ(x,z) ‖= ‖∫_ℝ^d ×ℝ^d⟨ S_μ^-1/2f, x ⟩ (S_μ^-1/2x-z)d γ(x, z) ‖≤∫_ℝ^d×ℝ^d‖ x ‖‖S_μ^-1/2 x- z‖ d γ(x, z) ‖ S_μ^-1/2 f‖≤τ ‖ S_μ^-1/2‖_2‖ f‖≤τ/√(A)‖ f‖.Thus L: ℝ^d →ℝ^d is invertible and ‖ L^-1‖≤1/1-τ/√(A) .Then for any f ∈ℝ^d, f = LL^-1(f) =∫_ℝ^d ×ℝ^d⟨ S_μ^-1/2L^-1 f, x ⟩ zd γ(x,z).Therefore, ‖f ‖^4=⟨ f, f ⟩^2 = |∫_ℝ^d ×ℝ^d⟨ S_μ^-1/2L^-1 f, x ⟩⟨f, z ⟩ d γ(x,z)|^2 ≤∫_ℝ^d ×ℝ^d⟨ S_μ^-1/2 L^-1f, x ⟩^2 dγ(x,z) ∫_ℝ^d ×ℝ^d⟨ f, z ⟩^2 dγ(x,z) = ∫_ℝ^d⟨ S_μ^-1/2 L^-1f, x ⟩^2 dμ(x) ∫_ℝ^d⟨ f, z ⟩^2 dη(z)≤ B‖S_μ^-1/2‖_2^2‖ L^-1‖^2 ‖ f ‖^2∫_ℝ^d⟨ f, z ⟩^2 dη(z)≤B/A (1-τ/√(A))^2‖ f ‖^2 ∫_ℝ^d⟨ f, z ⟩^2 dη(z),where the first inequality is due to Cauchy-Schwarz inequality, the second inequality is because μ is a probabilistic frame with upper bound B, and the last inequality comes from ‖S_μ^-1/2‖_2 ≤1/√(A). Thus for any f ∈ℝ^d, A (1-τ/√(A))^2/B‖f ‖^2≤ ∫_ℝ^d⟨ f, z ⟩^2 dη(z) ≤‖f ‖^2 ∫_ℝ^d‖ z ‖^2 d η(z). Therefore, η is a probabilistic frame for ℝ^d with bounds (√(A)- τ)^2/B and M_2(η). § ACKNOWLEDGEMENTThis paper is a gift to my family and friends. I would like to thank my discussion with Dr.Martin Schmoll, Dr.Mishko Mitkovski, Dr.Cody Stockdale, Trevor Camper, and Deborpita Biswas. Lemma 3.2 is named Sweetie's Lemma since I proved it while talking to my sweetheart on the phone at midnight.unsrt | http://arxiv.org/abs/2310.17830v1 | {
"authors": [
"Dongwei Chen"
],
"categories": [
"math.FA"
],
"primary_category": "math.FA",
"published": "20231027005414",
"title": "Paley-Wiener Theorem for Probabilistic Frames"
} |
A diamond anvil microassembly for Joule heating and electrical measurements up to 150 GPa and 4000 K Michael J. Walter January 14, 2024 ==================================================================================================== We give improved algorithms for maintaining edge-orientations of a fully-dynamic graph, such that the maximum out-degree is bounded. On one hand, we show how to orient the edges such that maximum out-degree is proportional to the arboricity α of the graph, in, either, an amortised update time of (log^2 n logα), or a worst-case update time of (log^3 n logα).On the other hand,motivated by applications including dynamic maximal matching,we obtain a different trade-off. Namely, the improved update time of either (log n logα), amortised, or (log ^2 n logα), worst-case, for the problem of maintaining an edge-orientation with at most (α + log n) out-edges per vertex. Finally, all of our algorithms naturally limit the recourse to be polylogarithmic in n and α.Our algorithms adapt to the current arboricity of the graph, and yield improvements over previous work: Firstly, we obtain deterministic algorithms for maintaining a (1+ε) approximation of the maximum subgraph density, ρ, of the dynamic graph. Our algorithms have update times of(ε^-6log^3 n logρ) worst-case, and (ε^-4log^2 n logρ) amortised, respectively. We may output a subgraph H of the input graph where its density is a (1+ε) approximation of the maximum subgraph density in time linear in the size of the subgraph.These algorithms have improved update time compared to the (ε^-6log ^4 n) algorithm by Sawlani and Wang from STOC 2020. Secondly, we obtain an (ε^-6log^3 n logα)worst-case update time algorithm for maintaining a (1 + ε)OPT + 2 approximation of the optimal out-orientation of a graph with adaptive arboricity α, improving the (ε^-6α^2 log^3 n) algorithm by Christiansen and Rotenberg from ICALP 2022.This yields the first worst-case polylogarithmic dynamic algorithm for decomposing into (α) forests.Thirdly, we obtain arboricity-adaptive fully-dynamic deterministic algorithms for a variety of problems including maximal matching, Δ+1 colouring, and matrix vector multiplication. All update times are worst-case (α+log^2n logα), where α is the current arboricity of the graph.For the maximal matching problem, the state-of-the-art deterministic algorithms by Kopelowitz, Krauthgamer, Porat, and Solomon from ICALP 2014 runs in time (α^2 + log^2 n), and by Neiman and Solomon from STOC 2013 runs in time (√(m)). We give improved running times whenever the arboricity α∈ω( log n√(loglog n)). Acknowledgements. This research was supported by Independent Research Fund Denmark grant 2020-2023 (9131-00044B) “Dynamic Network Analysis” and the VILLUM Foundation grant (VIL37507) “Efficient Recomputations for Changeful Problems”.This project has additionally received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 899987.Chris Schwiegelshohn is partiallysupported by an Independent Research Fund Denmark (DFF) Sapere Aude Research Leader grant No 1051-00106B. Chandra Chekuri is supported by NSF grant CCF-1910149.Kent Quanrud is supported in part by NSF grant CCF-2129816. 
§ INTRODUCTION In dynamic graphs, one wishes to update a data structure over a graph G(V,E) (or an answer to a specified graph problem) as the graph undergoes local updates such as edge insertions and deletions.One of the fundamental problems is to maintain an orientation of the edges such that the maximum out-degree over all vertices is minimised. While the problem is interesting in its own right, bounded out-degree orientations have a number of applications. First, the problem is closely related to the task of finding the densest subgraph; indeed if the edges can be fractionally oriented, the optimal maximal fractional out-degree is equal to the density ρ := |E∩ (S× S)|/|S| of the densest subgraph S⊆ V. Secondly, bounded out-degree orientations appear frequently as subroutines for other problems. In particular, there exist a large body of work parameterising the update time of dynamic algorithms for many fundamental problems such as shortest paths <cit.>, vertex cover <cit.>, graph colouring <cit.>, independent set <cit.>, and, most prominently, maximum matching <cit.> in terms of the arboricity α:=max_S⊆ V,|S|≥2[]|E∩ (S× S)|/|S|-1.In light of their widespread applicability, maintaining an edge orientation minimising the maximum outdegree is extremely well motivated. In particular, we are interested in algorithms with worst-case deterministic update times, as these can be immediately used as black-box subroutines. In a recent breakthrough result, <cit.> showed that it is possible to maintain an estimate for the smallest maximum outdegree in (n) worst case deterministic time by maintaining an estimate for the density of the densest subgraph. Nevertheless, all known results for maintaining an orientation require at least update time Ω(ρ) = Ω(α) worst case update time, regardless of whether the algorithm is randomised or not <cit.>. For dense graphs, this bound may be arbitrarily close to n. Thus, it raises the following question: Is it possible to maintain an (approximate) minimum out-degree orientation in sublinear deterministic worst case update time?§.§ Our Contribution In this paper, we answer the aforementioned question in the affirmative. Specifically, we provide a framework for maintaining approximate out-orientations with various trade-offs between the quality of the out-degree orientation and update time.For the problem of maintaining an out-orientation we obtain: * An orientation with maximum out-degree (α) in update time (log^3 n logα).* An orientation with maximum out-degree (1+ε)α +2 in update time (ε^-6log^3 n logα).* An orientation with maximum out-degree (α +log n) in update time (log ^2 n logα). The above running times are deterministic and worst-case.Contrary to the previous state-of-the-art result by Sawlani and Wang <cit.>, the recourse of our algorithm, i.e. the number of re-orientations of edges, is polylogarithmic in n (specifically, a log n factor lower than the running time). When allowing amortisation, we get even better bounds:* An orientation with maximum out-degree (α) in amortised update time (log^2 n logα).* An orientation with maximum out-degree (α +log n) in amortised update time (log n logα). Table <ref> gives an overview of our results, and their implications when applied to a selection of algorithmic problems. 
The latter we briefly discuss in the following.Densest SubgraphUsing the duality between out-degree orientations and maximum subgraph density, we obtain a (1+ε) approximate estimate for maximum subgraph density ρ in worst-case update time of(ε^-6log^3 n logρ).Additionally, we may output a subgraph H with a density greater than ρ / (1 + ) in time linear in the size of H (Lemma <ref>).This recovers (and moderately improves) the recent worst-case algorithm by <cit.> that has an update time of (ε^-6log^4 n). When allowing amortised analysis, we improve the running time to(ε^-4log^2 nlogρ) amortised. Arboricity DecompositionAn arboricity decomposition partitions the edge set into a minimum number of forests. The best dynamic algorithms for maintaining an (α)arboricity decomposition has an amortised deterministic update time of (log^2 n) due to <cit.> and an (√(m)log n) worst case deterministic update time due to <cit.>.Distinguishing between arboricities 1 and 2 requires Ω(log n) time <cit.>. We substantially improve the worst case update time to (log^3 n logα).Dynamic Matrix Vector MultiplicationIn the Dynamic Matrix Vector Multiplication problem, we are given an n× n matrix A and an n-vector x. Our goal is to quickly maintain y=Ax in the sense that we can quickly query every entry y_i = (Ax)_i, subject to additive updates to the entries of x and A.Interpreting A as the adjacency matrix of a graph with arboricity α, <cit.> presented an algorithm supporting updates to A in time (α^2+log^2 n) and updates to x in time (α + log n). We may update A in time (α + log ^2 n logα), improving when α∈ω(log n√(loglog n)). Maximal MatchingA matching is a set of vertex-disjoint edges. A matching M is maximal if no edge of the graph can be added to it without violating the matching property.More so than perhaps any other problem, there exists a large gap between the performance of the state of the art deterministic algorithms vs the state of the art randomised algorithms. Using randomisation, one can achieve a (1) amortised <cit.> and a (n) worst case update time <cit.>. Deterministic algorithms so far have only achieved a (√(m)) update time for arbitrary graphs <cit.>, or (α^2 + log^2 n) update time where (α) is the current arboricity of the graph <cit.>.Because our result explicitly maintains an (approximately) optimal orientation, we improve on known deterministic algorithms whenever α∈ω(log n ·√(loglog n)) by achieving an update time of (α + log^2 nlogα).Δ+1 ColouringA fundamental question in many models of computation is how to efficiently compute a Δ+1 colouring where Δ is the maximum degree of the graph.We present a deterministic algorithm that maintains a Δ+1 colouring in (α+log^2 nlogα) worst case update time. To the best of our knowledge, this is the first such algorithm that beats the trivial (Δ) update time for uniformly sparse graphs. All other results <cit.> (discussed in more detail in the appendix) require randomisation, amortisation, or do not yield a Δ+1 colouring.Related work on dynamic orientationsDynamic out-orientations have been widely studied <cit.> since they were introduced by Brodal and Fagerberg <cit.>, for maintaining an (α_) out-orientation[Here α_max denotes the maximum arboricity seen over the whole sequence of operations.] in (α_max+log n) amortised time.Brodal and Berglin <cit.> improve the time guarantee to worst-case (α_max+log n)time, albeit maintaining an (α_max+log n) out-orientation. 
The best adaptive algorithms, adapting to a changing arboricity, are by Henzinger, Neumann, and Wiese <cit.>, achieving an out-degree of (α) and an amortised update time of (log^2 n), and by Kopelowitz, Krauthgamer, Porat, and Solomon <cit.>, maintaining an (α+log n) out-orientation with a worst-case update time of (α^2 + log^2 n). Christiansen and Rotenberg <cit.> lowered the maximum out-degree to (1+ε)α+2, incurring a worse update time of (ε^-6 α^2 log^3 n).
§ NOTATION AND OVERVIEW OF TECHNIQUES
Let G = (V, E) be a graph with n vertices and m edges. For any subgraph H of G, we denote by V(H) and E(H) the corresponding vertex and edge set. The density of a subgraph H is ρ(H) := |E(H)|/|V(H)|. The maximum subgraph density of G is then the maximum over all H of ρ(H). A closely related measure of uniform sparsity is the arboricity of a graph, defined as:
α := max_{H ⊆ G, |V(H)| ≥ 2} ⌈ |E(H)| / (|V(H)| - 1) ⌉.
A fractional orientation G of a graph G assigns to every edge (u, v) weights d(u → v) and d(v → u) such that d(u → v) + d(v → u) = 1. The out-degree of a vertex u is subsequently defined as d^+(u) = ∑_{v ∈ V} d(u → v). The maximum out-degree of G is Δ(G) := max_{v ∈ V} d^+(v). Picard & Queyranne <cit.> show that ⌈ρ⌉ = min_G Δ(G), and so it follows that ρ ≤ Δ(G). An orientation is a fractional orientation where d(u → v) is either 0 or 1. For brevity, we say that an orientation includes uv when d(u → v) = 1. For any vertex u ∈ V, we subsequently denote by N^+(u) (resp. N^-(u)) all vertices w with d(u → w) = 1 (resp. d(w → u) = 1). In an orientation, d^+(u) is the number of edges directed from u (the out-degree). Whenever G is not simple, d^+(u) can be larger than |N^+(u)|. For any integer b ≥ 1, we denote by G^b the graph G where every edge is duplicated b times. Throughout this paper we maintain, for a suitable choice of b, an orientation over G^b. Note that any orientation of G^b induces a fractional orientation on G. We may convert any such fractional orientation of G to an orientation of G by `rounding' every edge (i.e., d(u → v) = 1 if d(u → v) > d(v → u), breaking ties arbitrarily). Observe that if the maximum out-degree of an orientation of G^b is some value Δ, then the maximum out-degree of the rounded orientation of G is at most Δ/⌈ b/2 ⌉ ≤ 2Δ/b. An important theoretical insight for this work is the following pair of dual linear programs, which respectively maximise the subgraph density and minimise the largest fractional out-degree of an edge orientation of G:

Densest Subgraph (DS):  maximise ∑_{uv ∈ E} y_{u,v}  subject to  x_u, x_v ≥ y_{u,v} for all uv ∈ E;  ∑_{v ∈ V} x_v ≤ 1;  x, y ≥ 0.

Fractional Orientation (FO):  minimise ρ  subject to  d(u → v) + d(v → u) = 1 for all uv ∈ E;  ρ ≥ d^+(u) = ∑_{v ∈ V} d(u → v) for all u ∈ V;  ρ, d(u → v), d(v → u) ≥ 0.

Duality and previous work. The duality between these programs allows for approximating the maximum subgraph density by computing a fractional orientation that aims to minimise ρ. Thus, in an algorithmic sense, we focus on maintaining a fractional orientation of G. This is then achieved by maintaining an integral orientation in a graph with an appropriate number of edge duplicates. These integral orientations are typically maintained using the following simple, but efficient idea: if one takes a directed path from a high-out-degree vertex to a low-out-degree vertex, then reorienting every edge along this path lowers the out-degree of the high-out-degree vertex while only increasing the out-degree of some vertex of low out-degree. To make this idea constructive, one needs a way to efficiently locate a suitable directed path or chain to reorient.
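To make this primitive concrete, the following self-contained Python sketch (our illustration; the class and helper names are not from this paper) maintains the simplest such local rule — every arc u → v satisfies d^+(u) ≤ d^+(v) + 1, the condition of Kopelowitz et al. discussed next — by walking a forward chain of flips after each insertion and a backward chain after each deletion. It uses plain dictionaries and linear scans in place of the bucketed structures developed in the later sections, so it illustrates the reorientation logic only and makes no attempt at the stated update-time bounds.

from collections import defaultdict

class SimpleOutOrientation:
    """Toy maintenance of an out-orientation of a simple dynamic graph under the
    local rule: every arc u->v satisfies d+(u) <= d+(v) + 1.  Violations are
    repaired by reorienting a chain of arcs (the path-reversal idea above)."""

    def __init__(self):
        self.out = defaultdict(set)   # out[u] = vertices v with an arc u->v
        self.inn = defaultdict(set)   # inn[u] = vertices v with an arc v->u
        self.deg = defaultdict(int)   # deg[u] = d+(u)

    def _flip(self, u, v):            # replace the arc u->v by v->u
        self.out[u].discard(v); self.inn[v].discard(u); self.deg[u] -= 1
        self.out[v].add(u);     self.inn[u].add(v);     self.deg[v] += 1

    def insert(self, u, v):           # assumes edge (u, v) is not yet present
        if self.deg[u] > self.deg[v]:
            u, v = v, u               # orient away from the smaller out-degree
        self.out[u].add(v); self.inn[v].add(u); self.deg[u] += 1
        while True:                   # forward chain towards smaller out-degrees
            x = min(self.out[u], key=lambda w: self.deg[w])
            if self.deg[u] <= self.deg[x] + 1:
                return
            self._flip(u, x)          # u is repaired, the surplus moves to x
            u = x

    def delete(self, u, v):           # assumes edge (u, v) is currently present
        if v not in self.out[u]:
            u, v = v, u               # the arc is currently oriented v->u
        self.out[u].discard(v); self.inn[v].discard(u); self.deg[u] -= 1
        while self.inn[u]:            # backward chain towards larger out-degrees
            x = max(self.inn[u], key=lambda w: self.deg[w])
            if self.deg[x] <= self.deg[u] + 1:
                return
            self._flip(x, u)          # u is repaired, the deficit moves to x
            u = x

    def check(self):                  # the local rule holds for every arc
        assert all(self.deg[u] <= self.deg[v] + 1
                   for u in self.out for v in self.out[u])

if __name__ == "__main__":
    import random
    random.seed(1)
    G, edges = SimpleOutOrientation(), set()
    for _ in range(3000):             # random insertions and deletions
        a, b = random.sample(range(60), 2)
        e = (min(a, b), max(a, b))
        if e in edges:
            edges.discard(e); G.delete(*e)
        else:
            edges.add(e); G.insert(*e)
        G.check()
    print("max out-degree:", max(G.deg.values()))

The algorithms of the later sections refine exactly this template: the additive slack +1 is replaced by a multiplicative slack (1 + λ) together with a threshold ⌈b/4⌉ on the duplicated graph G^b, and the linear scans are replaced by bucketed lists, round-robin checks, or lazily reset thresholds in order to obtain the claimed worst-case and amortised bounds.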
Kopelowitz et al.<cit.> showed how to locate such chains by maintaining a local condition, namely that d(u → v) > 0 implies that d^+(u) ≤ d^+(v) + 1.When the maximum out-degree is small, this local condition can be used to identify short chains. However, when the out-degree becomes large (in dense graphs) this procedure becomes slow. In particular, one can never hope to get a better bound on the chain length than Ω(ρ). This in turn means that their update times are Ω(ρ) = Ω(n). In fact, all of their algorithms have update times that depend on ρ^2. One ρ stems from the chain lengths and the other from the fact that changes in degrees need to be reported to all out-neighbours in order to efficiently locate the chains. Sawlani and Wang <cit.> removed the latter ρ-factor by informing neighbours via a round robin scheme.They then removed the former ρ-factor by instead requiring that d(u → v) > 0 implies that d^+(u) ≤ d^+(v) + f(ρ̃) for some function f and some very precise estimate ρ̃ of the current maximum subgraph density.By making the local condition depend on ρ̃, they were able to get chains of much shorter length, namely of length (^-2log n).However, for this local condition to yield a small out-degree, one requires that ρ̃ very precisely estimates the current density.To enforce this, Sawlani and Wang <cit.> maintain (log n/) different copies of the graph – each with a different estimate ρ̃. They maintain a pointer to the copy which currently estimates ρ the best. While this allows Sawlani and Wang <cit.> to estimate the current maximum subgraph density very well, their approach has several drawbacks.First of all, their algorithm only maintains an implicit orientation of the graph in the sense that the algorithm often switches between different copies of the graph each endowed with possibly very different orientations.While this does not matter in the context of density estimation, it matters in the context of using the out-orientation as an algorithmic tool.Firstly, any application run on such an orientation only maintains an implicit representation of the desired outcome, since one continually changes between different copies as updates arrive.Secondly, one has to update the applications across all copies meaning that the guarantee on the out-degree is no better than (n) in the top-most copy – even if the maximum subgraph density is low.The use of copies alsomakes the algorithm significantly more complicated.Our key idea. We show that maintaining a multiplicative local condition, namely that d^+(u) ≤ (1 + a) · d^+(v) for some chosen value a < 1, allows one to get both short chains of length (^-2log n logρ) whilst maintaining a very precise approximation of the maximum subgraph density.Furthermore, this can be achieved completely explicitly with low recourse and using only one copy of the graph.This allows us to apply our result to problems that benefit from having an explicit low out-degree orientation such as dynamic maximal matchings and (Δ + 1) colouring.Since this multiplicative local condition removes the need for scheduling updates to different copies, the algorithms also become simpler. 
However, the analysis become significantly more delicate.Sawlani and Wang <cit.> work with an additive local condition, where the added quantity depends on a very precise estimate ρ̃ of the current density.This allows them, in many places, to essentially reduce the problem-complexity to the case where: a) one has to basically only consider vertices with very large out-degree and b) one can essentially assume that these out-degrees are unchanged, since one is working with a quantity depending on ρ̃.Working with our local condition, however, allows for neither simplification.The local condition is equally ”tight” for every vertex, and it is very sensitive to changes in degrees at both endpoints of an edge.This means that one has to be very careful, when analysing the algorithms – especially when the vertices have low degree.Multiplicative local conditions We consider two local conditions that we want to maintain for an integral orientation. The first has both an additive and multiplicative term. The second has only a multiplicative term.In the first case, we require that d(u → v) > 0 implies that d^+(u) ≤ (1 + a) · d^+(v) + c.The benefit of the additive term is that for any new edge (u, v), we may always orient the edge towards either u or v without violating this local condition between u and v.The downside of this approach is that it leads to less accurate estimations of α and ρ.In the second case, we require that d(u → v) > 0 implies that d^+(u) ≤ (1 + a) · d^+(v). If both d^+(u) and d^+(v) are small, it may be that d^+(u) + 1 > (1 + a) · d^+(v) and d^+(v) + 1 > (1 + a) · d^+(u). Thus, when adding an edge (u, v) we cannot orient the edge without violating our local condition. This significantly complicates the analysis. Indeed, this complication means that we can only guarantee the multiplicative condition holds between updates to G, and is not maintained as an invariant as we perform updates to G^b. Hence, to get a simple recursive algorithm to work, we have to instead work with a threshholded local condition, where we allow edges between vertices of small enough degree to get a direction in order to handle the above problem.We show that maintaining such a threshholded local condition is actually equivalent to maintaining the multiplicative condition between updates to G.However, working with this threshholded condition requires one to be careful. To illustrate this, we briefly sketch how the algorithms work: suppose first that every vertex has perfect information about the degrees of all other vertices. Then, it can immediately identify if incrementing/decrementing its degree causes the local condition to be violated. If so, it can then reorient a violated edge, thus restoring its degree.This solves the problem for this vertex, but might move the problem to some other vertex. The key property however is that this vertex has a (significantly) smaller degree in the incremental case, or (significantly) higher degree in the decremental case.Hence, this cascade cannot continue many times, thus yielding a short chain.Every vertex, however, does not necessarily have access to the degree of all of its neighbours.Thus the algorithm has to supply this information somehow.We do so in 3 different ways: by naively informing and checking all out-neighbours of degree changes (reminiscent of the approach of Kopelowitz et al. 
<cit.>), by updating and checking estimates lazily in a round robin fashion (similar to Sawlani and Wang <cit.>) and in an amortised fashion by only checking every time a degree has changed substantially.The two last schemes demand that we at all times work with degree estimates that are not precise.Doing so is quite straightforward with a purely additive local condition, due to the simplifications mentioned earlier, but it is significantly more involved in the multiplicative case: here the conditions are very sensitive to degree changes, and so to make the analysis work, we have to be very precise about at what time a certain condition on the degree holds. Particularly so, when the degrees are small.This is further complicated by the fact that we now work with a threshholded local condition and thusly have to ensure that our analysis can handle all paradigms of the condition.In Section <ref>, we analyse the effect of maintaining an integral orientation that satisfies our local condition (that for all (u, v),d(u → v) > 0 implies d^+(u) ≤ (1 + a) · d^+(v) + c).We present a general theorem showing the impact of our local condition, parametrized by a and c. Let Δ be the maximal out-degree in our graph.We immediately apply this theorem to show that for a^-1∈(log n): Δ∈(ρ + log n) (for c ∈ O(1)) or Δ∈ O(ρ) (for c = 0). In Section <ref>, we show that choosing c = 0 and a^-1∈ O(^-2log n) allows us to maintain a factional orientation of the graph where the maximal out-degree Δ≤ (1 + )ρ.By naively rounding the fractional out-degrees, this implies that we can maintain an integral orientation where the out-degree Δ≤ (2 + )α. §.§ Parameterisation of the AlgorithmWe now introduce several components of the algorithm and analysis that can be specified to obtain various trade-offs between quality of the out-orientation and update time. Our algorithms have the two main parameters: η and a positive integer b.We maintain a graph G^b and an orientation G^b where one of the following invariants holds:We maintain an orientation G^b where for every directed edge uv in G^b: d^+(u) ≤ (1 + η· b^-1) · d^+(v). We maintain an orientation G^b where for every directed edge uv in G^b: d^+(u) ≤ (1 + η· b^-1) · d^+(v) +2 .Throughout the paper, we denote θ = 0 if we are maintaining Invariant <ref> and θ = 1 otherwise. This way, we maintain Invariant θ by maintaining d^+(u) ≤ (1 + η· b^-1) · d^+(v) + 2θ. The tighter the inequalities are, the closer the maximum out-degree of the maintained out-orientation is to the maximum subgraph density. Hence, setting θ = 0 will give a better approximation than θ = 1. Note that regardless of the choice of parameters, not all graphs have an orientation that satisfies Invariant <ref>. E.g. any orientation of the graph consisting of a single edge has a directed edge uv with 1 = d^+(u) > *1+η· b^-1· d^+(v) = 0. For convenience we will therefore need the following slightly relaxed invariant, which we show in Section <ref> is satisfiable for θ=0(and therefore for all θ≥0) as long as 0<b/η≤b/2, i.e. when b≥ 2 is even and η≥ 2 or when b≥ 3 and η≥ 3 (or more precisely η≥2b/b-1). We maintain an orientation G^b where for every directed edge uv in G^b: d^+(u) ≤max**1 + η· b^-1· d^+(v)+2θ , *b/2. The point is that for each update to G this lets us do updates to G^b one edge at a time, all the while satisfying Invariant <ref>, and when we are done the resulting graph satisfies Invariant θ because of the following Lemma. 
Let G^b be a graph that can be obtained from a graph G by replacing each edge with b copies. For all θ≥ 0, any orientation G^b of G^b that satisfies Invariant <ref> also satisfies Invariant θ. Suppose G^b satisfies Invariant <ref> andlet uv be any edge in G^b. If d^+(v)<b/2 then (because each edge in G^b is duplicated b times) d^+(u)≥ b-d^+(v)>b/2. So by Invariant <ref> we have d^+(u)≤*1+η· b^-1· d^+(v)+2θ and Invariant θ is satisfied for uv.Otherwise d^+(v)≥b/2 and by Invariant <ref>, d^+(u)≤max**1+η· b^-1· d^+(v)+2θ, *b/2 = *1+η· b^-1· d^+(v)+2θ thus Invariant θ is satisfied for uv. § A STRUCTURAL THEOREMIn this section, we formally establish the relationship between maintaining Invariant θ for a graph G^b, and the corresponding estimate of the density and arboricity of the graph G. The following theorem is our result in its most general form: allowing for (1 + )-approximations of ρ and more.Skip ahead to Corollaries <ref>+<ref> for a comprehensible application of the variables.Let G be a graph and let G^b be G with each edge duplicated b times. Let ρ_b be the maximum subgraph density of G^b. LetG^b be any orientation of G^b which has the following invariant: for some c≥ 0, every directed edge uv satisfies d^+(u) ≤ (1 + η· b^-1) · d^+(v)+c.Then for any γ > 0there exists a value k_max≤log_1 + γ n for which:(1+η· b^-1)^-k_maxΔ(G^b) ≤ (1+γ)ρ_b +c(η^-1· b+1).Let G^b=(V, E^b). We define for non-negative integers i the sets: T_i := * v∈ Vd^+(v) ≥Δ(G^b) ·*1 + η· b^-1^-i - c∑_j = 1^i *1 + η· b^-1^-j Observe that for all non-negative integers 0≤ i < j, T_i ⊆ T_j. Moreover, observe that T_0 contains at least one element (the element of G^b with maximum out-degree), and each T_i at most n elements (since they can contain at most all vertices of G).Let k be the smallest integer such that |T_k+1| < (1 + γ) |T_k|. It follows that k is upper bounded by the value k_max = log_(1 + γ) n.In order to bound the maximum out-degree of G^b, we want to show that no edges can be oriented from T_k to a vertex not in T_k+1. To do so, we assume two such candidates u ∈ T_k and v ∉T_k+1, and show that uv violates: d^+(u) ≤ (1 + η· b^-1) · d^+(v)+c.Per assumption we have d^+(u)≥ (1 + η· b^-1)^-kΔ(G^b)- c∑_j = 1^k (1 + η· b^-1)^-j (Since u∈ T_k)and d^+(v)< (1 + η· b^-1)^-1 (1 + η· b^-1)^-kΔ(G^b) -c∑_j = 1^k+1 (1 + η· b^-1)^-j.(Since v∉T_k+1)It follows that(1 + η· b^-1) d^+(v) + c< (1 + η· b^-1)^-kΔ(G^b) - c∑_j = 0^k (1 + η· b^-1)^-j + c = (1 + η· b^-1)^-kΔ(G^b)- c∑_j = 1^k (1 + η· b^-1)^-j≤ d^+(u). This would violate the assumed invariant of G^b. Hence for any u ∈ T_k and any edge uv, we have v ∈ T_k+1 and thus:∑_u ∈ T_k d^+(u) ≤ |E^b[T_k+1]|. Finally, we can bound the density ρ_b as:ρ_b = max_∅⊂ S⊆ V|E^b[S]|/|S|≥|E^b[T_k+1]|/|T_k+1| ≥∑_u ∈ T_k d^+(u) /(1+γ) |T_k| ≥|T_k| ·**1 + η· b^-1^-kΔ[]G^b - c∑_j = 1^k *1 + η· b^-1^-j/(1+γ)|T_k| .We find *1+γρ_b +c/1 - 1/1 + η· b^-1≥*1 + η· b^-1^-k_maxΔ*G^b, which concludes the proof.The parameter γ is needed to get a (1+)-approximation later on, where we will require that γ = Θ().For now, one can just think of γ as being a constant. In fact in the following corollaries, we will choose γ so 1+γ = e.Denote by ρ the density of G. For any η and b such thatη b^-1∈(1/log n), we have that Invariant <ref> for the graph G^b implies: Δ(G^b)∈(b ρ) and Δ(G)∈(ρ). Set γ=e-1, let k_max≤log_(1+γ)n=log_e n be as in Theorem <ref>. 
By our choice of η and b there exists a constant s > 0 such that η b^-1 ≤ s/log_e n for all n ≥ 1, thus by Theorem <ref> (with c = 0) we now have
Δ(G^b) ≤ (1+η·b^-1)^{k_max}·(1+γ)ρ_b ≤ e^{η·b^-1·k_max}·(1+γ)ρ_b ≤ e^{s+1}·ρ_b ∈ (ρ_b).
Finally, we note that per definition of subgraph density, ρ_b = b·ρ. For all uv in G, there must be at least b/2 edges in G^b from u to v (else, G would include the edge vu instead). It immediately follows that the out-degree of u in G is at most d^+(u)·(b/2)^-1 ∈ (b^-1·ρ_b) = (ρ).
Denote by ρ the density of G. Let b = 1 and η = 1/log_e(n). Whenever Invariant <ref> holds for the graph G = G^b, it must be that: Δ(G) ∈ (ρ + log n).
Set γ = e-1, let k_max be as in Theorem <ref>. By our choice of η and b, η·b^-1 = 1/log_e n = 1/log_{(1+γ)} n. Thus by Theorem <ref> (with c = 2) we now have
Δ(G) = Δ(G^b) ≤ (1+η·b^-1)^{k_max}·((1+γ)ρ_b + c(η^-1·b+1)) ≤ e^{η·b^-1·k_max}·((1+γ)ρ_b + c(η^-1·b+1)) ≤ e·(e·ρ_b + c(η^-1·b+1)) ∈ (ρ + log n).
§ A SIMPLE ALGORITHM FOR MAINTAINING THE INVARIANTS
We first provide a simple worst-case (ρ·logρ·polylog(n)) algorithm (where ρ is the maximum subgraph density) to maintain Invariant θ in G^b (i.e. we maintain one chosen invariant). Our data structure is purposefully more complicated than necessary here, to illustrate its use in future sections. Subsequent sections slightly adjust the algorithms. Crucially, the bound on the recursive depth of our functions applies throughout the paper. Recall that G^b is the graph G with edges duplicated b times. For convenience, we set λ = η·b^-1/64 in the rest of the paper, and note that (1+λ)^5 ≤ 1+η b^-1 ≤ 2. We maintain Invariant θ using a data structure storing for all vertices u:
* The value d^+(u) of the current orientation G^b,
* The set N^+(u) in arbitrary order, and
* The set N^-(u) in a sorted doubly linked list of buckets B_j(u). Each bucket B_j(u) contains, as a doubly linked list in arbitrary order, all w ∈ N^-(u) where j = ⌈log_{(1+λ)} d^+(w)⌉. The vertex u has a pointer to the bucket B_i(u) with i = ⌈log_{(1+λ)} max{(1+λ)·d^+(u), ⌈b/4⌉}⌉.
We run Algorithms <ref>+<ref> on the graph G. These invoke Algorithms <ref>+<ref>, which in turn add directed edges to and remove them from G^b (Algorithms <ref>+<ref>). In our recursive algorithm calls, we may assume that for any edge insertion (u, v) in G^b, we call Insert(uv) whenever d^+(u) ≤ d^+(v). Recall that θ, η and b are parameters that are set beforehand:
[Algorithm listings (six two-column floats): Insert(edge (u, v) in G), Delete(edge (u, v) in G), Delete(uv), Remove(uv), and their two insertion counterparts; the pseudocode is omitted here.]
We count time in discrete steps. A new time step starts just before Algorithm <ref> calls Insert or Algorithm <ref> calls Delete. For a time t, we denote, for any variable ϕ in our code, by ϕ_t its value before the invoking insertions (or deletions) at time t. E.g., for a vertex w, d^+(w)_t is the out-degree before invoking insertions at time t, and d^+(w)_{t+1} is the out-degree just after.
§.§ Maintaining Invariant <ref>.
We show that by setting θ = 1 (and choosing η and b carefully) we maintain Invariant <ref>:
Let G be a dynamic graph and ρ be the density of G at time t. We can choose our variables θ = 1, b = 1 and η ∈ Θ(log n) to maintain an out-orientation G^b = G in O((ρ + log n)·log n·logρ) time per update in G such that Invariant <ref> holds for G. Moreover:
* ∀ u, the out-degree d^+(u)_{t+1} in G is at most (ρ + log n) (i.e. Δ(G) ∈ (ρ + log n)).
Invariant <ref> demands that ∀ uv we maintain d^+(u) ≤ (1 + η b^-1)·d^+(v) + 2.
Corollary <ref> implies that if after time t we maintain Invariant <ref>, then we obtain the desired upper bound on all d^+(u)_t+1. What remains is to show that our algorithms maintain Invariant <ref> in the desired runtime. We do this in three steps as we show: Correctness:Our algorithms maintain Invariant <ref> in G^b,Recursive depth: Algorithms <ref>+<ref> have a recursive depth of *λ^-1·logρ, and Time:Algorithms <ref>+<ref> spend (ρ + log n) time before entering the recursion.We prove these three properties for deletions only.Invoking Delete( x_0 v) may cause us to recursively invoke Delete(x_i +1x_i): flipping a backward chain in G^b from x_0. Only the final vertex x_f in this chain decreases its out-degree once we terminate.For insertions we flip a forward chain x_i x_i+1, which is handled symmetrically. Correctness.We show that we maintain Invariant <ref>. Suppose that we terminate at a vertex x_f.Then after our sequence of flips, the vertex x_f is the only vertex that changed its out-degree (i.e. only for x_f: d^+(x_f)_t + 1 = d^+(x_f)_t - 1).Because our algorithm terminated and b = 1, for x First( Max( Buckets(N^-(x_f)))), d^+(x)_t ≤max{ (1 + λ) (d^+(x_f)_t - 1) + θ, b/4}.For all w ∈ N^-(x_f): d^+(w)_t+1≤ (1 + λ) d^+(x)_t+1. It follows d^+(w)_t+1≤max{(1 + λ)^2 d^+(x_f)_t+1 + 2, b/2}. We may apply Lemma <ref> to conclude that,once terminated, we satisfy Invariant <ref>. Recursive depth. What remains is to upper bound the recursive depth of our algorithm, proving termination. Our code implies that for all i: d^+(x_i+1)_t > (1 + λ) ( d^+(x_i)_t - 1) + θ. Thus d^+(x_i+1)_t ≥d^+(x_i)_t + 1.Let x_s be the last vertex in the chain where d^+(s)_t ∈( log n). The fact that out-degrees are integer and strictly increasing along the backward chain, implies that there are (log n) vertices preceding s.If f = s, the recursive depth is (log n) = (λ^-1) per definition.Otherwise, we note that before this sequence of updates, we satisfied Invariant <ref> and thus (by Corollary <ref>) know that for all i: d^+(x_i)_t ∈(ρ + log n).If there exist vertices x_i with i > s, then ρ∈Ω(log n) and thus(ρ + log n) =(ρ).Now we consider all i > s.We know that d^+(x_i+1)_t > (1 + λ) ( d^+(x_i)_t - 1). Thus, (using d^+(x_i)_t ≥d^+(x_i-1)_t + 1) we get that: d^+(x_i+1)_t >(1 + λ) d^+(x_i-1)_t.It follows that there are at most log_ (1 + λ)(ρ) =( λ^-1logρ) vertices in the chain of flipped edges: which upper bounds our recursive depth.Time spent.Whenever we insert a vertex v ∈ N^-(u), it is either because we added the edge (u, v) to G (occurring once) or, because we flipped an edge uv. In the first case, we may afford spending (log_(1+λ)d^+(v)) = (λ^-1log(bρ + log n)) = (λ^-1log n) time searching through all buckets for the bucket containing v.In the latter case, for Insert we have max*(1+λ)d^+(u)_t,b/4<d^+(v)_t+1≤max*(1+λ)d^+(u)_t,b/4+1, and for Delete we have max*(1+λ)(d^+(u)_t-1),b/4<d^+(v)_t<(1+λ)max*(1+λ)(d^+(u)_t-1),b/4. Using the pointer from u to the bucket B_i(u) where i=*log_(1+λ)max*(1+λ)d^+(u),*b/4, we may insert v into the correct bucket in (1) time. For each call of Delete(x_i+1 x_i), we spend (1) time retrieving the vertex x before we recurse.For the vertex x_f at the end of the recursion, we consider all (ρ + log n) vertices w ∈ N^+(x_f). We update the bucket that x_f is in.Denote by r(x_f)_t+1 = log_(1 + λ) d^+(x_f)_t the rank of f (i.e., the index of each bucket contained x_f at time t).The rank of x_f changes by at most 1, hence we may update our data structure in (d^+(f)_t+1) time. 
For each call of Insert(x_i+1 x_i), we spend (d^+(x_i+1)_t) =(ρ + log n) time retrieving the vertex x_i+2 before we recurse. Updating the data structure again takes (1) time per updated element.It follows that the total time spent adding or removing an arc in G^b is ( (ρ + log n) ·λ^-1logρ) =((ρ + log n) log n logρ).Since b = 1, the theorem follows. §.§ Maintaining Invariant <ref>.We show that by setting θ = 0 (and choosing η and b carefully) we maintain Invariant <ref>: Let G be a dynamic graph and ρ be the density of G at time t. We can choose our variables θ = 0,η=3, and b ∈Θ(log n), b≥ 2 to maintain an out-orientation G^b in * b ·ρ·λ^-1·logρ = ( ρ·log^2 n logρ) time per update in G, maintaining Invariant <ref> for G^b with: * ∀ v, the out-degree d^+(v)_t+1 in G^b is at most ( b·ρ), and* ∀ u, the out-degree of u in G is at most (ρ). We show that at all times we maintain Invariant <ref> for θ=0. Corollary <ref>, and our choice of variables, implies the desired upper bound on the out-degree of each vertex.We again consider:Correctness:Our algorithms maintain Invariant <ref> in G^b,Recursive depth: Algorithms <ref>+<ref> have a recursive depth of *λ^-1·logρ, and Time:Algorithms <ref>+<ref> spend (ρ) time before entering the recursion. We show the proof for deletions.Again, the proof for insertions is symmetrical (flipping a forward chain).Invoking Delete( x_0 v) may cause us to recursively invoke Delete(x_i +1x_i): flipping a backward chain in G^b from x_0. Only the last vertex x_f in this chain decreases its out-degree once we terminate.Correctness.Suppose that we terminate at a vertex x_f.Then after our sequence of flips, only the vertex x_f changed its out-degree (i.e. only for x_f: d^+(x_f)_t+1 = d^+(x_f)_t - 1).Because our algorithm terminated, for x First( Max( Buckets(N^-(x_f)))) it must be that: d^+(x)_t ≤max{ (1 + λ) (d^+(x_f)_t - 1 ) + θ, b/4}.It follows that for all vertices w ∈ N^-(x_f): d^+(w)_t≤ (1 + λ) max{ (1 + λ) (d^+(x_f)_t - 1 ) + θ, b/4}.Substituting d^+(x_f)_t for d^+(x_f)_t+1 and using that by our choice of parameters, (1+λ)^2≤ 1+η b^-1≤ 2 now gives that for all w ∈ N^-(x_f): d^+(w)_t≤max{ (1 + η b^-1) d^+(x_f)_t+1+ 2θ, b/2}. By Lemma <ref>, this implies that we maintain Invariant <ref>.Recursive depth.What remains is to upper bound the recursive depth of our algorithm. Let x_s be the first vertex in the chain where d^+(x_s)_t ≥b/4+2.Note that per definition of our algorithm, d^+(x_1)_t≥b/4+1. Thus, for all i ≥ 1:d^+(x_i+1)_t≥ d^+(x_i)_t + 1 and s ≤ 2. We now make a case distinction.If f ∈(λ^-1) then per definition, the recursive depth is (λ^-1). Otherwise, for all i > 2 it must be that d^+(x_i+1)_t > (1 + λ) ( d^+(x_i) - 1) ≥(1 + λ) d^+(x_i-1).Before this sequence of updates, we satisfied Invariant <ref>. Thus, by Corollary <ref>, for all i, d^+(x_i)_t ∈(ρ). Time spent.The proof upper bounding the time spent is identical to that of Theorem <ref>. The one exception being, that the out-degree d^+(u) for all vertices u is at most ρ.Thus, the running time per update in G^b is (ρ·λ^-1logρ) =(ρlog n logρ).Each update in G triggers Θ(log n) updates in G^b and so the runtime follows. § IMPROVED WORST CASE ALGORITHMS We adapt the algorithm of Section <ref>, replacing the algorithms for inserting and deleting directed edges in G^b to update our running time.We storeu ∈ N^-(v) in buckets determined not by the actual out-degrees d^+(u) but rather by an approximation of what we call the out-rank r(u) = *log_(1+λ)d^+(u). 
For each vertex v, for all vertices u ∈ N^-(v), we define the perceived out-rank r_v(u) as some integer stored in v for u ∈ N^-(v) (which we show is at most 1 removed from r(u)). In this section, we maintain for all u:
* The exact value d^+(u) of the current orientation G^b,
* The set N^+(u) in a linked list, together with a pointer to some current `position' in the linked list.
* The set N^-(u) in a doubly linked list of buckets B_j(u) sorted by j from high to low. Each bucket B_j(u) contains, as a doubly linked list in arbitrary order, all w ∈ N^-(u) where r_u(w) = j. The vertex u has a pointer to the bucket B_i(u) with i = ⌈log_{(1+λ)} max{(1+λ)·d^+(u), ⌈b/4⌉}⌉.
Any update in G invokes Algorithms <ref>+<ref>. These algorithms now invoke Algorithm <ref> or <ref> (instead of <ref> or <ref>). These two in turn invoke the normal add and remove functions (Algorithms <ref>+<ref>). Whenever we add a vertex w to a set N^-(u), we set r_u(w) = r(w). And when we add a vertex v to a set N^+(u), we do so in the position immediately before the current position, so that it becomes the last one we visit when we round-robin over N^+(u).
[Algorithm listings (two-column floats): Insert(uv) and Delete(uv); the pseudocode is omitted here.]
Overview of techniques. Note that after incrementing (or decrementing) d^+(u), we flip an edge ux (or xu) whenever the following condition holds:
d^+(u) > max{(1 + λ)·d^+(x) + θ, ⌈b/4⌉} (for Insert),
d^+(x) > max{(1 + λ)·d^+(u) + θ, ⌈b/4⌉} (for Delete).
These checks are the same as in Section <ref> (and Section <ref> for deletions). As a result, the recursive depth of our algorithm is identical to that of Section <ref>. The big difference with Section <ref> is that during insertions we do not loop over all x ∈ N^+(u) each time (as that would be too expensive). Instead, we use a round-robin scheme, relying on the fact that if we recently checked the condition for edge ux without flipping it, then we need to add many more outgoing edges from u before it violates the actual Invariant <ref>. By checking ⌈2/λ⌉ edges each time in round-robin order we are guaranteed to revisit ux before that happens. The second difference with Section <ref> is that for each vertex u we cannot store a data structure on the in-neighbours of u that uses their actual out-degree. Instead, we bucket the vertices x ∈ N^-(u) using their out-degree at the time of adding the arc xu. The location of x in this data structure is thereby its perceived rank r_u(x). Whenever we insert or delete an arc in G^b, we get a recursive call to our insertion and deletion functions that flips a chain of edges. Only the final vertex x_f on this chain changes its actual out-degree. Hence, for this final vertex x_f, we perform round-robin over the ⌈2/λ⌉ next w ∈ N^+(x_f) to update the perceived rank of x_f in N^-(w). Again, we cannot afford to update all of them. Recall that we parametrised time according to Definition <ref>. We show:
Let r_v(u) get updated by an Insert or Delete at time s. Let the next update to r_v(u) occur during an Insert or Delete at time t. Then |d^+(u)_t - d^+(u)_s| ≤ (λ/2)·d^+(u)_s and |r(u)_t - r(u)_s| ≤ 1.
Only out-neighbours of u that exist at time s can be visited by the round-robin procedure before r_v(u) is updated again. Since we visit ⌈2/λ⌉ of them per Insert or Delete that changes d^+(u), we can do at most d^+(u)_s/⌈2/λ⌉ ≤ (λ/2)·d^+(u)_s Inserts or Deletes changing d^+(u) before time t. Thus, since 0 < λ < 1:
d^+(u)_t ≥ (1 - λ/2)·d^+(u)_s > (1+λ)^-1·d^+(u)_s, and hence r(u)_t ≥ r(u)_s - 1;
d^+(u)_t ≤ (1 + λ/2)·d^+(u)_s < (1+λ)·d^+(u)_s, and hence r(u)_t ≤ r(u)_s + 1.
For all edges uv, at all steps during Insert or Delete, |r_v(u) - r(u)| ≤ 1.
Follows trivially from Lemma <ref> by the fact that each time it gets updated the true value has changed by at most 1.We now apply an argument that we have applied in previous sections, introducing a bit more slack than previously: During a Delete(uv) at time t, let x First( Max(N^-(u))) and d^+(x)_t ≤max**1+λ(d^+(u)_t-1)+θ,*b/4.Then for all w ∈ N^-(u) it must be that:d^+(w)_t≤ (1+λ)^3·max**1+λ(d^+(u)_t-1)+θ, *b/4≤max**1+η b^-1(d^+(u)_t-1)+2θ, *b/2.The vertex x First( Max(N^-(u))) has the largest perceived rank of all vertices in N^-(u). Thus, the perceived rank r_u(w)_t is at most r_u(x)_t.By Lemma <ref>, we now get:r(x)_t ≥ r_u(x)_t-1 ≥ r_u(w)_t-1 ≥ r(w)_t-2d^+(x)_t≥ (1+λ)^r(x)_t≥ (1+λ)^r(w)_t-2≥ (1+λ)^-3d^+(w)_t. It follows that d^+(w)_t ≤ (1 + λ)^3 ·max**1+λ(d^+(u)_t-1)+θ,*b/4. By noting that (1 + λ)^5 ≤ (1 + η b^-1)≤ 2 we recover the lemma.If during an Insert at time s, the out-neighbour x∈ N^+(u)_s is verified to satisfy d^+(u)_s+1≤max*(1+ λ)d^+(x)_s + θ, b/4, then at any time t up to and including the next time that we check the constraint we have that:d^+(u)_t ≤ (1+λ)^4·max*(1+λ)d^+(x)_t+θ, *b/4≤max*(1+η b^-1)d^+(x)_t+2θ, *b/2.If there are no Deletes changing d^+(x) between times s and t, we have d^+(x)_s ≤ d^+(x)_t andd^+(u)_t≤ (1+λ)· d^+(u)_s (By Lemma <ref>)≤ (1+λ)· (d^+(u)_s + 1) ≤ (1+λ)·max**1+λd^+(x)_s + θ, *b/4 (By our assumption)≤ (1+λ)·max**1+λd^+(x)_t + θ, *b/4 (Since d^+(x)_s ≤ d^+(x)_t)≤max*(1+η b^-1)d^+(x)_t+2θ, *b/2.Suppose now that there was a Delete between times s and t that changed d^+(x). Denote by s' the time just after the last such delete finished.It must be that s<s'≤ t. Then d^+(x)_s'-1-1=d^+(x)_s'. Since after s', there was no deletion decreasing d^+(x) it must be that d^+(x)_s'≤ d^+(x)_t and by Lemma <ref>, for all w∈ N^-(x)_s':d^+(w)_s' ≤ (1+λ)^3·max**1+λd^+(x)_s'+θ, *b/4In particular, u∈ N^-(x)_s' andd^+(u)_t≤ (1+λ)· d^+(u)_s' (By Lemma <ref>)≤ (1+λ)^4·max**1+λd^+(x)_s'+θ, *b/4 (By Lemma <ref>)≤ (1+λ)^4·max**1+λd^+(x)_t+θ, *b/4 (Since d^+(x)_s'≤ d^+(x)_t)≤max**1+η b^-1d^+(x)_t+2θ, *b/2Whenever Algorithms <ref> and <ref> terminate, they maintain an orientation G^b where for each edge uv in G^b, d^+(u) ≤max*(1 + η b^-1 ) ·d^+(v) + 2 θ, *b/2. By construction, when calling Insert(uv) at time t we always have d^+(u)_t ≤ d^+(v)_t. As argued in Theorems <ref>+ <ref>s, this new edge may never invalidate Invariant <ref> between u and v.Now consider the chain of edges that get recursively flipped until we reach the final vertex x_f.The vertex x_f is the only vertex for which d^+(x_f)_t+1 = d^+(x_f)_t + 1. Thus, for all other vertex pairs not including x_f, Invariant <ref> is maintained.Since the algorithm terminated at x_f it must be that for all x ∈ N^+(x_f)_t where the constraint was checked at time t:d^+(x_f)_t+1 = d^+(x_f)_t + 1 ≤max*(1+ λ)d^+(x)_t + θ, b/4 = max*(1+ λ)d^+(x)_t+1 + θ, b/4. By Lemma <ref>, it follows that Invariant <ref> is maintained between x_f and all vertices in N^+(x_f)_t+1.The argument for Delete(uv) is symmetrical, applying Lemma <ref> instead. Algorithms <ref>+<ref> spend (λ^-1) time before recursing, except for the outermost call which spends (λ^-1log n) time.Whenever we insert a vertex v ∈ N^-(u), it is either because we added the edge (u, v) to G (occurring once) or, because we flipped an edge uv. 
In the first case, we may afford spending (log_(1+λ)d^+(v)) = (λ^-1log(bρ + log n)) = (λ^-1log n) time searching through all buckets for the bucket containing v.In the latter case, for Insert we have max*(1+λ)d^+(u)_t,b/4<d^+(v)_t+1≤(1+λ)^4max*(1+λ)d^+(u)_t,b/4+1 by Lemma <ref>, and similarly for Delete we have max*(1+λ)(d^+(u)_t-1),b/4<d^+(v)_t≤(1+λ)^3max*(1+λ)(d^+(u)_t-1),b/4 by Lemma <ref>. Using the pointer from u to the bucket B_i(u) where i=*log_(1+λ)max*(1+λ)d^+(u),*b/4, we may insert v into the correct bucket in (1) time. During Insert(uv) or Delete(uv) we loop over at most (λ^-1) elements to change their bucket.By Lemma <ref>, each element changes their position in the data structure by at most 1, which can be done in (1) time. Concluding our argument.By Lemma <ref>, our algorithms maintain Invariant <ref> at all times.By Lemma <ref>, our algorithms spend (λ^-1) time before recursing (except for the outermost call, which uses (λ^-1log n) time). We now make a case distinction. Either we set ( θ = 1, b = 1,η∈Θ(log n) ), or, we set ( θ = 0, b ∈Θ(log n),η=3 ). Because our recursive condition in Algorithms <ref>+<ref> is the same as in Algorithms <ref>+<ref>, we may immediately apply the proofs of Theorem <ref>+<ref> to upper bound the recursive depth of our algorithms by (λ^-1logρ).Thus, the total time for inserting or deleting a single edge in G^b is (λ^-1log n+λ^-2logρ)=(λ^-2logρ) For every update in G, we do Θ(b) updates in G^b. Thus, for both choices of our variables, our algorithms run in time ( b ·λ^-2logρ), and they maintain Invariant <ref> for the chosen θ∈{ 0, 1}. Thus, we conclude:Let G be a dynamic graph and ρ be the density of G at update time. We can choose our variables θ = 0, η=3, and b ∈Θ(log n), b≥ 2 to maintain an out-orientation G^b in worst case (log^3 n logρ)time per operation in G, maintaining Invariant <ref> for G^b with: * ∀ v, the out-degree d^+(v) in G^b is at most ( b·ρ), and* ∀ u, the out-degree of u in G is at most (ρ). Let G be a dynamic graph and ρ be the density of G at time t. We can choose our variablesθ = 1, b = 1 and η∈Θ(log n) to maintain an out-orientationG^b = G in worst case *log^2 n logρ time per update in G such thatInvariant <ref> holds for G. Moreover: * ∀ u, the out-degree d^+(u) in G is at most ( ρ + log n), (i.e. Δ( G) ∈( ρ + log n))§ IMPROVED AMORTISED ALGORITHMS Previously, we relied upon the fact that vertices in v ∈ N^-(u) were put in buckets based on their exact out-degree d^+(v). Maintaining these exact values requires Ω(ρ) update time, and is thus not a suitable option when we aim for polylogarithmic update time. To this end, we store for all edges uv a single integer ϕ(u, v) which we will call their threshold value. Note that ϕ(u, v) ≠ϕ(v, u).We base our algorithmic logic and analysis on the threshold value instead.We maintain for all u:* The value d^+(u) of the current orientation G^b,* The set N^+(u) as a sorted doubly linked list of linked lists L_j(u).Each L_j(u) contains all w ∈ N^+(u) with ϕ(u, w) = j as a linked list in arbitrary order.The linked lists L_j(u) are stored in a linked list sorted by j.We maintain a pointer to the location j = d^+(u). * The set N^-(u) in a sorted doubly linked list of buckets B_j(u). Each bucket B_j(u) contains, as a doubly linked list in arbitrary order, all w ∈ N^-(u) where j = log_(1 + λ) d^+(w). The vertex u has a pointer to the bucket B_i(u) with i = *log_(1 + λ)max*(1+λ)d^+(u),*b/4.Any update in G, invokes Algorithms <ref>+<ref>. 
These algorithms nowinvoke Algorithm <ref> or <ref> (instead of <ref> or <ref>). These two in turn invoke the normal add and remove functions (Algorithm <ref>+<ref>). [t].5[H] [t].5 [H] Delete(uv)Suppose that the graph G^b contains ux. Thend^+(u) ≤max{ (1 + λ) ·ϕ(u, x), b/4}.Fix some arc ux.Whenever the value ϕ(u, x) is set, it is set to d^+(u).Thus, we satisfy the inequality.The only risk to the desired inequality is increasing d^+(u), whilst it is bigger than b/4.Suppose d^+(u)is momentarily increased after adding some arc uv. If the inequality for ux is violated, then x is eligible for the while loop.If the while loop processes x, then it will either flip ux or resets ϕ(u,x).If it flips ux, then ux is removed from the orientation so there is no inequality to satisfy. Otherwise, we reset ϕ(u,x) to d^+(u), which satisfies the inequality.If the while loop doesn't process x, then it must have selected another vertex y ∈ N^+(u) and flipped uy before processing x. In this case, d^+(u) is restored to its previous value when the inequality for ux was satisfied.Suppose that G^b contains an edge uz. Then: ϕ(u, z)≤max{ (1 + λ)^3 ( d^+(z) + θ), b/4}.Fix an arc uz. When the value ϕ(u, z) is set, it is set to d^+(u) at a point in time where d^+(u) ≤max{(1 + λ) d^+(z)+ θ, b/4}. Thissatisfies the desired inequality. The only risk to the desired inequality is when d^+(z) decreases. Now we perform a case distinction.If b/4≥ (1 + λ)d^+(z) + θ, then decreasing d^+(z) did not change the fact that previously ϕ(u, z) ≤b/4.Suppose d^+(z) ≥b/4, and that it momentarily decreases after deleting zy for some y. We know that u ∈ N^-(z), so u ∈ B_j(z) for some integer j.In the loop of Delete(zy), we consider three cases: (a): we encounter the vertex u ∈ N^+(z), without hitting any of the two returns.We know that ϕ(u, z) > (1 + λ) d^+(u) and we set ϕ(u, z) to be d^+(u). This decreases ϕ(u, z), which means that we continue satisfying the inequality.(b): we flip an arc tz and return, restoring d^+(z) to its original (inequality-satisfying) value.(c): before reaching case (a), we encounter an arc tz for t ∈ N^-(z) which causes us to hit return (the else in our code). Because we reached this point in the code before case (a) we know that: * ϕ(u,z) ≤ (1 + λ) ϕ(t,z) (we loop over all buckets B_i(z) in decreasing order and did not encounter the vertex u. Thus, u is either in the same bucket as t or in a lower bucket). * ϕ(t,z) < (1 + λ) d^+(t).* d^+(t) ≤max{ (1 + λ)d^+(z) + θ, b/4} = (1 + λ)d^+(z) + θ.Combining these inequalities, we haveϕ(u,z) ≤ (1 + λ) ϕ(t,z) ≤ (1 + λ)^2 d^+(t) ≤(1 + λ)^2 ·( (1 + λ)d^+(z) + θ)Hence, we recover that ϕ(u,z) ≤ (1 + λ)^3 (d^+(z) + θ). Our amortised algorithms maintain Invariant θ'. We combine Lemma <ref> and <ref> to get that for all z ∈ N^-(u): d^+(u) ≤max{ (1 + λ)max{ (1 + λ)^3 (d^+(z) + θ), b/4},b/4}≤max* (1 + η b^-1) d^+(z) + 2θ, b/2.§.§ Running time analysisWe now move on to the amortised analysis of the algorithm. At a high level, the idea is as follows. Recall that for each arc ux we have a label ϕ(u,x) equal to d^+(u) at some point in time. The labels ϕ(u,x) guide the data structure by suggesting arcs to flip. When we operate on an arc ux based on ϕ(u,x), if ux is not in fact a good arc to work with, then d^+(u) must have deviated substantially from ϕ(u,x), and we reset ϕ(u,x) to d^+(u). Loosely speaking, we amortised the effort to relabelux against the change to d^+(u). Adding an arc to G^b takes ( λ^-1) amortised time. We note that the recursive depth of Insert(uv) may be (ρ). 
However, we show that the amortised cost of each edge that we process is not too bad. Observe that the net effect of adding an arc, after all flips, is to increase the out-degree d^+(u) of a single vertex u by 1.Now, the running time of adding an arc is proportional to the number of arcs ux processed in the while loop over all recursive calls to Insert(uv). Each such edge ux has d^+(u) ≥ (1 + λ) ϕ(u,x) + 2 θ, where ϕ(u,x) was set to d^+(u) at a previous point in time. Consequently d^+(u) has increased by at least a (1+λ)-factor since ϕ(u,x) was set.We amortise the time spent processing arcs ux for fixed u against the increase to d^+(u). Each time an edge insertion results in increase d^+(u), we pay for (1/λ) units of work distributed uniformly over N^+(u). That is, each x ∈ N^+(u) receives Ω(1 / d^+(u)) fractional credits. By the time an arc ux is processed in the while loop of insertion, ux has acquired at least one unit of credit, which pays for the time to process it.Removing an arc from G^b takes (λ^-1logρ) amortised time. The total running time for a deletion is proportional to the total number of arcs processed in the while loop of Delete(uv), over all recursive calls to Delete. Each arc (x,u) processed in the loop (except for the very last one) has one of two outcomes: either it is flipped and we make a recursive call to Delete, or we reset ϕ(x,u).Since our recursive condition is the same as in Algorithm <ref>, we may immediately apply the proofs for upper bounding the recursive depth for deletions fromTheorem <ref> and <ref>; showing that the recursive depth is ( λ^-1logρ). Next we address the number of arcs xu where we reset ϕ(x,u). We note that ϕ(x,u) is only updated when it exceeds d^+(x) by a (1+λ)-factor. In other words, consider the time start when ϕ(x, u) was set. At the time end when it is reset to d^+(x), the out-degree of d^+(x) has decreased (by at least a 1 + λ factor). We consider the approximate rank of d^+(x) at two time steps: a lower bound on the rank when ϕ(x, u) was set, and an upper bound for when it is reset during a deletion.Formally, we write:s = ⌊log_1 + λϕ(x, u)⌋ and t = ⌈log_1 + λ d^+(x) ⌉. Finally we denote δ = s - t. Note that to update our data structure on N^-(u), we need to move ϕ(x, u) by Θ(δ) buckets. We make a case distinction based on whether δ < 3 or δ≥ 3. Case 1: δ < 3. In this case when setting ϕ(x, u) = d^+(x) we need to move x (1) buckets in the data structure on N^-(u). By the time ϕ(u,x) is reset, d^+(x) has decreased by at least (1 + λ)^s - (1+ λ)^s - 1≥λϕ(u, x)/(1 + λ) since ϕ(u,x) was set. The net effect of each deletion (after all flips and recursive calls) is to decrease the degree of a single vertex x by 1. When this occurs, we pay for 4 / λ units of work that are distributed uniformly over N^+(x). Consequently, by the time we reset ϕ(u,x) in a call to Delete(uv) (for some v), x has already acquired one fractional unit of work to pay for the (1) work.Case 2: δ≥ 3. In this case, the rank of d^+(x) decreased by at least Θ(δ) and at least three levels.The net effect of each deletion (after all flips and recursive calls) is to decrease the degree of a single vertex x by 1, at which point we distribute 4 / λ credits over N^-(x).Between time start and end, the out-degree d^+(x) may arbitrarily increase and decrease. 
However, we can always find a sequence of (not necessarily consecutive) edge deletions S = { (α, β)_i } such that after deletion (α, β)_i in G^b, the out-degree d^+(x) decremented by one, and for any pair of consecutive edge deletions (α, β)_i (α', β')_i+1 in S, the out-degree of x at the end of deleting (α, β)_iequals the out-degree of x at the start of deleting (α', β')_i+1. Denote by S^i ⊂ S all deletions in S where after the deletion, the vertex x has a rank t + i + 1 for 0 < i < s - t.For every deletion in S^i, we distribute 4 / λ of units of work over N^+(x).For all i ∈ (0, s - t - 1), decreasing the rank of x from t + i + 1 to t + i requires exactly (1+λ)^t+i + 1 - (1+λ)^t+i = λ (1 + λ)^t+i deletions. Thus, S^i has exactly (1+λ)^t+i + 1 - (1+λ)^t+i = λ (1 + λ)^t+i. Per definition of S^i, after each deletion, N^+(x) has at most (1 + λ)^t + i + 2 out-edges.Thus, whenever we distribute after each deletion 4 / λ credits over all N^+(x), the number of credits per edge C is at least:C ≥∑_i = 0^δ - 2 4 λ^-1# of deletions inS^i /out-degree ofxduring deletions inS^i =∑_i = 0^δ - 24 λ^-1λ (1 + λ)^t + i/ (1 + λ)^t+i + 2 = ∑_i = 0^δ - 24 λ/λ (1 + λ)^2≥δ - 2Here, the second-to-last inequality follows from the fact that λ≤ 1 and thus 4/(1 + λ)^2≥ 1. Hence, for δ≥ 3 we have acquired O(δ) credits on every edge in N^+(x), which we mayuse to pay for relocating x by δ-buckets. We may now apply Corollary <ref> and Corollary <ref>. These set η and b such that λ^-1∈(log n).We note that for every insertion in G, we insert b edges in G^b. Thus, we conclude:Let G be a dynamic graph and ρ be the density of G at update time. We can choose our variables θ = 0, η=3, and b ∈Θ(log n), b≥ 2 to maintain an out-orientation G^b in (log^2 n logρ) amortized time per operation in the original graph G, maintaining Invariant <ref> for G^b with:* ∀ u, the out-degree d^+(u) in G is at most ( ρ + log n), (i.e. Δ( G) ∈( ρ + log n)) Let G be a dynamic graph and ρ be the density of G at time t. We can choose our variables θ = 1, b = 1 and η∈Θ(log n)to maintain an out-orientationG^b = G in amortized *log n logρ time per update in G such thatInvariant <ref> holds for G. Moreover: * ∀ v, the out-degree d^+(v) in G^b is at most ( b·ρ), and* ∀ u, the out-degree of u in G is at most (ρ).§ OBTAINING(1+Ε) APPROXIMATIONSFinally, we note that we can choose our variables carefully to obtain a (1 + ) approximationsof the maximum subgraph density or minimum out-degree. Theorem <ref> implies that, for suitable choices of η and b, we can for any graph G maintain a directed graph G^b (where G^b is the graph G with every edge duplicated b times) such that G^bmaintains Invariant <ref>.By Theorem <ref>, G^b approximates the densest subgraph of G and the minimum out-orientation of G^b (where the approximation factor is dependent on β and η).The running time of the algorithm is ( b^3 ·logα) where α is the arboricity of the graph.In this section we show that for any 0 << 1, we can choosean η>0 and a b ∈(^-2log n) to ensure that G^b maintains a:* (1 + )-approx. of the maximum densest subgraph of G in (^-6log^3 n logα) time.* (1 + )-approx. of the minimum out-orientation of G^b. This implies an explicit (2 + )-approximation of the minimum out-orientation of Gin (^-6log^3 n logα) time.* (1 + )-approx. of the minimum out-orientation of G^b.Through applying clever rounding introduced by Christiansen and Rotenberg <cit.> we obtain an explicit (1 + )-approximation of the minimum out-orientation of G. 
By slightly opening their black-box algorithm, we can show that applying their technique does not increase our running time.Thus, our total running time is thus (^-6log^3 n logα).§.§ Obtaining a (1+ε) Approximation for Densest SubgraphLet G be a dynamic graph subject to edge insertions and deletions withadaptive maximum subgraph density ρ. Let G^b be G where every edge is duplicated b times. Let 0 ≤ϵ < 1. We can maintain an orientation G^b such that ρ≤b^-1·Δ( G^b) ≤ (1+)ρwith update time (^-6log^3(n)logρ ) per operation in G. We apply Theorem <ref> in order to maintain an out-orientation satisfying Invariant <ref>, which by Theorem <ref> satisfies ρ(G^b) ≤Δ(G^b) ≤ (1+γ)(1+η·b^-1)^k_maxρ(G^b).By setting γ=/2,η = 3, b = γ^-1ηlog_(1+γ)n∈(^-2ηlogn), we satisfy the conditions of the Theorem. Since k_max≤log_1+γ n, we find that(1+η·b^-1)^k_max≤ e^η b^-1· k_max≤ e^γ≤ 1+2γ = 1+where the last inequality comes from the fact that for 0 ≤ x ≤ 1, we have e^x ≤ 1+2x. The algorithm of Corollary <ref> can in (1) time per operation, maintain the integers: b^-1, Δ(G^b) and thus a (1 + ) approximation of the value of the density of G.However, to actually output any such realizing subgraph, a bit more of a data structure is needed:For a fully-dynamic graph G, there is an algorithmthat explicitly maintains a (1+) approximation of the maximum subgraph density in (^-6log ^3 n logα ) total time per operation, and that can output a subgraph realizing this density in() time whereis the size of the output. We use Corollary <ref> to dynamically maintain an orientation G^b in (^-6log^3(n)logρ ) per operation in G. Recall (Theorem <ref>) that we defined for non-negative integers i the sets:T_i := * v∈ Vd^+(v) ≥Δ*G^b·*1 + η· b^-1^-i (note that since we maintain Invariant <ref>, the constant c in the previous definition is zero).Let k be the smallest integer such that |T_k+1| < (1 + γ) |T_k|). Moreover, we showed in Corollary <ref> that k is upper bounded by ( ^-1log n). We show in Section <ref> that (the induced subgraph of the vertex set) T_k+1 is an approximation of the densest subgraph of G^b (and therefore of G).We store the vertices of G^b as leaves in a balanced binary tree, sorted on their out-degree.Since every change in G, changes at most (b log n logρ) = (^-2log^2 n logρ) out-degrees in G, we can maintain this binary tree in (^-2log^3 n logρ) additional time per operation in G. Each internal node of the balanced binary tree stores the size of the subtree rooted at that node.Moreover, we store the maximum out-degree Δ(G^b) as a separate integer, and a doubly linked list amongst the leaves.After each operation in G, for each integer i ∈ [0, ^-1log n], we determine how many elements there are in T_i as follows:first, we compute the value V_i = Δ(G^b) · (1 + η· b^-1)^-i.Then, we identify in (log n) time how many vertices have out-degree at least V_i (thus, we determine the size of T_i).It follows that we identify T_k in (^-1log^2 n) additional time. We store a pointer to the first leaf that is in T_k.If we subsequently want to output the densest subgraph of G, we traverse theelements of T_k in () total time by traversing the doubly linked list of our leaves. Related WorkWhile results for densest subgraph <cit.> can be used to estimate maximum degree of the best possible out-orientation, it is also interesting in its own right. 
Sawlani and Wang <cit.> maintain a (1 - )-approximate densest subgraph in worst-case time (^-6log^4 n ) per update where they maintain an implicit representation of the approximately-densest subgraph. They write that they can, in (log n) time, identify the subset S ⊆ V where G[S] is the approximately-densest subgraph and they can report it in (|S|)§.§ Obtaining an almost (1 + ) Approximation for Minimum Out-orientationBy Corollary <ref>, we can dynamically maintain for every graph G,a directed graph G^b (where each edge in G is duplicated b times) such that the maximum out-degree in G^b is at most a factor (1 + ) larger than the minimum out-orientation of G^b.For every edge (u, v) in G, we can now store a counter indicating how many edges point (in G^b) from u to v, or the other way around.The naive rounding scheme, states that the edge (u, v) is directed as uv whenever there are more edges directed from u to v. For any edge, we can decide its rounding in (1) time, thus we conclude:We can maintain for a graph G an orientation G where each vertex has an out-degree of at most(2+ε)α with update time (^-6log^3(n)logρ ) per operation.Obtaining a (1 + )-approximation of the minimum out-orientation of G is somewhat more work.Christiansen and Rotenberg <cit.> show how to dynamically maintain an explicit out-orientation on G of at most (1 + ) α + 2 out-edges. In their proofs, Christiansen and Rotenberg <cit.> rely upon the algorithm by Kopelowitz, Krauthgamer, Porat and Solomon <cit.>.By replacing the KKPS <cit.> algorithm by ours in a black-box like manner, we obtain the following:Let G be a dynamic graph subject to edge insertions and deletions. We can maintain an orientation G where each vertex has an out-degree of at most (1+ε)α + 2 with update time (^-6log^3 n logα) per operation in G, where α is the arboricity at the time of the update. The proof follows immediately from the proof Theorem 26 by Christiansen and Rotenberg <cit.> (using Corollary <ref> as opposed to <cit.>).For the reader's convenience, we will briefly elaborate on how this result is obtained and how we can apply Corollary <ref>. For the full technical details, we refer to the proof of Theorem 26 in <cit.>. * Christiansen and Rotenberg consider a graph G with arboricity α. Moreover, they construct a directed graph G^b which is the graph G where every edge in G is duplicated b ∈(^-2log n) times.[In <cit.>, Christiansen and Rotenberg choose the duplication constant to be γ and write G^γ.]Every operation in G triggers (b) operations in G^b. * On the graph G^b, they run the algorithm by <cit.> to maintain an orientation of G^b where each vertex has an out-degree of at most Δ(G^b) = (1 + ')α· b + log_(1 + ') n for ' = θ() (for instance ' = /4 works).The KKPS <cit.> algorithm uses per operation in G^b:[Christiansen and Rotenberg deliberately use the adaptive variant of KKPS <cit.>.] * **Δ(G^b) ^2= * (1 + )^2 α^2 b^2 + ^-4log^2 n = ( ^-4α^2 log^2 n) time, and* *Δ(G^b)= * (1 + )α b + ^-2log n = ( ^-2αlog n) combinatorial changesin G^b. (here, a combinatorial change either adds, removes, or flips an edge in G^b).* Finally, they deploy a clever rounding scheme to transform the orientation G^b into an orientation of G where the out-degree of each vertex in G is at most a factor 1/b the out-orientation of G^b, plus two. 
Thus, they ensure that each vertex has an out-degree of at most: (1 + ') α + b^-1log_1 + ' n+ 2 ≤ (1 + ') α + '^2/log n·2log n/' + 2 = (1 + ) α + 2since α≥ 1 if the graph has at least one edge (otherwise the claim is vacant). They achieve this in (log n) additional time per combinatorial change in G^b. Specifically: * They consider for every edge (u, v) in G its partial orientation (i.e. how many edges in G^b point from u to v or vice versa).If the partial orientation contains sufficiently many edges directed from u to v, the edge in G gets rounded (directed from u to v). * Let H be a (not necessarily maximal) set ofedges in G whose direction can be determined in this fashion. They call H a refinement. Christiansen and Rotenberg choose H such that in the rounded, directed graph G-H each vertex has an out-degree of at most (1 + ) α.* Christiansen and Rotenberg show that H always can be made into a forest. For all edges in H, they no longer explicitly store the b copies in G^b. Instead, they store for edges in H their (partial) orientation as an integer in [0, b].The forest H gets stored in a top tree where each interior node stores the minimum and maximum partial orientation of all its children. For any path or cycle in H, they can increment or decrement all orientation integers by 1 in (log n) time by lazily updating these maxima and minima in the top tree.For each edge in H, one can obtain the exact partial orientation in (log n) additional time by adding all lazy updates in the root-to-leaf path of the top tree.* In addition, they show how to dynamically maintain a 2-orientation on the forest H in (log n) update time per insertion in the forest. Adding the directed edges from the forest to G ensures that each vertex has an out-degree of at most (1 + ) α + 2.* For each combinatorial change in G^b, they spend (log n) time. Specifically: * each combinatorial change in G^b may remove an edge from the forest. The edge can be rounded in (1) time and removed from the top tree in (log n) time. * each combinatorial change may force an edge in G into the refinement and thus possibly creating a cycle.* When creating a cycle, the authors augment the cycle such that at least one edge on the cycle may be expelled from the refinement. They (implicitly) increment or decrement all orientation integers along the cycle using the lazy top tree in (log n) total time. * Augmenting a cycle causes the out-degree to remain the same for all elements on the cycle. Hence, the Invariants of KKPS <cit.> (and our Invariant <ref>) stay unchanged and the augmentation does not trigger any further operations in G^b. Note also that they specifically always leave at least one duplicate edge in each direction, so that no additional data structures need be updated.* The final edge along the augmented path may subsequently be rounded and added to G-H. Thus, spending (log n) time per combinatorial change in G^b.It follows through these three steps that the algorithm in <cit.> has a running time of: * b ·**Δ(G^b) ^2 +Δ(G^b) log n= *^-6α^2 log^3 n Given the results in this paper, we can instead apply our results as follows: * We again choose b ∈(^-2log n). Each operation in G triggers (b) operations in G^b. * We apply Theorem <ref> (or conversely Corollary <ref>) to maintain G^b such that each vertex has an out-degree of at most Δ(G^b) = (1 + θ()) α b.We proved that this algorithm takes:* (b^2 logα) time per operation in G^b, but* only triggers (b logα) combinatorial changes (edge flips) in G^b. 
* Finally, we apply the rounding scheme by Christiansen and Rotenberg which requires (log n) time per combinatorial change in G^b. Our total running time is (our algorithm + rounding scheme per combinatorial change):*b · b · b logα+ b · b logα·log n= *^-6log^3 n logα. Related WorkHistorically, four criteria are considered when designing dynamic out-orientation algorithms: the maximum out-degree, the update time (or the recourse), amortised versus worst-case updates, and the adaptability of the algorithm to the current arboricity. Brodal and Fagerberg <cit.> were the first to consider the out-orientation problem in a dynamic setting.They showed how to maintain an (α_max) out-orientation with an amortised update time of (α_max+ logn), where α_max is the maximum arboricity throughout the entire update sequence. Thus, their result is adaptive to the current arboricity as long as it only increases.He, Tang, and Zeh <cit.> and Kowalik <cit.> provided different analyses of Brodal and Fagerbergs algorithm resulting in faster update times at the cost of worse bounds on the maximum out-degree of the orientations.Henzinger, Neumann, and Wiese <cit.> gave an algorithm able to adapt to the current arboricity of the graph, achieving an out-degree of (α) and an amortised update time independent of α, namely (log^2 n). Kopelowitz, Krauthgamer, Porat, and Solomon <cit.> showed how to maintain an (α+log n) out-orientation with a worst-case update time of (α^2 + log^2 n) fully adaptive to the arboricity.Christiansen and Rotenberg <cit.> lowered the maximum out-degree to (1+ε)α+2 incurring a worse update time of (ε^-6α^2log^3 n). Finally, Brodal and Berglin <cit.> gave an algorithm with a different trade-off; they show how to maintain an (α_max+log n) out-orientation with a worst-case update time of (log n). This update time is faster and independent of α, however the maximum out-degree does not adapt to the current value of α. § APPLICATIONS In this section, we show how to combine our two trade-offs for out-orientations (theorems <ref>, <ref> with existing or folklore reductions, obtaining improved algorithms for maximal matching, arboricity decomposition, and matrix-vector product. §.§ Maximal matchings For our application in maximal matchings, we first revisit the following result. The authors have not seen this theorem stated in this exact generality in the literature, but similar statements appear in <cit.>, <cit.>, and <cit.>Suppose one can maintain an edge-orientation of a dynamic graph,that has t_u update time,that for each update performs at most r_u edge re-orientations (direction changes), and that maintains a maximum out-degree of ≤ n_o. Then there is a dynamic maximal matching algorithm[When the update time t_u is worst-case, the number of re-orientations r_u is upper bounded by t_u.] whose update time is (t_u+r_u+n_o). Each vertex maintains two doubly-linked lists over its in-neighbors (one for the matched, and one for the available in-neighbors) called in-lists and a doubly-linked list of its out-neighbors called the out-list. When a vertex becomes available because of an edge deletion, it may match with the first available in-vertex if one exists. If no such in-vertex exists, it may propose a matching to its ≤ n_o out-neighbors in the out-list, and then match with an arbitrary one of these if any is available. When a vertex v changes status between matched and available, it notifies all vertices in its out-list, who move v between in-lists in (1) time. 
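To make the reduction concrete, the following Python sketch captures the list bookkeeping described so far (the handling of edge reorientations continues in the proof below). It is a simplification under our own naming: Python sets stand in for the doubly linked lists, and the "propose" step scans the out-list directly, so the stated time bounds are only indicative.

class MatchingVertex:
    # Bookkeeping for one vertex in the maximal-matching reduction.
    def __init__(self, name):
        self.name = name
        self.matched_to = None      # current partner, or None if available
        self.out_list = set()       # out-neighbours under the maintained orientation
        self.in_available = set()   # available (unmatched) in-neighbours
        self.in_matched = set()     # matched in-neighbours

    def notify(self, v, became_available):
        # An in-neighbour v changed status; move it between the two in-lists.
        if became_available:
            self.in_matched.discard(v)
            self.in_available.add(v)
        else:
            self.in_available.discard(v)
            self.in_matched.add(v)

def set_status(u, partner):
    # Match or unmatch u and tell u's out-neighbours, who keep u in their in-lists.
    u.matched_to = partner
    for w in u.out_list:
        w.notify(u, became_available=(partner is None))

def rematch(u):
    # Called when u becomes available, e.g. after its matched edge was deleted.
    candidates = list(u.in_available) or [v for v in u.out_list if v.matched_to is None]
    if candidates:                  # match with an arbitrary available neighbour
        v = candidates[0]
        set_status(u, v)
        set_status(v, u)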
Finally, when an edge changes direction, each endpoint needs to move the other endpoint between in- and out-lists. The bookkeeping of moving vertices between unordered lists takes constant time. For each edge insertion or deletion, we may spend additionally (n_o) timeproposing to or notifying to out-neighbors to a vertex, for at most two vertices for each deletion or insertion respectively. With this application in mind, some desirable features of out-orientation algorithms become evident:* we want the number of out-edgesto be (asymptotically) low, and* we want the update time to be efficient, preferably deterministic and worst-case. Here, a parameter for having the number of out-edges asymptotically as low as possible, can be sparseness measures such as the maximum subgraph density or the arboricity of the graph. An interesting challenge for dynamic graphs is that the density may vary through the course of dynamic updates, and we prefer not to have the update time in our current sparse graph to be affected by a brief occurrence of density in the past. In the work of Henzinger, Neumann, and Wiese, they show how it is possible to adjust to the current graph sparseness in the amortised setting <cit.>. In this paper, however, we are interested in the case where both the update time is worst-case and the number of re-orientations is bounded. One previous approach to this challenge is to take a fixed upper bound on the sparseness as parameter to the algorithm, and then use log n data structures in parallel <cit.>. Since we want the number of re-orientations to also be bounded, we cannot simply change between two possibly very different out-orientations that result from different bounds on the sparseness. Any scheme for deamortising the switch between structures would be less simple than the approach we see in this paper.There is a deterministic dynamic maximal matching algorithm with worst-case (α + log ^2 n logα) update time, where α is the current arboricity of the dynamic graph. The algorithm also implies a 2-approximate vertex cover in the same update time.Related WorkMatchings have been widely studied in dynamic graph models. Under various plausible conjectures, we know that a maximum matching cannot be maintained even in the incremental setting and even for low arboricity graphs (such as planar graphs) substantially faster than Ω(n) update time <cit.>. Given this, we typically relax the requirement from maximum matching to maintaining matchings with other interesting properties. One such relaxation is to require that the maintained matching is only maximal. The ability to retain a maximal matching is frequently used by other algorithms, notably it immediately implies a 2-approximate vertex cover.In incremental graphs, maintaining a maximal matching is trivially done with the aforementioned greedy algorithm.For decremental[Maintaining an approximate maximum matching decrementally is substantially easier than doing so for fully dynamic graphs. Indeed, recently work by <cit.> matches the running times for approximate maximum matching in incremental graphs <cit.>. However, for maximal matching, we are unaware of work on decremental graphs that improves over fully dynamic results.] 
or fully dynamic graphs, there exist a number of trade-offs (depending on whether the algorithm is randomised or determinstic, and whether the update time is worst case or amortised).Baswana, Gupta, and Sen <cit.> and Solomon <cit.> gave randomised algorithms maintaining a maximal matching with (log n) and (1) amortised update time. These results were subsequently deamortised by Bernstein, Forster, and Henzinger <cit.> with only a n increase in the update time. For deterministic algorithms, maintaining a maximal matching is substantially more difficult. Ivkovic and Lloyd <cit.> gave a deterministic algorithm with ((n+m)^√(2)/2) worst case update time. This was subsequently improved to (√(m)) worst case update time by Neiman and Solomon <cit.>, which remains the fastest deterministic algorithm for general graphs.Nevertheless, there exist a number of results improving this result for low-arboricity graphs. Neiman and Solomon <cit.> gave a deterministic algorithm that, assuming that the arboriticty of the graph is always bounded by α_max, maintains a maximal matching in amortised time (min_β>1{α_max·β + log_β n}), which can be improved to (log n/loglog n) if the arboricity is always upper bounded by a constant. Under the same assumptions, He, Tang, and Zeh <cit.> improved this to (α_max + √(α_maxlog n)) amortised update time. Without requiring that the arboricity be bounded at all times, the work by Kopelowitz, Krauthgsamer, Porat, and Solomon <cit.> implies a deterministic algorithm with (α^2 + log^2 n) worst case update time, where α is the arboricity of the graph when receiving an edge-update. §.§ Dynamic ∆+1 colouring Suppose one can maintain an edge-orientation of a dynamic graph,that has t_u update time,that for each update performs at most r_u edge re-orientations (direction changes), and that maintains a maximal out-degree of ≤ n_o. Then there is a dynamic Δ+1-colouring algorithm whose update time is (t_u+r_u+n_o).For a vertex v, say a colour is in-free if no in-neighbor of v has that colour.For a vertex of degree d, keep a doubly linked list of in-free colours from the palette0,1,…,d. Keep an arrayof size d+1 where the i'th entry points to a doubly-linked list of in-neighbors of colour i, and an arrayof size d+1 where the i'th entry points to the i'th colour in the list of in-free colours if the i'th colour is in-free.The colour of a vertex v is found by finding a colour that is both in-free and out-free: examine the ≤ n_o out-neighbors, and use the -array to temporarily move the ≤ n_o out-taken colours to a list . Give v an arbitrary free colour from the remaining list, and undo thelist. This takes (n_o) time, and gives v a colour between 0 and its degree.When an edge changes direction, this incurs (1) changes to linked lists and pointers. When an edge update incurs r_u edge re-orientations, we thus have (r_u) such changes. When an edge is inserted/deleted from a properly coloured graph, at most one vertex needs to be recoloured, either because there is a colour conflict, or because its colour number is larger than its degree. This vertex can be recoloured in (n_o) time. Thus, the total time per edge insertion or deletion is (t_u + r_u + n_o).There is a deterministic dynamic Δ+1 colouring algorithm with worst-case (α + log ^2 n logα) update time, where α is the current arboricity of the dynamic graph.Related WorkPrevious work presented randomised algorithms with constant amortised update time per edge insertion/deletion <cit.>. 
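As a concrete illustration of the recolouring step in the construction above, the following Python sketch finds a colour that is simultaneously in-free and out-free. It is a simplification with our own naming: it scans the in-neighbours directly, whereas the construction keeps per-vertex lists of in-free colours so that only the at most n_o out-neighbours are examined per recolouring.

def recolour(v, in_neighbours, out_neighbours, colour):
    # Give v a colour in {0, ..., deg(v)} used by no in- or out-neighbour.
    degree = len(in_neighbours[v]) + len(out_neighbours[v])
    taken = {colour[u] for u in in_neighbours[v]} | {colour[u] for u in out_neighbours[v]}
    # The palette {0, ..., degree} has degree + 1 colours while at most
    # degree of them are taken, so some colour is always free.
    for c in range(degree + 1):
        if c not in taken:
            colour[v] = c
            return c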
For deterministic algorithms, <cit.> showed that if one is willing to use (1+o(1))·Δ, colours, a (Δ) amortised update time is possible. Solomon and Wein <cit.> extended the algorithm by <cit.> and further showed that it is possible to maintain an αlog^2 n colouring in (n) amortised update time. §.§ Dynamic matrix vector product Suppose we have an n × n dynamic matrix A, and a dynamic n-vector x, and we want to maintain a data structure that allows us to efficiently query entries of Ax. The problem is related to the Online Boolean Matrix-Vector Multiplication (OMV), which is commonly used to obtain conditional lower bounds <cit.>. If A is symmetric and sparse, in the sense that the undirected graph G with A as adjacency matrix has low arboricity, then we can use an algorithm for bounded out-degree orientation as a black-box to give an efficient data structure as follows:Suppose one can maintain an edge-orientation of a dynamic graph with adjacency matrix A, that has t_u update time, that for each update performs at most r_u edge re-orientations (direction changes), and that maintains a maximal out-degree of ≤ n_o. Then there is a dynamic matrix-vector product algorithm that supports entry-pair changes to A in (t_u+r_u) time, entry changes to the vector x in (n_o) time, and queries to the an entry of product Ax in (n_o) time. Let each node i store the sum s_i=∑_j∈ N^-(i)A_ijx_j, i.e. the sum of the terms of (Ax)_i=∑_j∈ N(i)A_ijx_j corresponding to incoming edges at i.Changing entry A_ij=A_ji in the matrix to or from 0 corresponds to deleting or inserting an edge, which takes t_u time and does at most r_u edge re-orientations. Updating the (1) affected sums after inserting, deleting, re-orienting, or re-weighting an edge takes worst case (1) time. Any entry update to the matrix A thus takes (t_u+r_u) time.When a vector entry x_j changes, we need to update the at most n_o sums {s_i}_i∈ N^+(j), which can be done in worst case (n_o) time. Finally, the query for (Ax)_i is computed as (Ax)_i=s_i+∑_j∈ N^+(i)A_ijx_j in worst case (n_o) time. This result is used in <cit.> to give an algorithm for dynamic matrix vector product with running time (α^2+log^2 n) for updating the matrix, and (α+log n) for updating the vector and for queries.Combining this theorem with our Theorem <ref> gives us an algorithm for dynamic matrix vector product with slightly improved time for updating the matrix:Let A be a symmetric n× n matrix, and let G be the undirected graph whose adjacency matrix is A. Let x be an n dimensional vector. Then we can support changes to A in (log^2 nlogα) worst case time, changes to x in (α+log n) worst case time, and for each i∈{1,…,n} we can report ∑_j=1^nA_ijx_j in worst case (α+log n) time. If we instead combine with our Theorem <ref> we get an algorithm for dynamic matrix vector product with slightly worse time for updating the matrix, but improved time for updating the vector and for queries:Let A be a symmetric n × n matrix, and let G be the undirected graph whose adjacency matrix is A. Let x be an n dimensional vector. Then we can support changes to A in (log^3 nlogα) worst case time, changes to x in (α) worst case time, and for each i∈{1,…,n} we can report ∑_j=1^nA_ijx_j in worst case (α) time. §.§ Dynamic arboricity decomposition Suppose one can maintain an edge-orientation of a dynamic graph,that has t_u update time, and that maintains a maximal out-degree of ≤ n_o. 
Then there is an algorithm for maintaining a decomposition into 2n_o forests whose update time is (t_u).Firstly, as noted in <cit.>: By assigning the i'th out-edge of a vertex u to subgraph S_i, one obtains a decomposition into n_o subgraphs, each of which is a pseudoforest. Every vertex has at most one out-edge in each pseudoforest S_i, and thus, the at most one cycle in each tree of the pseudoforest is a directed cycle according to the orientation.For maintaining this dynamic pseudoforest decomposition, there is only an (1) overhead per edge-reorientation, yielding an (n_o)-time algorithm for maintaining n_o pseudoforests.Then, as noted in <cit.>, we may split each pseudoforest S_i into two forests f_i and f_i' by the following simple algorithm: given a new edge e in f_i, notice that there is at most one edge e' in f_i incident to its head. Now, one can safely insert e in any of the two forests {f_i , f_i'} that does not contain this at most one edge e'. Thus, consequently, neither f_i nor f_i' will contain a cycle. Thus, by applying Theorem <ref>, we obtain the following: There is a deterministic algorithm for maintaining an arboricity decomposition into (α) forests, whose worst-case update time is (log ^3 n logα), where α is the current arboricity of the dynamic graph. Related WorkWhile an arboricity decomposition of a graph; a partition of its edges into as few forests as possible; is conceptually easy to understand, computing an arboricity decomposition is surprisingly nontrivial. Even computing it exactly has received much attention <cit.>.The state-of-the-art for computing an exact arboricity decomposition runs in Õ(m^3/2) time <cit.>. In terms of not-exact algorithms there is a 2-approximation algorithm <cit.> as well as an algorithm for computing an α+2 arboricity decomposition in near-linear time <cit.>.For dynamic arboricity decomposition, Bannerjee et al. <cit.> give a dynamic algorithm for maintaining the current arboricity. The algorithm has a near-linear update time. They also provide a lower bound of Ω(logn).Henzinger Neumann Wiese <cit.> provide an (α) arboricity decomposition in ((log n , α)) time; their result also goes via out-orientation, and they provide a dynamic algorithm for maintaining a 2α' arboricity decomposition, given access to any black box dynamic α' out-degree orientation algorithm.Most recently, there are algorithms for maintaining (α+2) forests in (poly (log(n) , α)) update-time <cit.>, and (α + 1) forests in Õ(n^3/4poly(α)) time <cit.>. plain | http://arxiv.org/abs/2310.18146v3 | {
"authors": [
"Chandra Chekuri",
"Aleksander Bjørn Christiansen",
"Jacob Holm",
"Ivor van der Hoog",
"Kent Quanrud",
"Eva Rotenberg",
"Chris Schwiegelshohn"
],
"categories": [
"cs.DS"
],
"primary_category": "cs.DS",
"published": "20231027135153",
"title": "Adaptive Out-Orientations with Applications"
} |
[ [ January 14, 2024 ==================== The amount of news being consumed online has substantially expanded in recent years. Fake news has become increasingly common, especially in regional languages like Malayalam, due to the rapid publication and lack of editorial standards on some online sites. Fake news may have a terrible effect on society, causing people to make bad judgments, lose faith in authorities, and even engage in violent behavior. When we take into the context of India, there are many regional languages, and fake news is spreading in every language. Therefore, providing efficient techniques for identifying false information in regional tongues is crucial. Until now, little to no work has been done in Malayalam, extracting features from multiple modalities to classify fake news. Multimodal approaches are more accurate in detecting fake news, as features from multiple modalities are extracted to build the deep learning classification model. As far as we know, this is the first piece of work in Malayalam that uses multimodal deep learning to tackle false information. Models trained with more than one modality typically outperform models taught with only one modality. Our study in the Malayalam language utilizing multimodal deep learning is a significant step toward more effective misinformation detection and mitigation. § INTRODUCTIONThe spread of false or misleading information as if it were true news has become a big concern in our current digital age. This issue is not confined to major languages alone; it also impacts regional languages like Malayalam. During the COVID-19 pandemic, a plethora of false information circulated online. They even said that treatments with vinegar, tea, and salt water could be used as effective remedies<cit.><cit.>. Identifying fake news before it spreads is crucial to prevent it from spreading and any associated potential damage it causes. In the past, many fake news identification approaches have been reported in the machine learning literature with varying degrees of success <cit.><cit.><cit.>. With the advancement in computing and social media proliferation, new advanced approaches such as multi-modal news content started being shared by people across the globe <cit.><cit.><cit.>. The text-only approaches may not give good detection accuracies in this case, leading to the development of multi-modal fake news detection techniques<cit.><cit.>. The knowledge graphs-based approaches have also become prominent in identifying fake news<cit.>. Multi-modal fake news identification itself is very challenging, and detecting fake news from low-resource languages is even more challenging<cit.><cit.>.This study presents a novel approach to detect and counteract misinformation in Malayalam, which is a low-resource language that is highly agglutinative as well<cit.><cit.>.Malayalam is a South Indian language with a rich cultural heritage and a growing online presence. With a substantial Malayalam-speaking population, it is crucial to protect the integrity of information within this linguistic community. Fake news specifically targets regional languages like Malayalam, capitalizing on linguistic and cultural nuances often overlooked in the fight against disinformation. We aim to achieve this by incorporating diverse data types, such as text and images, to safeguard the accuracy of information within the Malayalam-speaking community. As fake news increasingly targets regional languages, there is a growing need for specialized solutions. 
This paper aims to tackle these issues directly and ensure the responsible sharing of accurate news and information in the digital age. Fake news has become a pervasive challenge in our digital landscape, where misinformation can easily spread across various online platforms, including social media. The consequences of fake news are far-reaching, as it can incite panic, manipulate public opinion, and erode trust in credible sources of information.The proposed methodology includes natural language processing and deep learning techniques <cit.>. NLP algorithms analyze the text content of news articles, social media posts, and other sources. In parallel, deep learning techniques are applied to analyze images associated with news stories. This includes using deep neural networks to detect manipulated or doctored images and identify misleading visual content. This combination of NLP and deep learning techniques allows a more comprehensive assessment of the presented information. The effectiveness of any fake news detection model relies on the quality and diversity of the data sources used for training and testing. In the case of Malayalam, a robust dataset of news articles, social media posts, and multimedia content in the language is essential. Collaborations with local news outlets, social media platforms, and community-driven initiatives provide access to a broad range of Malayalam content. Developing a fake news detection model for Malayalam involves creating a hybrid system that combines NLP and deep learning components. Using supervised learning methods, the model is trained on a labeled dataset of real and false news in Malayalam. Convolutional neural networks and recurrent neural networks, two deep learning techniques, create a model that distinguishes between the two groups. The major contributions of this paper are outlined as follows: * Surveys some of the recent state-of-the-art approaches in multi-modal fake news identification approaches.* Proposed a multi-modal framework incorporating modules for understanding images and text to identify fake news.* Experimentally verifies the usefulness of the proposed approach in terms of precision, recall, and accuracy measures.§ RELATED STUDIESSome of the most innovative methods for multi-modal false news identification documented in the machine learning literature are shown in the paper of Segura et al. <cit.>. The paper describes a method for detecting fake news using unimodal and multimodal approaches. This paper examines the advantages of combining textual and visual data and assesses fake news using the Fakeddit dataset. The authors extract information from images using pre-trained models, which they then mix with textual features to create a multimodal representation of each instance. They discovered that, with an accuracy of 87%, the multimodal approach based on CNN that took into account both text and image data yielded the best results. An approach for online social networks (OSNs) using text and visual data is presented bySantosh Kumar Uppada et al. <cit.>. They emphasize how misleading visual news may be and how it affects people psychologically. To analyze images and text, the authors use various methods, such as Error Level Analysis, VGG-16, VGG-19, Xception, Inception-Resnet50, and BERT models. The research employs approximately 1 million samples from the Fakeddit dataset, which includes text, photos, metadata, and comments. 
The technique, including the framework, picture editing, polarity-based fake post-detection models, and fusion models, is covered in detail in the paper. Additionally, it explains the outcomes of the experiments and offers an error analysis of the suggested model.Anit Sara Santhosh et al. published a paper to detect Fake News using Machine Learning <cit.>. The goal of the research is to develop a machine-learning model that can reliably identify if news in Malayalam is true or not. By utilizing NLP strategies and machine learning methods like the TF-IDF Vectorizer and Passive Aggressive Classifier, the authors hope to accomplish the problem. The study's approach entails gathering data, preprocessing it, creating a model, analyzing it, and reporting the findings. The authors used a dataset with 316 rows and 2 columns that had labels that were both fake and true and headings that indicated whether a piece of Malayalam news was true or fake. The work of Sudhanshu Kumar et al. <cit.> discusses a strategy for handling fake news in Hindi. After employing NLP techniques for feature engineering and pre-processing, the proposed method uses machine learning and deep learning to classify news articles. The study uses two independent datasets: FNC-1 and a news dataset in Hindi. The methods for feature extraction employed include TF-IDF and bag-of-words. Multilayer Perceptron, Multinomial Naive Bayes, Support Vector Machine, Logistic Regression, and Long Short-Term Memory (LSTM) are some of the categorization techniques employed.A detailed description of the preprocessing, feature extraction, classification, and prediction algorithms can be found in the Priyanshi Shah et al<cit.> publication. This article outlines a thorough plan to stop the spread of false information on social media. It initiates with a Textual Feature Extractor, utilizing sentiment analysis to gauge the emotional tone of news articles, a critical step as fake news often manipulates emotions to deceive readers. In parallel, a Visual Feature Extractor processes accompanying images, involving resizing, grayscale conversion, and advanced techniques like K-means clustering and Discrete Wavelet Transform (DWT) to extract significant visual features. These extracted features serve as crucial inputs for subsequent steps. The optimized feature vectors from textual and visual components are then fine-tuned using a Cultural Algorithm, which expertly combines normative and situational knowledge to refine the feature set while minimizing computational costs. Finally, the Fake News Detector merges the optimized feature vectors and employs a kernel Support Vector Machine (SVM) for classification, leveraging the power of SVMs to distinguish between real and fake news. The classifier undergoes rigorous training with labeled data, aided by cross-validation to ensure its robustness. This holistic approach, integrating both textual and visual information, is designed to significantly enhance fake news detection accuracy, as substantiated through a series of extensive experiments on real-world datasets, where it consistently outperforms existing methods by an average margin of 9% in terms of accuracy.In the work of Suryavardan et al. <cit.>, a multimodal fact-checking dataset called FACTIFY 2 is introduced as a solution to the rising problem of fake news and disinformation on the internet. By adding additional data sources and satirical articles, this dataset expands upon its predecessor, FACTIFY 1, and now contains 50,000 new data instances. 
Based on the inclusion of textual and visual data between claims and their supporting documents, FACTIFY 2 encompasses five separate categories: Support Text, Support Multimodal, Insufficient Text, Insufficient Multimodal, and Refute. We offer a baseline model based on BERT and Vision Transformer (ViT), showcasing its efficacy with a test set F1 score of 65%. An approach for SpotFake fake news proliferation on social media is addressed in the paper of Singhal et al. <cit.>. The authors introduce SpotFake, a novel multi-modal framework for detecting fake news without additional subtasks. This framework leverages textual and visual features, utilizing BERT for text analysis and pre-trained VGG19 for image features. The experiments conducted on Twitter and Weibo datasets demonstrate that SpotFake outperforms current state-of-the-art models significantly, improving accuracy and precision for fake news detection. The paper also highlights the importance of using multiple modalities for improved fake news detection, as supported by a public survey that revealed the benefits of combining text and image information. Overall, SpotFake represents a promising approach for addressing the challenge of fake news detection in the age of multimedia information.In a paper by Rina Kumari et al. <cit.>. (2021), they discuss detecting news using a multimodal factorized bilinear pooling approach. Fake news is purposely created to mislead people. They can have consequences for society. With abundant multimedia information on platforms like Twitter, Facebook, and blogs, it becomes challenging to identify news. To address this issue, the authors proposed a framework that combines visual information to maximize their correlation. By analyzing posts at a stage in the network, the model determines whether they are genuine or fake. The proposed framework surpasses existing models by 10 percentage points, achieving better performance with balanced F1 scores for both fake and real classes. The framework consists of four sub-modules; Attention Based Stacked Bidirectional Long Short Term Memory (LSTM), Attention Based Multilevel Convolutional Neural Network (CNN) ABM CNN RNN, MFB, and Multi-Layer Perceptron (MLP). Importantly this approach does not require any user or network details. The effectiveness of this method is evaluated on two datasets; Twitter and Weibo. Additionally, the model's complexity is significantly reduced compared to state-of-the-art models.Alessandro Bondielli et al.'s paper for the MULTI-false-DetectiVE challenge for the EVALITA 2023 campaign <cit.> examines multimodality in the context of false news. Gaining an understanding of the interaction between text and visuals and assessing the efficacy of multimodal false news detection systems are among the responsibilities. The study makes the case that the issue is unresolved and suggests some potential paths forward. In recent years, disinformation has developed into a potent strategic tool, particularly regarding actual occurrences covered as breaking news. The initial "Infodemic" that followed the COVID-19 epidemic in recent years has shown the distorted usage of online social media. Large language models and other related approaches are also covered in the article. Pengwei Zhan et al.'s study <cit.> describes a strategy for stopping the severe spread of fraudulent information on social media. There are serious consequences due to fake news spreading on social media. Automatically identifying fake news is essential to reducing these effects. 
To more effectively combine textual and visual information for false news identification, Multimodal Co-Attention Networks (MCAN) are suggested. MCAN outperforms state-of-the-art techniques and can learn inter-dependencies among multimodal characteristics. It is more effective to detect fake news when text and images are combined, according to recent research. Nevertheless, conventional expert identification techniques disregard pictures' semantic properties.In the paper <cit.>, researchers from the University of Melbourne provide a novel framework for cross-domain detection of false news using multi-modal data. The issue of fake news is a serious social concern that cannot be solved quickly via manual research. To detect false news, most research investigates supervised training models using various news record modalities; nevertheless, the effectiveness of these methods often decreases when news records originate from diverse domains. The scientists suggest an unsupervised method for choosing meaningful unlabeled news records for manual labeling to increase the labeled dataset's domain coverage. When the suggested fake news model and selective annotation method are combined, crossing-domain news datasets may perform at cutting-edge levels, and typically appearing domains in news datasets can see significant gains in performance. According to the authors, automatic fake news identification has emerged as a major issue that is getting much attention from researchers. However, it is challenging to spot false news since existing methods are restricted to a specific industry, such as politics, entertainment, or healthcare. The study <cit.> describes the unique Multimodal Co-Attention Networks (MCAN) presented by the Association for Computational Linguistics (ACL-IJCNLP) to better merge textual and visual features for false news identification. Fake news has become increasingly prevalent thanks to social media, which has had detrimental effects. Recently, tweets containing photos have gained popularity on social media because they provide richer content and draw in more people than tweets that solely contain text. This benefit is also fully used by fake news to tempt and confuse readers. MCAN outperforms state-of-the-art techniques and can learn inter-dependencies among multimodal characteristics. It is more effective to detect false news when text and the linked image are combined, according to recent research. Nevertheless, current methods for feature fusion and extraction are not sufficiently fine-grained.The proliferation of fake news on social media is a serious problem with documented negative impacts on individuals and organizations described in the paper of <cit.>. Researchers are building detection algorithms with an aim for high accuracy. This paper presents a novel approach using a Cultural Algorithm with situational and normative knowledge to detect fake news using text and images. The proposed method outperforms the state-of-the-art methods for identifying fake news in terms of accuracy by 9% on average. Fake news is intentionally written to confuse viewers, making it nontrivial to identify simply based on news content. The article discusses the prevalence of fake news and the need for research on automating the detection of false information and verifying its accuracy, which is discussed in the paper of <cit.> . 
It presents the outcome of the Factify 2 shared task, which provides a multi-modal fact verification and satire news dataset, as part of the DeFactify 2 workshop at AAAI 2023. The data calls for a comparison-based approach: social media claims are paired with supporting documents, each with both text and image, and divided into five classes. The best performances came from using DeBERTa for text and Swinv2 and CLIP for images, and the highest F1 score averaged over all five classes was 81.82%. The article highlights the difficulty of uncovering misleading statements before they cause significant harm and notes that the scarcity of available training data hinders automated fact-checking efforts. The rapid distribution of news across numerous media sources, particularly on social media, has led to the fast spread of erroneous and fake content. The emergence of multi-modal fake news has caused social harm and is difficult to detect accurately. A study on this problem is presented in <cit.>. Traditional detection methods fuse different modalities without considering their individual impacts, leading to low accuracy. To address this, a new attention and adversarial fusion method, built on the pre-trained language model BERT, has been developed: the attention mechanism captures differences between modalities, while the adversarial mechanism captures correlations between them. The proposed method achieves 5% higher accuracy than the traditional approach. To counter the proliferation of fake news, in-depth research into automatic monitoring methods is needed. The article discusses the main problems in detecting fake news and how to distinguish it according to its characteristics, including sources, texts, and attached pictures. It also highlights the importance of exploiting the complementarity of different features to improve detection accuracy. § MATERIALS AND METHODS §.§ Long-Short Term Memory (LSTM) LSTM is a specific kind of recurrent neural network (RNN) that was created to address the intrinsic drawbacks of conventional RNNs, including the vanishing gradient problem.<cit.> The LSTM design is well known for overcoming this issue, enabling the network to capture both short- and long-term relationships in sequential data efficiently. Because of this property, LSTM is well suited to tasks that require a thorough comprehension of context, the recognition of complex patterns spanning extended periods, and the discovery of long-range links between elements. The main component of our multimodal fake news detection technique is an LSTM trained to interpret the textual data. Its job is to learn short- and long-term relationships within sequences in order to identify linguistic signs typical of fake news in Malayalam. The LSTM methodically processes text from diverse sources, looking for language patterns and irregularities, which improves accuracy by capturing the subtle linguistic details associated with false news. §.§.§ LSTM architecture The LSTM is a recurrent architecture designed to tackle the vanishing gradient problem of standard RNNs. Three types of gating units—the input gate, forget gate, and output gate—control the information flow into and out of the memory cell, which is the central component of the architecture. The memory cell itself is capable of maintaining its state over time.
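For reference, the gating just described corresponds to the standard LSTM cell equations (the textbook formulation, with x_t the input, h_{t-1} the previous hidden state, c_t the cell state, σ the sigmoid function, and ⊙ element-wise multiplication; it is included only to make the role of each gate explicit):

i_t = σ(W_i x_t + U_i h_{t-1} + b_i)        (input gate)
f_t = σ(W_f x_t + U_f h_{t-1} + b_f)        (forget gate)
o_t = σ(W_o x_t + U_o h_{t-1} + b_o)        (output gate)
c̃_t = tanh(W_c x_t + U_c h_{t-1} + b_c)     (candidate cell state)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t             (cell state update)
h_t = o_t ⊙ tanh(c_t)                        (hidden state / output)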
Information entering the memory cell is managed by the input gate, and information discarded from the memory cell is managed by the forget gate. The output gate manages the flow of information from the memory cell to the network's output. Each gate is built around a sigmoid activation function, which outputs a number between 0 and 1 that determines how much information may pass through the gate. In addition to the gating units, the LSTM architecture contains a set of weights and biases used to compute the activations of the gates and the memory cell. These weights and biases are learned during training via backpropagation through time (BPTT), a variant of the backpropagation algorithm used to train RNNs. The LSTM design has been shown to capture long-term temporal dependencies successfully and has advanced the state of the art in many demanding tasks. The architecture of the LSTM model is shown in Figure <ref>.

We train our LSTM model with pre-trained word embeddings, giving it a solid linguistic foundation. These embeddings let the model exploit semantic information and recognize complex linguistic cues in Malayalam. They also simplify training and improve the model's ability to spot subtle patterns and irregularities in text. Pre-trained embeddings thus serve as a link between the analytical capabilities of our LSTM model and the linguistic complexity of Malayalam, improving the accuracy and precision of our fake news detection system.

§.§ VGG-16

VGG-16 stands for "Visual Geometry Group 16". It is a versatile deep learning architecture primarily used for image classification, object recognition, feature extraction, transfer learning, and benchmarking in various computer vision applications. Its depth and effectiveness on standard datasets have made it popular in the deep learning community. The "16" in VGG-16 refers to its 16 weight layers. The core of VGG-16 consists of 13 convolutional layers, which extract increasingly complex patterns and information from the input images. These convolutional layers consistently use 3x3 filters, scan the input with a stride of 1, and use "same" padding to preserve the spatial dimensions, so that crucial spatial information is retained throughout the network. After each block of convolutional layers, VGG-16 adds a 2x2 max-pooling layer with a stride of 2 <cit.>. These layers downsample the feature maps, gradually reducing their spatial size, while the channel depth grows in the subsequent convolutional blocks; this design helps the network capture hierarchical features efficiently. The architecture of the VGG-16 model is shown in Figure <ref>.

§.§ Dataset

The Multimodal Fake News Detection dataset has been assembled to support the development and evaluation of algorithms tailored to identifying fake news articles across a range of online platforms, including Facebook and various news websites such as Manoramaonline, One India, Anweshanam, Mangalam, Janam TV, 24 News, Asianet News, Samayam, and VishvasNews.
This dataset comprises four fundamental components (news_headline, news_url, image_url, and image_name) alongside manually assigned binary labels designating the veracity of each news article, with 0 signifying fake and 1 indicating true.

* News_headline: the headline or title of each news article, serving as the textual basis for subsequent analysis and classification.
* News_url: the URL of the source from which the news article originates. This information is invaluable for cross-referencing and source validation.
* Image_url: the URL of the image associated with each news article. Images provide supplementary context and visual cues that help detect fake news.
* Image_name: an automatically generated identifier for the associated image, linking the image to its respective news article.
* Target class (label): each news article in the dataset is categorized as either "fake" (0) or "true" (1) based on a careful process of manual assessment and content verification.

The dataset contains 1852 data points: 926 fake news articles and 926 true news articles. Data acquisition involved compiling records from various sources, including Facebook and multiple news websites. For each news article, the news_headline, news_url, and image_url were extracted and integrated into the dataset, and the image_name was generated algorithmically from the image_url to ensure a coherent association. Labels for fake (0) and true (1) news were assigned with scrupulous attention to manual content verification. A snapshot of the dataset is shown in Figure 3.

§ PROPOSED APPROACH

This section outlines our methodology for detecting fake news in the Malayalam language through a multimodal framework. Our methodology integrates text and image information, harnessing natural language processing and deep learning techniques to enhance the accuracy and robustness of our model. The overall workflow and architecture of the proposed approach are shown in Figure <ref>.

§.§ Feature Extraction

§.§.§ Text Feature Extraction

Our approach to identifying fake news relies heavily on textual content as a data source. The text feature extraction process consists of several essential steps that translate the textual data into a format suitable for analysis. As our primary text data, we collect the headlines of news items, which carry important signals about reliability. We first preprocess the Malayalam text before moving on to feature extraction: special characters, HTML tags, digits, and other components that are unnecessary for the task are removed, and numeric values are replaced with the generic token "NUM" to generalize numerical data. This preprocessing yields a cleaner and more reliable representation of the text. Next, we tokenize the cleaned headlines into words, following Malayalam language conventions; tokenization breaks the text into individual units, allowing us to work with words as separate entities. We then use Word2Vec for textual feature extraction, building word embeddings that capture the semantic associations between words in the Malayalam text. These word vectors play a crucial role in enhancing the understanding of textual content and improving the accuracy of our fake news detection system.
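The sketch below illustrates this preprocessing and embedding pipeline. It is a minimal example, not the project's actual code: the `headlines` variable, the regular expressions, and the Word2Vec parameters other than the 300-dimensional vectors are illustrative assumptions.

```python
import re
from gensim.models import Word2Vec

def clean_and_tokenize(text):
    """Clean a Malayalam headline and split it into tokens."""
    text = re.sub(r"<[^>]+>", " ", text)                   # drop HTML tags
    text = re.sub(r"\d+", " NUM ", text)                   # generalize numbers to "NUM"
    text = re.sub(r"[^\w\s]", " ", text)                   # drop special characters (keeps Malayalam letters)
    return text.split()                                    # simple whitespace tokenization

# `headlines` is assumed to be a list of raw news_headline strings loaded elsewhere.
tokenized_headlines = [clean_and_tokenize(h) for h in headlines]

# Train Word2Vec on the tokenized headlines to obtain 300-dimensional embeddings.
w2v_model = Word2Vec(sentences=tokenized_headlines, vector_size=300,
                     window=5, min_count=1, workers=4)

# Example: embedding vector of the first token of the first headline.
first_vector = w2v_model.wv[tokenized_headlines[0][0]]
```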
§.§.§ Image Feature Extraction

Images included with news items often carry visual information that complements the textual content. To exploit this, we use the VGG16 transfer learning model to extract useful image features. Because it was pre-trained on a large image dataset, this model can recognize a wide range of visual patterns. Image feature extraction involves loading and preprocessing the images from the dataset: images are resized to match the input size of the VGG16 model and converted to the appropriate format. After preprocessing, we pass the images through the VGG16 model and take the output of its last layer as the image features. To ensure consistency with the text data, we flatten the extracted image features and pad them if necessary to match the maximum sequence length used for the text data. This alignment is crucial for the subsequent fusion and analysis.

§.§ Classifier and Model Selection

§.§.§ LSTM-Based Text Model

The core of our text-based fake news detection lies in an LSTM-based model, designed to process the textual content effectively. The architecture of our LSTM text model consists of the following layers:

* Embedding layer: converts textual data into numerical vectors, enabling the model to process the text effectively. It transforms words into dense vectors in which similar words have similar representations.
* Bidirectional LSTM layer (return sequences = True): uses bidirectional Long Short-Term Memory (LSTM) units with the 'return sequences' parameter set to True. Bidirectional LSTMs capture contextual information in both the forward and backward directions, allowing the model to understand the context of the text in greater detail.
* Bidirectional LSTM layer: a second bidirectional LSTM layer that further refines the contextual representation of the text.
* Dense layer (64 units, ReLU activation): introduces non-linearity via the Rectified Linear Unit (ReLU) activation. Its 64 units allow the model to capture detailed relationships within the data.
* Dropout layer (dropout rate 0.5): randomly sets a portion of the input units to zero during training to prevent overfitting; we use a dropout rate of 0.5.
* Dense layer (output: num classes, softmax activation): the number of units in the final dense layer matches the number of classes ('num classes'), and the softmax activation function produces class probabilities.

§.§.§ VGG16 Transfer Learning Model

For image-based fake news detection, we employ a VGG16-based model, leveraging the power of transfer learning.
The architecture of our image model includes the following layers:

* Flatten layer: reshapes the output of the VGG16 base into a one-dimensional vector, preparing it for further processing.
* Dense layer (256 units, ReLU activation): captures high-level image features, allowing the model to recognize complex visual patterns.
* Dropout layer (dropout rate 0.5): applied with a rate of 0.5 to prevent overfitting and maintain generalization.
* Dense layer (64 units, ReLU activation): captures more specific image details and patterns.
* Dense layer (2 units, softmax activation): a final dense layer with 2 units and softmax activation produces output probabilities for the two classes (real and fake).

§.§.§ Text-Image Fusion Model

Our proposed fusion model comprises the following layers:

* Input 1: output of the LSTM text model, representing the textual features extracted from news headlines and processed through the text model's layers.
* Input 2: output of the VGG16-based image model, embodying the image features extracted from the associated news article images.
* Concatenate layer: combines the outputs of the text and image models, ensuring that both textual and visual information is taken into account in the subsequent analysis.
* Dense layer (64 units, ReLU activation): captures relationships and patterns that arise from the fusion of the two feature sets.
* Dense layer (output: num classes, softmax activation): produces the model's output; the number of units matches the number of classes, and softmax yields class probabilities.

§ EXPERIMENTS, RESULTS, AND DISCUSSIONS

This section describes the experiments conducted with our proposed approach. All experiments were run on an NVIDIA A100 GPU using Python 3.9 with TensorFlow. Label encoding was done with the scikit-learn library, and the embeddings were generated using gensim and a local copy of VGG16. The number of epochs was set to 10 and the word-vector size to 300 for this experiment. The LSTM layers in the text-based model were configured with 128 units for the first layer and 64 units for the second layer. The number of units in an LSTM layer affects the model's capacity to capture and learn from sequential data: a higher number allows the model to capture more intricate patterns but can also lead to overfitting. A dense layer with 64 units was added after the LSTM layers, followed by a dropout rate of 0.5. For the VGG-16 model, one dense layer had 256 units, another had 64 units, and the output layer had 2 units, one for real news and one for fake news, making it a binary classification task. The dropout rate was set to 0.5, and the Adam optimizer was chosen for the image model.
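To make this configuration concrete, the following Keras sketch assembles the text branch, the image branch, and the fusion head with the unit counts and dropout rates listed above. It is an illustrative reconstruction, not the authors' code: the input shapes, vocabulary size, and the decision to merge the separately described text and image models into a single graph are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN = 40                 # assumed maximum headline length in tokens
VOCAB_SIZE = 20000           # assumed vocabulary size
EMB_DIM = 300                # word-vector size used in the experiments
NUM_CLASSES = 2              # fake (0) vs. true (1)
VGG_FEATURES = 7 * 7 * 512   # assumed size of the flattened VGG16 feature vector

# Text branch: embedding + two bidirectional LSTM layers (128 and 64 units).
text_in = layers.Input(shape=(MAX_LEN,), name="headline_tokens")
t = layers.Embedding(VOCAB_SIZE, EMB_DIM)(text_in)
t = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(t)
t = layers.Bidirectional(layers.LSTM(64))(t)
t = layers.Dense(64, activation="relu")(t)
t = layers.Dropout(0.5)(t)

# Image branch: dense layers on top of pre-extracted, flattened VGG16 features.
img_in = layers.Input(shape=(VGG_FEATURES,), name="vgg16_features")
im = layers.Dense(256, activation="relu")(img_in)
im = layers.Dropout(0.5)(im)
im = layers.Dense(64, activation="relu")(im)

# Fusion head: concatenate both branches and classify.
f = layers.Concatenate()([t, im])
f = layers.Dense(64, activation="relu")(f)
out = layers.Dense(NUM_CLASSES, activation="softmax")(f)

model = Model(inputs=[text_in, img_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit([token_matrix, image_features], labels, epochs=10)  # data arrays prepared elsewhere
```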
Adam is an adaptive learning rate optimization algorithm that performs well across a wide range of deep learning tasks.

To create a prototype of the research output, we also implemented a minimum working version and hosted it using the Streamlit framework, which provides an intuitive and user-friendly interface for end users.

§ CONCLUSION AND FUTURE WORK

This work proposed a multi-modal framework for identifying fake news in the Malayalam language. We proposed and implemented a hybrid approach that combines textual and image features to identify fake news from multi-modal sources. Our approach achieved reasonably good accuracy in the classification task, comparable with baseline models. As the results are promising, the authors may continue working on improving the accuracy of the model by training it on more data covering a wider range of news domains. Incorporating explainability into the model is another interesting research direction.
 | http://arxiv.org/abs/2310.18263v1 | {
"authors": [
"Adhish S. Sujan",
"Ajitha. V",
"Aleena Benny",
"Amiya M. P.",
"V. S. Anoop"
],
"categories": [
"cs.CL",
"cs.CY"
],
"primary_category": "cs.CL",
"published": "20231027165129",
"title": "MalFake: A Multimodal Fake News Identification for Malayalam using Recurrent Neural Networks and VGG-16"
} |
Fast and simple unrooted dynamic forests

Benjamin Aram Berendsohn (Freie Universität Berlin, Germany; supported by a DFG grant)

A dynamic forest data structure maintains a forest (and associated data like edge weights) under edge insertions and deletions. Dynamic forests are widely used to solve online and offline graph problems. Well-known examples of dynamic forest data structures are link-cut trees <cit.> and top trees <cit.>, both of which need O(log n) time per operation. While top trees are more flexible and arguably easier to use, link-cut trees are faster in practice <cit.>. In this paper, we propose an alternative to link-cut trees. Our data structure is based on search trees on trees (STTs, also known as elimination trees) and an STT algorithm <cit.> based on the classical Splay trees <cit.>. While link-cut trees maintain a hierarchy of binary search trees, we maintain a single STT. Most of the complexity of our data structure lies in the implementation of the STT rotation primitive, which can easily be reused, simplifying the development of new STT-based approaches. We implement several variants of our data structure in the Rust programming language, along with an implementation of link-cut trees for comparison. Experimental evaluation suggests that our algorithms are faster when the dynamic forest is unrooted, while link-cut trees are faster for rooted dynamic forests.

§ INTRODUCTION

Maintaining a dynamically changing forest along with data associated with vertices, edges, or trees is a well-known problem with a forty-year history. Sleator and Tarjan <cit.> first introduced a data structure (commonly called link-cut trees) for this task, with worst-case running time O(log n) per operation, where n is the number of vertices in the forest. The same authors proposed a simplified amortized variant of the data structure using their Splay trees <cit.>. Several alternative data structures have since been proposed, including topology trees <cit.>, ET-trees <cit.>, RC-trees <cit.> and top trees <cit.>. Top trees are more flexible and thus more widely applicable than link-cut trees. However, an experimental evaluation by Tarjan and Werneck <cit.> suggests that link-cut trees, while less flexible, are faster than top trees by a factor of up to four, likely due to their relative simplicity.

Dynamic forest data structures have a large number of applications. Link-cut trees in particular have been used as a key ingredient in algorithms and data structures including, but not limited to: minimum cut <cit.>, maximum flow <cit.>, minimum-cost flow <cit.>, online minimum spanning forests <cit.>, online lowest common ancestors <cit.>, online planarity testing <cit.>, and geometric stabbing queries <cit.>.

In this paper, we design and implement data structures that can serve as a drop-in replacement for link-cut trees, based on search trees on trees (STTs, described in <ref>). We experimentally compare our data structures with link-cut trees.

We focus on maintaining unrooted forests. More precisely, our data structures maintain an edge-weighted dynamic forest and support the following three operations:

* link(u, v, w) – Adds an edge between the vertices u and v with weight w. Assumes this edge did not exist beforehand.
* cut(u, v) – Removes the edge between u and v. Assumes this edge existed beforehand.
* compute_path_weight(u, v) – Returns the sum of weights of edges on the path between u and v, or None if u and v are not in the same tree.

As weights, our implementation allows arbitrary commutative monoids. For example, when using max as the monoid operation, the compute_path_weight(u, v) method returns the maximum edge weight on the path between u and v. We note that additional operations like increasing the weight of each edge on a path, or maintaining vertex weights and certain related properties, are also possible, but omitted for simplicity.

Rooted vs. unrooted forests. Some applications, like the maximum flow algorithms mentioned above, require maintaining rooted forests. In that case, a find_root(v) operation is available, which returns the root of the tree containing v. Moreover, adding arbitrary edges is not allowed; the link(u, v) operation requires that u is the root of its tree, and makes v the parent of u.

The basic variant of link-cut trees maintains rooted forests. To maintain unrooted forests, an additional operation evert(v) is used, which makes v the root of its tree and thus enables arbitrary links. While asymptotic performance is not affected, evert does come with additional bookkeeping that may impact performance in practice.

For our data structures, the opposite is true: they "natively" implement unrooted forests, and maintaining rooted forests is possible only with some overhead. Indeed, in our experimental evaluation, link-cut trees are faster for rooted forests (without evert), and our approach is faster for unrooted forests. When explicitly maintaining rooted forests with changing roots (via evert), link-cut trees again appear to be slower. We further discuss similarities and differences between link-cut trees and our data structures in <ref>.

Search trees on trees. Our approach is based on recent results <cit.> for search trees on trees (STTs), also known as elimination trees. STTs are a generalization of binary search trees (BSTs) where, intuitively, the search space is a tree, and each query is a search for a vertex. We formally define STTs in <ref>.

Previous papers on STTs <cit.> have concentrated on theoretical models, where the task is to answer vertex queries, analogous to searching for keys in a BST. The underlying tree is not modified. As in the BST setting, we distinguish between the static and the dynamic STT model, both of which we now briefly describe.

In the static model, we are given an input distribution and need to build an STT that can answer queries with low expected cost (according to the distribution). Optimum static BSTs on n nodes can be computed in O(n^2) time <cit.> and approximated in linear time <cit.>. For optimal static STTs, no exact polynomial-time algorithm is known, though a PTAS <cit.> and an O(n log n)-time 2-approximation <cit.> are known.

In the dynamic model, we are allowed to modify the STT after each query. These modifications are done using STT rotations, a straightforward generalization of BST rotations. Typically we assume the online dynamic model, where we are not provided with any information about the input sequence beforehand, as opposed to the offline case, where we know the input sequence in advance. The central open question for dynamic BSTs and STTs (called dynamic optimality) is whether there exists a constant-competitive online algorithm, i.e., an online algorithm whose performance matches the best offline algorithm, up to a constant factor.

For BSTs, several online algorithms are conjectured to be constant-competitive, most prominently Splay <cit.> and Greedy <cit.>.
Both of these algorithms have many useful properties, among them static optimality, which means that they match the optimum static tree on every input sequence [Note that the static optimum "knows" the queries in advance.]. The best known upper bound for competitiveness is achieved by Tango trees <cit.>, with a competitive ratio of O(log log n).

Bose, Cardinal, Iacono, Koumoutsos, and Langerman <cit.> generalized Tango trees to the STT model, thus providing an O(log log n)-competitive algorithm, where n is the number of vertices in the tree. Berendsohn and Kozma <cit.> generalized Splay trees to STTs and proved static optimality.

Dynamic forests using STTs. A common property of dynamic BST algorithms such as Splay is that after finding a node, it is brought to the root (e.g., using the eponymous splay operation). The same is true for Berendsohn and Kozma's Splay generalization (called SplayTT) <cit.>. In this paper, we use SplayTT to implement dynamic forest data structures. The amortized running time per operation is O(log n), matching the asymptotic performance of known data structures.

We remark that many previous dynamic forest implementations are also based on or inspired by Splay, such as the amortized variant of link-cut trees <cit.> and some top tree implementations <cit.>. We present the first STT-based approach.

Our implementation is highly modular: it consists of (i) a basic implementation of STTs [More precisely, 2-cut STTs, defined in <ref>.] and STT rotations, (ii) a routine NodeToRoot that brings an STT node to the root via rotations, and (iii) an implementation of the operations link, cut, and compute_path_weight based on NodeToRoot. Most importantly, the NodeToRoot implementation can easily be replaced with a different one. We present four variants of NodeToRoot, three based on SplayTT and one simpler algorithm. Future NodeToRoot implementations, developed in the dynamic STT model, would automatically provide a new dynamic forest algorithm.

We implement our data structures in the Rust programming language. [The source code can be found at <https://github.com/berendsohn/stt-rs>.] The modularity described above is achieved using generics, resulting in an easily extendable library. For comparison, we also implement the amortized variant of Sleator and Tarjan's link-cut trees <cit.>, and some very simple linear-time data structures. We experimentally compare all implementations.

In <ref>, we define STTs and related concepts. In <ref>, we present a basic data structure that maintains a 2-cut search tree on a fixed tree under rotations. In <ref> and <ref>, we show how to implement the dynamic forest operations, using multiple 2-cut STTs and assuming a black-box NodeToRoot implementation. In <ref>, we present our four NodeToRoot algorithms. In <ref>, we present our experimental results, and in <ref>, we discuss our findings and propose some open questions.

§ PRELIMINARIES

Let G be a graph. The sets of vertices, resp. edges, of G are denoted by V(G), resp. E(G). The subgraph of G induced by the vertex set U ⊆ V(G) is denoted by G[U].

Below, we repeat definitions and observations from previous papers <cit.>.

Search trees on graphs. A search tree on a connected graph G is a rooted tree T that is constructed as follows: choose an arbitrary vertex r as the root; then, recursively construct search trees on each connected component of G ∖ r and attach them to r as subtrees. We denote by V(T) the set of nodes in T. Observe that V(T) = V(G). For each node v ∈ V(T), we denote the subtree of T rooted at v by T_v.
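To make the recursive definition concrete, the following is a minimal Python sketch of how a search tree on a connected graph can be built; the networkx library and the root-selection rule pick_root are illustrative assumptions, and any choice of roots yields a valid search tree.

```python
import networkx as nx

def build_search_tree(G, pick_root):
    """Return a parent map describing a search tree on the connected graph G.
    pick_root(vertices) chooses the root among a set of vertices."""
    parent = {}

    def build(vertices, par):
        r = pick_root(vertices)
        parent[r] = par
        # Recurse on the connected components of G minus the chosen root.
        for comp in nx.connected_components(G.subgraph(vertices - {r})):
            build(set(comp), r)

    build(set(G.nodes), None)
    return parent

# Example: a search tree on a 5-vertex path, always choosing the smallest
# remaining vertex as root (which simply yields a rooting of the path).
tree = build_search_tree(nx.path_graph(5), min)
```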
The root path of v in T is defined as the path in T from v to the root of T.

A search tree T on a graph G satisfies the following properties:

* For each edge {u,v} ∈ E(G), either u is an ancestor of v, or v is an ancestor of u;
* For each node v ∈ V(T), the subgraph G[V(T_v)] is connected.

It can be shown that the above two properties in fact fully characterize search trees on G. If G is a path, then search trees on G essentially correspond to binary search trees on |V(G)| nodes. Note, however, that children are unordered in STTs.

Cuts and boundaries. Let G be a graph and H be a subgraph of G. The cut of H, denoted cut_G(H), is the set of edges between H and G ∖ H. The outer boundary of H, denoted ∂_G(H), is the set of vertices in G ∖ H that are adjacent to some vertex in H. Note that |∂_G(H)| ≤ |cut_G(H)|, and |∂_G(H)| = |cut_G(H)| if G is a tree. If T is a search tree on G and v ∈ V(T), we write ∂(T_v) = ∂_G(V(T_v)) for short.

The following observations will be useful later. Let p be a node in an STT T, and let v, v' be children of p. Then:

* p ∈ ∂(T_v);
* ∂(T_v) ⊆ ∂(T_p) ∪ {p};
* ∂(T_v) ∩ ∂(T_v') = {p}.

We remark that (<ref>) follows from the lack of cycles in G and is not true for search trees on graphs in general.

Rotations. Let v be a node in a search tree T on a graph G, and let p be the parent of v. A rotation of v with its parent (also called a rotation at v) is performed as follows. Make p a child of v and make v a child of the previous parent of p, if it exists (otherwise, make v the root). Then, every child c of v with p ∈ ∂(T_c) is made a child of p. Observe that if G is a tree, then only one child of v can change parent in this way; otherwise, p would be part of a cycle. <Ref> shows a rotation in an STT.

k-cut search trees. Let T be a search tree on a graph. A node v ∈ V(T) is called k-cut if |∂(T_v)| ≤ k. In an STT, this means that at most k edges go from V(T_v) to the ancestors of v. The search tree T itself is called k-cut if every node of T is k-cut.

Observe that 1-cut STTs are simply rootings of the underlying tree. Our data structures are based on 2-cut STTs. Intuitively, 2-cut STTs "locally" behave like BSTs, which allows applying familiar BST techniques. In particular, a node, its parent, and its grandparent in a 2-cut STT must lie on the same path in the underlying tree. Recall that BSTs are search trees on paths, so rotations on these three nodes behave like BST rotations. This observation is key in Berendsohn and Kozma's Splay generalization <cit.>. In the following, we prove a slightly stronger property.

Lemma. Let v be a node in a 2-cut search tree T on a tree G. Let p be the parent of v and let a ∈ ∂(T_p). Then v, p, a must lie on a common path in G (though not necessarily in that order).

Proof. Suppose not. Then there is a node x ∉ {v,p,a} such that v, p, a are in pairwise distinct components of G ∖ x. Clearly, x cannot be an ancestor of p, since otherwise p and v would be in different subtrees. Since v is a child of p, we know that x must be a descendant of v. Let c be the child of v with x ∈ V(T_c). We trivially have v ∈ ∂(T_c). Since there is a path between x and p that does not contain v, we also have p ∈ ∂(T_c).
Finally, there is a path between x and a that does not contain v or p, implying a ∈ ∂(T_c). But then |∂(T_c)| ≥ 3, violating the 2-cut property.

§ IMPLEMENTING 2-CUT STTS

Previous works <cit.> have not given an explicit implementation of STTs as a data structure. In this section, we show how to efficiently maintain a 2-cut search tree on a fixed tree G under rotations. [See stt/src/twocut/basic.rs in the source code.]

Let T be a 2-cut search tree on a tree G, and let v be a node in T. We call v a separator node if |∂(T_v)| = 2. Observe that a separator node v lies on the path (in G) between the two nodes in ∂(T_v), hence "separates" them. We call a separator node v a direct separator node if ∂(T_v) contains precisely the parent and grandparent of v in T, and an indirect separator node otherwise.

Our representation of 2-cut STTs is based on maintaining the separator children of nodes. It turns out that each node can have at most one direct separator child and at most one indirect separator child.

Lemma. Each node in a 2-cut STT has at most one child that is a direct separator node, and at most one child that is an indirect separator node.

Proof. Suppose a node u has two direct separator children v, v'. Then ∂(T_v) = ∂(T_v') = {u, p}, where p is the parent of u. But ∂(T_v) ∩ ∂(T_v') = {u} by <ref> (<ref>), a contradiction. Now suppose u has two indirect separator children v, v'. Then u has a parent p, and there are distinct ancestors a_1, a_2 of p such that ∂(T_v) = {a_1, u} and ∂(T_v') = {a_2, u} by <ref> (<ref>). Thus, {p, a_1, a_2} ⊆ ∂(T_u), contradicting that T is 2-cut.

<Ref> suggests the following representation of an STT T. For each node v, we store the following pointers (the names here are descriptive):

* parent(v): the parent node of v, or None if v is the root.
* dsep_child(v): the unique child of v that is a direct separator node, or None if v has no such child.
* isep_child(v): the unique child of v that is an indirect separator node, or None if v has no such child.

With these pointers, T uniquely represents G (see <ref>).

It is important to note that not all rotations are actually possible while maintaining the 2-cut property. Berendsohn and Kozma proved the characterization given below.

Lemma <cit.>. Let T be an STT and let v ∈ V(T) with parent p ∈ V(T). Rotating v with p maintains the 2-cut property if and only if |∂(T_v)| ≠ 1 or |∂(T_p)| ≠ 2.

Rotations satisfying the requirements of <ref> can be implemented in constant time using the above data structure. This (rather technical) procedure is found in <ref>.

§ LINKING AND CUTTING

In this section, we show how to implement the operations link and cut to add and remove edges. [See stt/src/twocut/mod.rs]

The underlying forest G is maintained as a collection of 2-cut STTs, which we call a search forest. For a node v in a search forest F, we denote by F_v the subtree rooted at v. Since we do not allow adding and removing nodes, we maintain all nodes in a fixed-size array. The structure of each STT is represented by the node pointers described in <ref>.

We assume that we have a black-box algorithm NodeToRoot that, given a node, brings it to the top of its tree with some sequence of rotations. (Implementations are presented in <ref>.) We additionally assume that NodeToRoot is stable, which essentially means that a call to NodeToRoot does not move the previous root around too much. Formally, an algorithm for NodeToRoot is called stable if, in the resulting search tree, the depth of the previous root r is bounded by some constant, and all nodes on the root path of r are 1-cut. The stability property simplifies the implementation of link and cut, but is not strictly necessary (see <ref>). The implementations of link and cut are shown in <ref>.
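Since the referenced pseudocode figure is not reproduced here, the following Python-style sketch shows one way to realize link and cut on top of a stable, black-box NodeToRoot, consistent with the description below; it is only an illustration under assumed field names (the paper's actual implementation is in Rust).

```python
class SearchForest:
    def __init__(self, n):
        self.parent = [None] * n   # parent pointers; separator-child pointers omitted in this sketch

    def node_to_root(self, v):
        """Black-box, stable rotation strategy (see the heuristics section)."""
        raise NotImplementedError

    def link(self, u, v, w):
        # Assumes u and v lie in different trees; the weight w is handled later.
        self.node_to_root(u)       # u becomes the root of its search tree
        self.node_to_root(v)       # not needed for correctness, but needed for the analysis
        self.parent[u] = v         # attach u below v

    def cut(self, u, v):
        # Assumes the edge {u, v} exists in the underlying forest.
        self.node_to_root(u)
        self.node_to_root(v)       # by stability, u is now 1-cut with parent v
        self.parent[u] = None      # detaching u removes the edge {u, v}
```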
Note that we ignore the supplied weight w in link for now.

We now argue the correctness of the two procedures. Below, G and G' denote the underlying forest before and after the operation.

* Consider a call link(u, v, w). Let F be the search forest after the two calls to NodeToRoot, and let F' be the search forest after link. If we only consider parent pointers, then F' is clearly a valid search forest on G'. It remains to show that the child pointers are still valid. For this, observe that for every node x ∈ V(F) ∖ {u}, we have ∂(F_x) = ∂(F'_x), and we have ∂(F_u) = ∅ and ∂(F'_u) = {v}. Thus, no node becomes a separator child or stops being one, and no direct separator node becomes an indirect one or vice versa. Hence, all child pointers stay valid. Observe that the call NodeToRoot(v) is not necessary for correctness. However, it is important for the complexity analysis in <ref>.

* Now consider a call cut(u, v). Again, let F be the search forest after the two calls to NodeToRoot, and let F' be the search forest after cut. Stability implies that u is 1-cut in F. Since v is an ancestor of u and {u,v} ∈ E(G), we have ∂(F_u) = {v}, implying that v is the parent of u. Thus, setting parent(u) ← None removes the edge {u,v} from the underlying forest. Again, the boundaries of subtrees other than F_u do not change between F and F', implying that no further pointer changes are necessary to make F' valid.

The running time of both operations is dominated by the calls to NodeToRoot, which we later show have amortized complexity O(log n) for our SplayTT-based variants (see <ref>), where n is the number of vertices in the underlying forest.

Non-stable implementations. Our Rust implementation also supports non-stable implementations of NodeToRoot, although in this case a second procedure NodeBelowRoot(v) is required, which rotates v directly below the current root. The implementations of link (<ref>) and cut (see <ref>) are easily adapted.

§ MAINTAINING EDGE WEIGHTS

In this section, we show how to maintain edge weights in 2-cut STTs under rotations and how to implement the compute_path_weight operation. [In the source code, the weight update procedures are found in stt/src/twocut/node_data.rs; implementations of compute_path_weight (for stable and non-stable NodeToRoot) are found in stt/src/twocut/mod.rs] For simplicity, we assume here that the edge weights come from a group instead of a monoid. A somewhat more complicated way to handle monoids is shown in <ref>.

Let F be a 2-cut search forest on a forest G with edge weights from a commutative group (W, +). The weight of a path in G is defined as the sum of the weights of its edges. For two vertices u, v ∈ V(G) in the same tree, let d(u,v) denote the weight of the unique path between u and v, i.e., the distance between u and v.

For each node v, we store a field dist(v) indicating the distance between v and the parent of v in F. If v is the root, then dist(v) = ∞.

Rotations. Consider a rotation of v with its parent p. Let c be the direct separator child of v, or None if v has no direct separator child. Let g be the parent of p, or None if p is the root. Let F and F' be the search forest before and after the rotation. We denote by dist(·) and dist'(·) the values before and after the rotation.

* In F', the parent of p is v, so dist'(p) = d(p,v) = dist(v).
* If p is the root of F, then v is the root of F', so dist'(v) = ∞.
* If p is not the root, then g exists, and dist'(v) = d(v,g). Since F is 2-cut, the nodes v, p, g lie on a common path (by <ref>).
  * If v is between p and g on the path, then v is a direct separator (by definition). We have dist'(v) = d(v,g) = d(p,g) - d(p,v) = dist(p) - dist(v).
  * If p is between v and g on the path, then v is 1-cut or an indirect separator.
We have dist'(v) = d(v,g) = d(v,p) + d(p,g) = dist(v) + dist(p).
  * g cannot be between v and p, since then v and p would be in different subtrees of g.
* Suppose c exists. Since c is a direct separator in F, c lies on the path between v and p, so we have dist'(c) = d(c,p) = d(v,p) - d(v,c) = dist(v) - dist(c).

Since only the parents of v, p, and (possibly) c change, an update procedure for dist(·) after a rotation follows from the observations above.

Linking and cutting. Aside from rotations, at the end of link(u, v, w), we make v the parent of u; there, we can simply set dist(u) ← w. Similarly, at the end of cut(u, v), the node u is removed from its parent, and we set dist(u) ← ∞.

Computing path weight. Finally, we implement compute_path_weight(u, v) as follows. First, we call NodeToRoot(u), and then NodeToRoot(v). Afterwards, we follow parent pointers to check whether v is the root of the search tree containing u. If not, we return None. If yes, we return the sum ∑_{x ∈ P ∖ {v}} dist(x), where P is the root path of u.

We now argue that this procedure is correct. Let F be the search forest after the two calls to NodeToRoot. Clearly, v is the root of its search tree in F. If u is in a different search tree, the algorithm correctly returns None. Otherwise, u is a descendant of v. Let u = u_1, u_2, …, u_k = v be the root path of u in F. Stability of NodeToRoot implies that u_1, u_2, …, u_{k-1} are all 1-cut. This means that for each i ∈ [k-2], the path from any node in V(F_{u_i}) to any node outside of V(F_{u_i}) must contain u_{i+1}. In particular, the path from u to u_{i+2} contains u_{i+1}. Thus, by induction, the path from u to v = u_k contains u_1, u_2, …, u_k, in that order, and its weight is ∑_{i=1}^{k-1} d(u_i, u_{i+1}) = ∑_{i=1}^{k-1} dist(u_i).

The running time of compute_path_weight is dominated by the calls to NodeToRoot, since stability of NodeToRoot implies that k is bounded.

§ HEURISTICS FOR NODETOROOT

In the following sections, we describe multiple implementations of the NodeToRoot procedure used by link, cut, and compute_path_weight.

§.§ MoveToRootTT

One of the simplest dynamic BST algorithms is the move-to-root heuristic <cit.>. After finding a node v, it simply rotates v with its parent until v becomes the root.

This algorithm does not work for STTs as-is, since not all rotations are allowed. However, if a rotation of v with its parent p is not allowed, then |∂(T_v)| = 1 and |∂(T_p)| = 2 by <ref>, implying that p is not the root and, in particular, p can be rotated with its parent. Thus, we can bring v to the root by repeatedly rotating at v or, if that is not possible, rotating at its parent, until v is the root. We call this algorithm MoveToRootTT (see <ref>). [Found in stt/src/twocut/mtrtt.rs in the source code.]

Observe that if the underlying tree G is a path, then all rotations are possible. Thus, in this case, MoveToRootTT is equivalent to the classical move-to-root algorithm. It is known that move-to-root performs poorly in the worst case, but well on uniformly random inputs <cit.>. Our experiments suggest the same for MoveToRootTT. In fact, probably due to its simplicity, it outperforms more complicated algorithms on uniformly random inputs.

§.§ SplayTT

In this section, we present the SplayTT algorithm of Berendsohn and Kozma <cit.> and two simple variants. It is based on the classical Splay algorithm <cit.>, which we describe first.

Splay can be seen as a slightly more sophisticated version of the move-to-root algorithm. After finding a node v, it is brought to the root by a series of calls to the procedure SplayStep(v). If v has no grandparent, then SplayStep(v) simply rotates v with its parent p (this is called a ZIG step). If the value of v is between the values of p and its grandparent g, then SplayStep(v) rotates twice at v (ZIG-ZAG step).
Finally, if the value of v is smaller or larger than both the values of p and g, then SplayStep(v) rotates first at p and then at v (ZIG-ZIG step). Afterwards, v is an ancestor of both p and g, so v is eventually brought to the root.

In 2-cut STTs, SplayStep(v) can be applied basically as-is, since <ref> implies that v, p, and g are on a common path. If v is between p and g, then we execute a ZIG-ZAG step; otherwise, we execute a ZIG-ZIG step (see <ref>). However, simply applying SplayStep repeatedly may destroy the 2-cut property. The following lemma characterizes the situations where SplayStep(v) is allowed.

Lemma. Let v be a node in an STT T. If v is a child of the root of T, then SplayStep(v) preserves the 2-cut property. If v has a parent p and a grandparent g, then SplayStep(v) preserves the 2-cut property if and only if g is not a separator or both v and p are separators.

Proof. If v is a child of the root r, then SplayStep(v) performs a single rotation, which is valid (i.e., preserves the 2-cut property), since |∂(T_r)| = 0. Now suppose v is not a child of the root, and we apply SplayStep(v). Let T' be the search tree after the first rotation, and T'' be the search tree after the second rotation.

If SplayStep(v) executes a ZIG-ZIG step, then it first rotates p with g, and then v with p. The first rotation is invalid iff |∂(T_p)| = 1 and |∂(T_g)| = 2. The second rotation is invalid iff |∂(T'_v)| = |∂(T_v)| = 1 and |∂(T'_p)| = |∂(T_g)| = 2. So the ZIG-ZIG step is invalid iff g is a separator and at least one of v and p is not. This is precisely the negation of the condition stated in the lemma.

If SplayStep(v) executes a ZIG-ZAG step, then it rotates twice at v. The first rotation is invalid iff |∂(T_v)| = 1 and |∂(T_p)| = 2. This can never happen, since we only execute a ZIG-ZAG step if v is a separator. The second rotation is invalid iff |∂(T'_v)| = |∂(T_p)| = 1 and |∂(T'_g)| = |∂(T_g)| = 2. Assuming that v is a separator, this is again the negation of the stated condition.

<Ref> sketches a ZIG-ZAG, resp. ZIG-ZIG step. In the figure, x, y, and z represent different types of children that may or may not exist (and there may even be multiple non-separator children like x).

We now present three Splay-based algorithms that use the SplayStep(v) procedure to bring a node to the root. [All three variants are found in stt/src/twocut/splaytt.rs in the source code.]

Greedy SplayTT. Our first algorithm is similar to MoveToRootTT and is inspired by the top tree implementation of Holm, Rotenberg, and Ryhl <cit.>. Greedy SplayTT brings a node v to the root by repeatedly trying to execute SplayStep on v, its parent, and its grandparent. See <ref> for pseudocode.

The following lemma implies that Greedy SplayTT only performs valid rotations.

Lemma. Let v be a node in a 2-cut STT T with parent p and grandparent g. Then one of SplayStep(v), SplayStep(p), and SplayStep(g) can be executed.

Proof. Suppose all three calls are invalid. First, observe that g must have a parent h and a grandparent i; otherwise, g is not a separator, and SplayStep(v) would be allowed. Since SplayStep(v) is disallowed, g is a separator. Since SplayStep(p) is disallowed and g is a separator, h must also be a separator. But if both g and h are separators, then SplayStep(g) is allowed, a contradiction.
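Since the referenced pseudocode is not reproduced here, the following Python-style sketch shows SplayStep and the greedy NodeToRoot variant as described above; the primitives rotate, is_separator, is_direct_separator, parent, and grandparent are assumed to be provided by the 2-cut STT representation, and the sketch is an illustration rather than the paper's Rust code.

```python
def can_splay_step(T, v):
    """Condition of the lemma: SplayStep(v) preserves the 2-cut property."""
    p, g = T.parent(v), T.grandparent(v)
    if g is None:
        return p is not None                     # allowed iff v is a child of the root
    return (not T.is_separator(g)) or (T.is_separator(v) and T.is_separator(p))

def splay_step(T, v):
    p, g = T.parent(v), T.grandparent(v)
    if g is None:
        T.rotate(v)                              # ZIG
    elif T.is_direct_separator(v):               # v lies between p and g on the underlying path
        T.rotate(v); T.rotate(v)                 # ZIG-ZAG: rotate twice at v
    else:
        T.rotate(p); T.rotate(v)                 # ZIG-ZIG: rotate at p, then at v

def node_to_root_greedy_splay(T, v):
    """Greedy SplayTT: repeatedly splay at v, its parent, or its grandparent."""
    while T.parent(v) is not None:
        if can_splay_step(T, v):
            splay_step(T, v)
        elif can_splay_step(T, T.parent(v)):
            splay_step(T, T.parent(v))
        else:
            splay_step(T, T.grandparent(v))      # allowed by the lemma above
```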
It remains to show that Greedy SplayTT actually brings the given node to the root. For this, observe that performing SplayStep(x) for some node x decreases the depth of each child of x by at least one (see <ref>). Hence, the depth of each descendant of x is also decreased. Each SplayStep performed in NodeToRootGreedySplay(v) thus decreases the depth of v, eventually bringing it to the root.

Two-pass SplayTT. We now describe the algorithm of Berendsohn and Kozma <cit.>. Suppose we want to rotate v to the root. The idea is to first do one pass over the root path of v and remove all nodes that might inhibit rotations, and then splay v to the root. Notably, we do not use <ref>; instead, the first pass ensures that every rotation on the root path of v is valid afterwards.

The algorithm uses the following helper procedure MoveTo(x, y). Let x be a descendant of a node y. MoveTo(x, y) executes SplayStep(x) until y is the parent or grandparent of x. Then, if y is the grandparent of x, it executes a final rotation, so that x becomes a child of y.

We now describe the algorithm. Consider a non-root node x on the root path of v, and let p be the parent of x. Recall that a rotation at x is not allowed if and only if |∂(T_x)| = 1 and |∂(T_p)| = 2 (see <ref>). In this case, we call p a branching node. We first find all branching nodes b_1, b_2, …, b_k on the root path of v. We then call MoveTo(v, b_1), and subsequently MoveTo(b_i, b_{i+1}) for each i ∈ [k-1]. Finally, we splay b_k to the root by repeatedly calling SplayStep(b_k). This concludes the first pass. The second pass simply consists of repeatedly calling SplayStep(v), until v is the root.

It can be seen that only valid rotations are executed; we refer to Berendsohn and Kozma <cit.> for more details.

Local Two-pass SplayTT. We also implement a variant of Two-pass SplayTT that essentially does both passes at once. Bringing a node v to the root works by repeating the following. If possible, we call SplayStep(v). If not, then the parent or grandparent of v must be a branching node, and we (essentially) perform a SplayStep on it to bring it closer to the next higher branching node. Pseudocode for this variant is found in <ref>.

Note that this algorithm uses the condition of <ref> to determine whether SplayStep(v) can be executed. In some cases, this might "skip" a branching node that would have been handled separately by the non-local Two-pass SplayTT algorithm. Otherwise, the rotations executed in the local variant are the same as in the non-local variant (just ordered differently), and correctness follows similarly.

Stability. All four proposed algorithms (including MoveToRootTT) are stable, as we show in <ref>.

Running time. We now turn to the running-time analysis of the SplayTT variants described above. Recall that we want to achieve O(log n) amortized time per call to NodeToRoot.

We use the potential method <cit.>. Our potential function is essentially the sum-of-logs function of Sleator and Tarjan <cit.>. Let T be an STT, and let v ∈ V(T). We define ϕ(T_v) = c · log(|T_v| + 1), and Φ(T) = ∑_{v ∈ V(T)} ϕ(T_v), where c is a constant to be chosen later.

Since a SplayStep involves three nodes on a single path in the underlying tree, the following lemmas are easily generalized from the BST case.

Lemma. Let T be an STT, and let T' be produced from T by a single rotation at v ∈ V(T). Then Φ(T') - Φ(T) ≤ 3(ϕ(T'_v) - ϕ(T_v)).

Lemma. Let T be an STT, and let T' be produced from T by a ZIG-ZIG or ZIG-ZAG step at v ∈ V(T). Then Φ(T') - Φ(T) ≤ 3(ϕ(T'_v) - ϕ(T_v)) - 2c.

For Splay BSTs, the remainder of the analysis is easy. We keep executing splay steps at v.
Thus, if all steps are ZIG-ZIG or ZIG-ZAG steps, and if T^0, T^1, …, T^k is the sequence of trees produced, we have

Φ(T^k) - Φ(T^0) ≤ ∑_{i=0}^{k-1} ( 3( ϕ(T^{i+1}_v) - ϕ(T^i_v) ) - 2c ) = 3( ϕ(T^k_v) - ϕ(T^0_v) ) - 2ck ≤ 3c · log(|V(T)| + 1) - 2ck.

Setting c to the running time of a single rotation yields the desired amortized running time O(log |V(T)|). The very last step might be a ZIG, but this increases the amortized running time only by an additive constant.

In our SplayTT variants, the splay steps do not produce a single telescoping sum as above. However, we can split the splay steps into a constant number of sets that do telescope nicely.

For Two-pass SplayTT, we refer to Berendsohn and Kozma's analysis <cit.>. Essentially, each pass produces a telescoping sum adding up to O(log n). While they use a different potential function (which is necessary to prove static optimality), replacing their analysis of SplayStep with <ref> yields an overall O(log n) amortized running time. Local Two-pass SplayTT only skips the removal of some branching nodes, which does not affect the analysis.

For Greedy SplayTT, we use a different analysis.

Lemma. Performing NodeToRoot with Greedy SplayTT has amortized running time O(log n), where n = |V(T)|.

Proof. Fix v and consider a call NodeToRoot(v) with Greedy SplayTT. Let T be the current tree before or after some SplayStep during the execution, and let p and g be the parent and grandparent of v in T (either is None if that node does not exist). For x ∈ V(T), define ψ(T_x) = 3 · ϕ(T_x), and define ψ(T_x) = 3c · log(n+1) for a non-existing node x = None. Let Ψ(T) = ψ(T_v) + ψ(T_p) + ψ(T_g). We claim that Ψ(T) is an upper bound for the amortized running time so far, for (essentially) every intermediate tree T, and in particular for the final tree. This clearly implies the lemma.

At the start, the claim is trivially true. Now suppose we execute some SplayStep. Let T, T' be the STT before and after the execution, and let p, p', g, g' be the parent and grandparent of v in the respective trees. Recall that the amortized running time of SplayStep(x) is 3(ϕ(T'_x) - ϕ(T_x)) = ψ(T'_x) - ψ(T_x).

* If we call SplayStep(v), then the amortized cost is ψ(T'_v) - ψ(T_v). The nodes p and g are simply removed from the root path of v, so p' and g' are ancestors of g in T (or nonexistent). This implies that ψ(T_p) ≤ ψ(T'_{p'}) and ψ(T_g) ≤ ψ(T'_{g'}), so ψ(T'_v) - ψ(T_v) ≤ Ψ(T') - Ψ(T).

* Suppose we call SplayStep(p). Since SplayStep(v) was disallowed, we know (by <ref>) that at least one of v and p is not a separator in T, and g is a separator in T. If p is a separator in T, then v is not. This means that SplayStep(p) removes g and its parent h from the root path of v (i.e., v stays a child of p), and essentially the same analysis as in <ref> applies. If p is not a separator, then a ZIG-ZIG step at p is performed. If v is not a direct separator, again g and h are removed from the root path of v. Otherwise, we cannot guarantee that the invariant holds after this step. However, we can show that the step directly after it is a ZIG-ZAG step, and the invariant holds afterwards. <Ref> illustrates the situation.

The key insight is that g is a separator in T' (the tree after the current ZIG-ZIG step at p). To see this, observe that ∂(T_g) = {h, a} for some node a that is a proper ancestor of h. On the other hand, we have a ∉ ∂(T_p), since p is not a separator. Further, observe that V(T'_g) ⊇ V(T_g) ∖ V(T_p). Thus, we have a ∈ ∂(T'_g), so g is a separator in T'. Since v is still a direct separator in T' (note that ∂(T_v) = ∂(T'_v) = {p, g} by assumption), the next SplayStep will be a ZIG-ZAG at v. Call the resulting tree T'' (see <ref>).
We now prove that Ψ(T'') - Ψ(T) bounds the amortized running time t of the two SplaySteps, which is t = ψ(T'_p) - ψ(T_p) + ψ(T''_v) - ψ(T'_v). We have V(T_v) = V(T'_v) and V(T'_p) = V(T''_v), implying

ψ(T''_v) - ψ(T'_v) = ψ(T''_v) - ψ(T_v), and ψ(T'_p) - ψ(T_p) = ψ(T''_v) - ψ(T_p) ≤ ψ(T''_{p''}) - ψ(T_p).

Moreover, since V(T_h) = V(T''_v), we have ψ(T_g) < ψ(T_h) = ψ(T''_v) ≤ ψ(T''_{g''}), so ψ(T''_{g''}) - ψ(T_g) > 0. Thus,

t < ψ(T''_v) - ψ(T_v) + ψ(T''_{p''}) - ψ(T_p) + ψ(T''_{g''}) - ψ(T_g) = Ψ(T'') - Ψ(T),

so the running time of both SplaySteps together is bounded by the change in Ψ.

* Finally, suppose we call SplayStep(g). Then SplayStep at both v and p must be disallowed. The former implies that g is a separator, and the latter implies that p and g cannot both be separators, so p is not a separator (in T). Thus, SplayStep(g) simply removes h and the parent of h from the root path of v, so we have g = g' and p = p', and the potentials of v and p do not change. Hence, the amortized cost ψ(T'_g) - ψ(T_g) precisely matches the change Ψ(T') - Ψ(T).

It remains to analyze the potential increases caused by link, cut, and compute_path_weight, which we do below.

Theorem. Starting with a dynamic forest on n nodes without edges, using any of the above SplayTT variants, m operations link, cut, and/or compute_path_weight are performed in time O(n + m log n).

Proof. link, cut, and compute_path_weight each require up to two calls to NodeToRoot, along with a constant amount of additional work, for an amortized cost of O(log n). link and cut additionally change the tree structure at the end, which changes the potential. cut(u, v) only decreases the potential, since v loses a child. link(u, v) adds u as a child of v at the end. However, at that point, v is the root of its tree, hence only the potential of v increases, and by at most 3c · log n. Thus, the amortized time of every operation is O(log n). At the start, the forest has no edges, so all search trees consist of a single node each, for a total starting potential of O(n). This yields a total running time of O(n + m log n).

§.§.§ SplayTT vs. link-cut trees

Sleator and Tarjan's link-cut trees <cit.> maintain a decomposition of the underlying tree into paths and consist of a hierarchy of BSTs on these paths. Moving a node v to the root is performed roughly as follows. First, for each BST B between v and the root, splay the node v_B to the root of B, where v_B is the lowest node in B that is an ancestor of v in the overall link-cut tree. This shortens the path from v to the root, such that every node on that path comes from a different BST. Then, an operation called splice is performed, which splits and merges BSTs until the path from v to the root is contained in a single BST. Finally, v is splayed to the root.
This complicates the implementation, and we speculate that it is the main reason why our data structures outperform link-cut trees in our experiments with unrooted forests.§ EXPERIMENTAL EVALUATION We performed multiple experiments on our Rust implementation. We used an AMD Ryzen 5 2600X processor running Debian Bullseye at 3.6 GHz with 512 KB of L1 cache, 3 MB of L2 cache, and 16 MB of L3 cache. §.§ AlgorithmsWe now describe the data structures we implemented and compared in our experiments. Edge-weighted unrooted forests. Using our basic 2-cut STT data structure (<ref>) and the , , andprocedures described in <ref> together with one of the fouralgorithms (Greedy SplayTT, Two-pass SplayTT, Local two-pass SplayTT, and MoveToRootTT), we obtain four different implementations. We call these implementations Stable since they assume stability.As mentioned in <ref>, our Rust implementation also includes procedures for , , andthat do not assume stability, but additionally require theoperation. We have four correspondingimplementations as slight variants of the fourimplementations.[Found next to the respectiveimplementation in the source code.]We denote the resulting eight data structures as (Stable) Greedy Splay, (Stable) 2P Splay, (Stable) L2P Splay, and (Stable) MTR.Further, we have Link-cut[Found in stt/src/link_cut.rs in the source code.], an implementation of the amortized variant of Sleator and Tarjan's link-cut trees <cit.>, where the handling of edge weights is similar to the way described in <ref>.[Sleator and Tarjan only describe how to maintain vertex weights. Tarjan and Werneck <cit.> simulate edge weights by adding a vertex on each edge and maintaining vertex weights. We did not test this approach.]Finally, we have two linear-time data structures. 1-cut[Found in stt/src/onecut.rs] is a naive dynamic forest implementation that maintains a rooting of each tree (i.e., a 1-cut search tree on each tree). The dynamic forest operations are implemented as described above, where v repeatedly rotates the root with one of its children until v is the root. Petgraph[Found in stt/src/pg.rs] is a naive dynamic forest implementation using the Petgraph[<https://crates.io/crates/petgraph>] library, which appears to be the most popular graph library for Rust at the time of writing. We tested the other implementations using Petgraph as a reference. Rooted forests. We also implemented data structures maintaining rooted forests without edge weights. We support , ,and (depending on the experiment) . An extension of our STT-based data structures is sketched in <ref> and yields four variants Greedy Splay, 2P Splay, L2P Splay, and MTR. Since we use bothand , there are no stable variants.Link-cut is the same link-cut tree implementation as above. Ifis not needed, we disable any checks and modifications of the reverse bit (though the slight space overhead remains). Finally, Simple[Found in stt/src/rooted.rs] is a naive implementation that maintains the rooted forest explicitly via parent pointers. §.§ Experiments and resultsWe now describe our experiments and discuss their results. To reduce variance, we performed every (sub)experiment between ten and twenty times. This section only shows a selection of results. More detailed result tables are found in <ref>. All experiments can be reproduced by calling a single script provided with the source code; see the includedfile for more details. Uniformly random connectivity queries. 
In our first experiment, weights are empty, so the updating logic from <ref> is not required and u,v simply returns whether u and v are connected or not. This allows us to directly compare the dynamic forest implementations without edge weight handling.A list of queries is pre-generated, starting with an empty forest. For each query, we draw two vertices u,v uniformly at random; if u and v are not connected, we call u,v; otherwise, we either call u,v or callon some edge on the path between u and v, with probability 1/2 each. We then execute the list of queries once for each implementation.First, we compared all implementations for n ≤ 1000 vertices and m = 20n queries. Petgraph performed very badly (worse than the next-worst implementation by a factor of over 15 at n = 1000), so we excluded it from all further experiments. We then tried larger values of n up to 8000, with m = 100n.[The maximum value for n is chosen such that the asymptotically worse behavior of 1-cut is clearly visible, but the overall experiment still takes a reasonable amount of time. The same applies to the other experiments.] The results for n = 8000 are shown in the first column of <ref> (see <ref> in <ref> for more details).Our SplayTT variants consistently outperform Link-cut trees by up to 25%. Among them, the stable variants are usually slightly faster than the non-stable ones, and Greedy Splay/L2P Splay are slightly faster than 2P Splay. All of this points towards simple algorithms performing better in practice.The even simpler MTR and Stable MTR are faster than all Splay-based data structures, perhaps because of the uniformity of the input (as discussed in <ref>). The simple linear-time 1-cut data structure is faster for smaller values of n (up to around 3000, see <ref> in <ref>), but is the worst by some margin at n = 8000. Incremental MSF. Our second, more practical experiment consists of solving the incremental minimum spanning forest (MSF) problem.We are given the edges of a weighted graph one-by-one and have to maintain an MSF. Edges are never removed. A simple solution using dynamic forests works as follows. Whenever an edge {u,v} with weight w arrives, if u and v are in different components, add the edge to the forest. Otherwise, find the heaviest edge on the path from u to v, and if its weight is larger than w, replace it with the new edge.To find the actual heaviest edge instead of just its weight, we extend our edge weight monoid (, max) to also contain a heaviest edge. The result is still a monoid, hence our algorithms can be used without change. As a first experiment, we follow Tarjan and Werneck <cit.> and randomly generate inputs on n ≤ 10^6 vertices with m = 8n edges. Second, we use thedata set[Available under the ODC Attribution License at <https://ogb.stanford.edu/docs/linkprop/#ogbl-collab>] <cit.> to generate an input that might be closer to real-world applications. The data set consists of a set of authors and collaborations between authors, annotated with a year. We interpret this as a dynamically changing graph where the first collaboration creates an edge with weight 1, and each subsequent collaboration increases the weight of the edge. Inverting the edge weights yields a natural dynamic MSF problem, with the additional allowed operation of decreasing an edge weight, which can be easily implemented by first removing the edge (if it exists in the current MSF), and then adding it again with the new weight. 
The resulting input has 235 868 vertices and 1 179 052 queries.We also compare the online algorithms with the Petgraph library's implementation of Kruskal's offline algorithm.Kruskal's algorithm outperforms the online algorithms by a large factor (this is expected, since it is offline and less general). Otherwise, the results of this experiment are similar to the uniformly random query experiments, except that Stable Greedy Splay is now clearly the fastest among the Splay-based data structures. It is not clear why this is not the case in the previous experiment, but we note that Stable Greedy Splay is our simplest SplayTT-based data structure. The results of theexperiment are similar except that the the difference between stable and non-stable implementations does not exist, for unknown reasons. Random queries with variable probability of . Informal experiments with a naive fully-dynamic connectivity algorithm lead us to believe that Link-Cut performs better compared to our approaches whenqueries are common (and thus the reverse bit is rarely changed). Hence, we repeated the first experiment (<ref>) with n = 5000, except that the probability p of generating aquery (instead of a ) is variable. <Ref> shows that the performance of link-cut gets closer to the STT data structures as p approaches 1 (even slightly outperforming the weaker 2P Splay variant), confirming our suspicion. Degenerate queries. MTR and Stable MTR outperform the other algorithms on uniform queries, despite having asymptotic worst-case performance of Θ(n) per operation. To experimentally confirm the worst-case behavior, we create a path G of n ≤ 10 000 nodes v_1, v_2, …, v_n, and then call v_i, v_n for all i ∈ [n] in order. While the queries have strong locality, the two vertices v_i, v_n are very far from each other on average. All Splay-based approaches are able to exploit the locality and outperform the linear-time data structures (MTR, Stable MTR, and 1-cut) by a factor of over 100 when n = 10 000.To check how “isolated” our degenerate example is, we performed the following “noisy” experiment. Fixing n = 5000, for each i ∈ [n], we call v_j, v_n, where j = i + ⌊ x ⌋ and x is drawn from a normal distribution with mean 0 and standard deviation σ, for some values σ≤ 300. (See <ref>.) As expected, 1-cut still performs very badly, since the added noise does not change the expected distance between v_i and v_n. MTR and Stable MTR, on the other hand, do adapt, though even with σ = 300 both are still slower than the Splay-based variants by at least 10%. Lowest common ancestors. In our final two experiments, we maintain a rooted forest on n vertices and execute 10n queries among u, v, v, and u,v, the latter of which returns the lowest common ancestor of two nodes in the same tree. The query distribution is as follows. A random non-root node iswith probability 1/2·m/n-1, where m is the current number of non-root nodes. Otherwise, a pair of nodes {u,v} is generated uniformly at random, and u,v or u,v is chosen depending on whether u and v are in the same tree. Overall, we have roughly 46% s, 38% s and 16% s.In the second experiment, we additionally allow v, i.e., changing the root of a tree. Eachis replaced withwith probability 1/2, resulting in roughly 30% s, 20% s, 30% s and 20% s.As expected, Link-cut outperforms our data structures considerably in the first experiment, where only the latter have to maintain extra data (to represent rooted trees). Whenis allowed, somewhat surprisingly, STT-based data structures are faster again. 
The Simple data structure performed much worse than all others and was excluded from experiments with large n. See <ref> for more details. §.§ Notes on the Rust implementation All implementations share a common interface (the Rust traits , resp. ) that is used by the experiments. Code is reused whenever possible through heavy use of generics.There are some differences between the pseudocode presented here and the actual Rust implementation. This is due to the fact that procedures like (v) and (v) contain multiple (·) checks, which can cause an unnecessarily large number of calls to the (·) function, even though the parent and possibly further ancestors of v may be already known (consider, e.g., <ref>). Hence, we eliminated some of the additional calls by, e.g., introducing a function 𝚒𝚜_𝚜𝚎𝚙𝚊𝚛𝚊𝚝𝚘𝚛_𝚑𝚒𝚗𝚝(v,p), which is more efficient, but requires p = (v) to be given.We applied this principle liberally in all STT-based variants and our Link-cut implementation. The performance gains were small, except for Greedy Splay, which was slightly slower than the other STT-based variants before, and now is slightly faster. We did not attempt any fine-tuning beyond this. § CONCLUSION We presented a new framework to implement dynamic forests based on STTs. Our data structures are as capable as link-cut trees, with a wide range of applications. For maintaining unrooted forests, our framework is arguably conceptually simpler than link-cut trees, since there is no need to explicitly maintain a (directed) path decomposition of the underlying forest. The main complexity lies in the implementation of the STT rotation primitive, which is easily separated and reused, simplifying the engineering of new variants. In contrast, variants of link-cut trees are somewhat restricted by the decomposition into BSTs; for example, no equivalent of our Greedy SplayTT algorithm for link-cut trees exists.In our experiments, the SplayTT-based data structures outperform link-cut trees by 15-20% if the dynamic forest is unrooted. Link-cut trees in turn are roughly 15-25% faster for rooted dynamic forests (without the root-changingoperation). A next step would be to attempt fine-tuning of our implementations and compare them with existing dynamic forest (in particular link-cut tree) implementations.Among the SplayTT-based variants we tested, Stable Greedy Splay generally performed best, and is also the simplest to implement and analyze. However, the even simpler MTR algorithm outperformed our more sophisticated algorithms by around 15%, except for specifically constructed inputs. It would be interesting to investigate whether there exist practical applications where the adaptivity of Splay-based data structures makes up for their increased complexity. § UNIQUE REPRESENTATION OF THE UNDERLYING TREE In this section, we show that the representation of STTs presented in <ref> is sufficient to uniquely represent the underlying tree. Recall that we store the pointers , , and . The parent pointers tell us the structure of the tree, and the child pointers tell us precisely which nodes are direct or indirect separators. We first show that this uniquely determines the boundaries of subtrees. Given an STT T and the pointers , ,for each node, we can determine (T_v) for each node v. For the root r, we always have (T_r) = ∅. If v is not a separator and not the root, then (T_v) contains only the parent of v. If v is a direct separator, then (T_v) consists of the parent and grandparent of v. 
Now consider an indirect separator node v with parent p and grandparent g. <Ref> implies that (T_v) = {p,x}, where x ∈(T_p). Since v is an indirect separator node, x ≠ g. But g ∈(T_p), so x must be the remaining node in (T_p) ∖ g. This observation allows us to determine all subtree boundaries in a top-down fashion. Once we have determined subtree boundaries, we can determine the edges of the underlying tree using the following lemma. Let T be a search tree on a tree G. Let u, v ∈ V(T) such that u is an ancestor of v. Then {u,v}∈ E(G) if and only if u ∈(T_v), but u ∉(T_c) for each child c of v. If u ∉(T_v), then there is no edge in G between u and V(T_v), so, in particular, {u,v}∉ E(G). Now suppose u ∈(T_v). Since G is a tree and G[V(T_v)] is connected, there must be exactly one edge {u,x} between u and V(T_v). If u ∈(T_c) for some child c of v, then x ∈ V(T_c), so {u,v}∉ E(G). Otherwise, we have x ∉ V(T-c) for each child c of v, and hence x = v. § IMPLEMENTING STT ROTATIONS Given a node v in an STT T, represented as described in <ref>, we can rotate v with its parent in (1) time. Let p = (v), let g = (p), and let c = (v) (g and/or c may be ). We denote by T' the tree after the rotation, and by '(·), '(·), '(·) the correct respective pointers in T'. In the following, we frequently make use of the fact that each node has at most one direct and at most one indirect separator child (<ref>). For the parent pointers, we have '(v) = g, '(p) = v, and, if c ≠, additionally '(c) = p. If g ≠, we may need to adjust its child pointers. Observe that V(T'_v) = V(T_p), so (T'_v) = (T_p). Thus, v is an (in)direct separator child in T' if and only if p was an (in)direct separator child in T'. One of '(g) and '(g) may accordingly change from p to v. We now consider the child pointers of p. Note that p gains a new parent (v) and keeps all other ancestors, loses a child (v) and possibly gains a child (c). First, we have '(p) = c, since c is the unique node with (T_c) = {v,p} if such a node exists; otherwise, '(p) == c. If p has a separator child x ≠ v in T, then x clearly still is a separator in T', and v ∉(T_x) = (T'_x), so '(p) = x. If x does not exist, then p has no separator child in T. Since p does not gain children in T' besides c, this means '(p) =. Now consider the child pointers of v. Note that v loses its parent and keeps all other ancestors, gains a child (p) and possibly loses a child (c). If g =, then v is the root of T', and thus '(v) = '(v) =. Suppose g ≠. Since T is 2-cut, v, p, g lie on a common path (using <ref>). The vertex g cannot lie between v and p on this path, otherwise v and p would be in different subtrees of g in T. * If v is a direct separator in T, then (T_v) = {p,g} and v lies on the path between p and g. We claim that '(v) = (v). Indeed, if x = '(v) ≠, then (T'_x) = {v,g}. Since v is on the path between p and g, we have x ≠ p (see <ref>), so x must already have been a child of v in T, and (T_x) = (T'_x) = {v,g}, so x = (v). Conversely, if y = (v) ≠, then y is still a child of v in T', so (T'_y) = (T_y) ⊆(T_v) ∪{v}∖{p} = {v,g}, implying that '(v) = y. To determine '(v), consider the following two cases. * p was a separator node in T. Then (T_p) = {g, a}, where a is some ancestor of g. <Ref> implies that p is on the path between g and a. Since v lies between g and p, the underlying tree has a path containing a, p, v, g, in that order (see <ref>). Hence, p lies on the path between a and v, so (T'_p) = {v,a} and thus '(v) = p. * p was a 1-cut node in T. We claim that then '(v) =. 
Suppose otherwise that x = '(v) ≠. Then (T'_x) = {v,a}, where a is some ancestor of g. This implies a ∈(T'_v) = (T_p). But then (T_p) = {g,a}, contradicting the assumption. * If v is not a direct separator in T, then p is on the path between v and g. Thus, '(v) = p. If x = (v) ≠, then x is still a separator child of v (by definition). Since v only gains p as a child, no other nodes can become the indirect separator child of v. Thus, '(v) = (v). Finally, consider a separator child x of c. Since c swapped parent (v) and grandparent (p), if x was a direct separator in T, it is an indirect separator in T', and vice versa. Hence, we have '(c) = (c) and '(c) = (c). All nodes other than v, p, g, c do not gain or lose children and do not change parent, hence their pointers are the same in T and T'. Implementing a rotation procedure based on the observations above is straight-forward. (See stt/src/twocut/basic.rs in the source code.)§ MONOID EDGE WEIGHTS In this section, we show how to maintain more general monoid weights. The difference between groups and monoids is that not every monoid element has an inverse. Thus, subtraction is not possible, so the approach from <ref> cannot be used.Let T be a 2-cut forest on a tree G with edge weights from a commutative monoid (W,+). Again, let d(u,v) denote the weight of the path between two nodes u and v (i.e., the distance between u and v), and let d(u,v) = ∞ if u = or v =. We now need two fields for every node. * (v) is the distance between v and its parent, or ∞ if v is the root. * (v) is ∞ if v is 1-cut. If v is a separator, then (v) is the distance between v and the node x ∈(T_v) that is not the parent of v. Together, (v) and (v) store the distance of v to each node x ∈(T_v). We now show how to maintain both fields under rotations.Consider a rotation of v with its parent p. Let T, T' be the search forest before and after the rotation. Let c be the direct separator child of v in T, orif v has no direct separator child. We denote by (·), (·) and '(·), '(·) the values before and after the rotation.Suppose c exists. Then the rotation exchanges parent and grandparent of c, both of which are in (T_c) = (T'_c). Hence, we can simply swap (c) and (c).As in the group setting, we always have '(p) = d(p,v) = (v). If p is the root of T, then v is the root of T'. We then have '(v) = '(v) = '(p) = ∞.We now turn to the more interesting case where p is not the root. Then, p has a parent g. Let a be the other node in (T_p) if p is a separator, orotherwise. We have '(v) = d(v,g) and '(v) = d(v,a).<Ref> implies that v,p,g lie on a common path. * If v is between p and g on the path, then v is a direct separator in T (by definition). We have '(v) = d(v,g) = (v). * If p is a separator in T, then p lies on a path between g and a in G, which means G has a path along a, p, v, g, in that order (see <ref>). Thus, '(p) = d(p,a) = (p) and '(v) = d(v,a) = d(v,p) + d(p,a) = (v) + (p). * If p is not a separator in T, then a =, so '(v) = d(v,a) = ∞. We claim that p is not a separator in T' either, so '(p) = ∞. Indeed, if p is a separator in T', then (T'_p) ⊆{v}∪(T'_v) = {v,g} (since (T'_v) = (T_p) = {g}). But this contradicts the assumption that v is on the path between p and g. * If p is between v and g on the path, then '(v) = d(v,g) = d(v,p) + d(p,g) = (v) + (p) as in the group case. Also, (T'_p) = {v,g}, so '(p) = d(p,g) = (p). It remains to compute '(v) = d(v,a). If p is not a separator in T, then a = and thus '(v) = ∞. 
If p is a separator in T, then v must also be a separator in T (otherwise, the rotation is not valid by <ref>). We have (T_v) ⊆{p}∪(T_p) = {p,g,a} by <ref>. Since p is between v and g by assumption, we have g ∉(T_v), and thus (T_v) = {p,a}, implying that '(v) = (v). * g cannot be between v and p, since then v and p needed to be in different subtrees of g. § STABILITY In this section, we argue thatand the three SplayTT variants are stable. Given an STT T and a node v ∈ V(T), a node x ∈ V(T), we define the property P(T,v,x) as satisfied if * all nodes on the root path of x are 1-cut; and * the root paths of v and x only intersect at the root of T. Let T be an STT, let v, x ∈ V(T), and let T' be the result of rotating at a node u such that * u is on the root path of v; and * if u is a child of the root and u ≠ v, then the child of u on the root path of v is 1-cut. Then, P(T,v,x) implies P(T',v,x). Let p be the parent of u in T. First, suppose that p is not the root. Then P(T,v,x) implies that p is not on the root path of x. Hence, rotating at u may remove p from the root path of v, but does not change the root path of x. Thus, P(T',v,x) holds. Now suppose that p is the root of T. Then rotating at u adds u to the root path of x. We have (T'_u) = 0, (T'_p) = 1, and (T'_y) = (T_y) = 1 for every other node y on the root path of x. This proves the first part of P(T',v,x). In order to prove the second part, let v' be the child of u on the root path of v. Assumption (ii) implies that v' is 1-cut, so rotating at u does not change the parent of v', and thus rotating at u removes p from the root path of v. Hence, the second part of P(T',v,x) holds. We now argue that all four algorithms only perform rotations satisfying <ref>. Clearly, all rotations are performed on the root path of v. To see that (ii) holds, we need to consider the algorithms in more detail. We are only interested in rotations at a node u ≠ v that is the child of the root. Such a rotation only happens in the following circumstances. * The second-to-last rotation in , when we rotate at the parent of v. This only happens if v is 1-cut, and thus satisfies (ii). * A ZIG-ZIG step at a grandchild u' of the root r, in any of the SplayTT variants. A ZIG-ZIG step only happens if u', u, r are on a path in G in that order, implying that u' is 1-cut before the rotation. * A final ZIG step in the first pass of (Local) Two-pass SplayTT, when bringing the final branching node to the root. In that case, u was a branching node before rotating it to the root, so by definition, the child of u on the root path of v is 1-cut. Let T be an STT with root r, and let T' be the result of calling v (using any of the four algorithms). Then P(T,v,r) trivially holds, and <ref> implies that P(T',v,r) also holds.To prove stability, it remains to show that the depth of r in T' is bounded. Since P(T”,v,r) holds for every intermediate tree, the depth of r can only increase when a rotation involving the (current) root is performed. It is easy to see that each variant performs at most three rotations orcalls that involve the root, so the final depth of r is at most six. We thus conclude thatand all three SplayTT variants are stable.§ MAINTAINING ROOTED TREES WITH STTS In contrast to link-cut trees, our implementation of STTs (as described in <ref>) cannot represent rooted trees without modification. Consider a call to v when v is the root of its STT. 
Since v has no separator children, both its child pointers are , thus we cannot navigate to the root of the underlying tree (or any node other than v, for that matter). In this section, we show how we can implement v and other rooted-tree operations using extra data. The relevant part of the source code is found in stt/src/twocut/rooted.rs.Let G be a tree with a designated root r. Let T be an STT on G. We store (a pointer to) the root r in each node on the root path of r in T. Formally, we maintain the following property for each node v ∈ V(T):(v) = r,ifr ∈ V(T_v) ,otherwise Implementing v is now trivial: Call v, and then return (v).We now describe how to update (v) under rotations. Consider a rotation of v with its parent p. Let T, T' denote the search tree before and after the rotation and let , ' denote the respective values before and after the rotation.Observe that only (v) and (p) may change. Since V(T'_v) = V(T_p), we have '(v) = (p). For '(p), consider the following cases. * If (p) =, then '(p) =, since p gains no new descendants with the rotation. * If (p) ≠ and (v) =, then r is in V(T_c) for some child c ≠ v of p. Observe that c is still a child of p in T', hence '(p) = (p). * Finally, suppose (p) ≠ and (v) ≠. Let c be the direct separator child of v. If c does exist and (c) ≠, then r is in V(T_c) = V(T'_c) and hence in V(T'_p), so '(p) = (p). Otherwise, r = v or r ∈ V(T_c') for some child c' ≠ c of v in T, hence '(p) =. We now sketch the implementations of the remaining operations. We refer to the code in stt/src/twocut/rooted.rs for more details.u,v works as described in <ref>, except that afterwards we set (u). Note that, by assumption, u was the root of its underlying tree before the operation. After bringing u to the root, each descendant x of u has (x) =. Then, u becomes the child of v, so it is no longer the underlying tree root.v now only takes one parameter. If the parent u of v (in the underlying tree) is known, we can simply call v and u, then detach u from v and set (v)v. However, we do not assume that u is known. To find u, after calling v, we first call r (note that r = (v)). Then, we can find u by first moving to the direct separator child of r, and then following indirect separator child pointers as long as possible. This moves along the underlying path from r to v (possibly skipping nodes), stopping at u. If r has no direct separator child, then u = r.For u,v, we first call v and u. Some simple checks determine whether u is an ancestor of v or vice versa. Otherwise, the LCA has to be in the direct separator child x of u. Now, we check if (d) ≠ or (i) ≠, where d and i are the direct and indirect separator children of x. If either is true, we repeat with xi, resp., xd. Otherwise, it can be seen that x must be the LCA of u and v. Calling x at the end pays for following the root path to x (via amortization).Finally, v is implemented as follows. First, call v. We then need to set (v)v and (x) for each node x ≠ v. Observe that every node x with (x) ≠ must be on the root path of r, hence we can make the necessary changes by following parent pointers from r. To pay for this, we call r afterwards.§ MORE DETAILED EXPERIMENTAL RESULTS See <ref> on tab:full_queriestab:full_lca_evert | http://arxiv.org/abs/2310.18036v2 | {
"authors": [
"Benjamin Aram Berendsohn"
],
"categories": [
"cs.DS"
],
"primary_category": "cs.DS",
"published": "20231027102824",
"title": "Fast and simple unrooted dynamic forests"
} |
Edge AI-Based Vein Detector for Efficient VenipunctureE. Salcedo and P. PeñalozaDepartment of Mechatronics EngineeringUniversidad Católica Boliviana “San Pablo”, La Paz, Bolivia{esalcedo,patricia.penaloza}@ucb.edu.bo Edge AI-Based Vein Detector for Efficient Venipuncture in the Antecubital FossaAccepted for publication in MICAI 2023, Part II, LNCS 14392Edwin Salcedo10000-0001-8970-8838 Patricia Peñaloza10009-0004-2151-485X January 14, 2024 =========================================================================================================================================== Assessing the condition and visibility of veins is a crucial step before obtaining intravenous access in the antecubital fossa, which is a common procedure to draw blood or administer intravenous therapies (IV therapies). Even though medical practitioners are highly skilled at intravenous cannulation, they usually struggle to perform the procedure in patients with low visible veins due to fluid retention, age, overweight, dark skin tone, or diabetes. Recently, several investigations proposed combining Near Infrared (NIR) imaging and deep learning (DL) techniques for forearm vein segmentation. Although they have demonstrated compelling results, their use has been rather limited owing to the portability and precision requirements to perform venipuncture. In this paper, we aim to contribute to bridging this gap using three strategies. First, we introduce a new NIR-based forearm vein segmentation dataset of 2,016 labelled images collected from 1,008 subjects with low visible veins. Second, we propose a modified U-Net architecture that locates veins specifically in the antecubital fossa region of the examined patient. Finally, a compressed version of the proposed architecture was deployed inside a bespoke, portable vein finder device after testing four common embedded microcomputers and four common quantization modalities. Experimental results showed that the model compressed with Dynamic Range Quantization and deployed on a Raspberry Pi 4B card produced the best execution time and precision balance, with 5.14 FPS and 0.957 of latency and Intersection over Union (IoU), respectively. These results show promising performance inside a resource-restricted low-cost device. The full implementation and data are available at: <https://github.com/EdwinTSalcedo/CUBITAL> § INTRODUCTION Venipuncture is a necessary procedure applied by medical staff, either to draw a blood sample, start an intravenous infusion, or instil a medication. While this procedure can be applied to several regions of the anatomy, doctors prefer the antecubital fossa due to the higher visibility and stability of veins there. Initially, physicians identify and ascertain suitability of the median cubital (MC), cephalic (C) and basilic (B) veins in the antecubital fossa, as depicted in Figure <ref>. It is worth mentioning that the median cubital vein is usually referred as the best site to perform catheterization <cit.><cit.>. However, people who do not have good vein visibility might require longer pre-inspection times, which can cause an early start of a trial-and-error venipuncture process to localize a suitable vein. This is the case for children, elderly people, dark-skinned people, and people with overweight or diabetes. Palpation, warm water, tourniquets, NIR vein finders are among some well-known good practices to improve vein visibility. 
Yet, if veins are still not noticeable, the need for health professionals to assist the next patients might cause bruises, pain, and bleeding to the current one. Since the beginning of the 2010s, several companies started commercializing hand-held vein finders based on ultrasound, transillumination, or infrared light to facilitate venipuncture. Nowadays, these devices' features range from basic vein visualization enhancement to simultaneous detection and mapping of veins in any part of the body (e.g. AccuVein AV400 and AV500). However, the widespread adoption of these devices has been rather limited owing to their high cost and closed software. Recently, in response to these limitations, several proposal systems based on Computer Vision, Deep Learning, and Near Infrared imaging (NIR) have emerged as promising approaches for vein visualization enhancement <cit.><cit.><cit.><cit.>. Nevertheless, they are usually designed to improve vein visualization in the entire forearm region, so healthcare professionals must still choose the most suitable region or vein with which they should work. Also, most recent algorithms are oriented to run in a central server, instead of being deployed to portable devices. So, there is still room for research to develop better AI-based devices that recommend which vein or region to select for venipuncture in real-time and on-site.Deep learning at the edge can be applied not only for more precise NIR imaging-based vein segmentation, but also to identify which region to choose for venipuncture. Therefore, our proposal aims to extend this body of work with the following contributions:* A new dataset containing 2,016 NIR images with low visible veins in arms is introduced, in tandem with their respective ground truth vein segmentation masks. The dataset also comprises bounding box, centroid and angle annotations for antecubital fossa localization inside the images.* We test five DL-based semantic segmentation models and perform a thorough comparison, from which we select and modify the best one to also act as a regression model for antecubital fossa localization and arm direction prediction. * We test the resulting model on four common microcomputers (Raspberry Pi 4B, Raspberry Pi 3B+, Khadas VIM3, and NVIDIA Jetson Nano) and using four common quantization modalities (dynamic range quantization, full-integer quantization, integer quantization with float fallback, and float16 quantization). The best combination is finally implemented in a bespoke, portable device that shows suitable veins in the antecubital fossa. The remainder of the paper is structured as follows. Section <ref> presents the state of the art on vein image acquisition approaches, as well as new DL and Edge AI-related tendencies for vein localization. Section <ref> describes the prototyping process of the end device as well as the implemented DL models and metrics. Then, in Section <ref>, we present the experimental results in terms of prediction accuracy and inference time. Finally, Section <ref> offers conclusions and discusses potential future research threads.§ LITERATURE REVIEWMany image acquisition, processing, and visualization techniques have been proposed and released to the market to enhance subcutaneous vein localization. By way of illustration, AccuVein vein finders feature simultaneous localization and mapping using light projections towards the skin. 
Nevertheless, their prices range from 1,800 USD to 7,000 USD per unit <cit.>, which keeps them inaccessible to many medical centers in developing countries. In the current section, we present a review on the main technologies and research trends on open-source vein detectors development.§.§ Image acquisition approaches Two image acquisition approaches can be clearly distinguished for forearm vein localization: transillumination-based and reflectance-based methods. The first ones are more extended in the literature because of their portability and low-cost. They mainly transmit light through the skin and tissue of a body sector, which is then followed and captured by a light sensitive camera at a given wavelength. While regular RGB cameras capture light in the human visible spectrum (400-700 nm), transillumination-based techniques such as multi-spectral imaging or hyper-spectral imaging aim to capture illumination in different ranges of the electromagnetic spectrum, e.g. the ultraviolet range or the infrared range. This approach was widely explored by investigators. For instance, Shahzad et al. <cit.> propose an illumination wavelength selection algorithm for vein detection using a multi-spectral camera, such that the system can recommend what wavelength to use for a patient based on his skin-tone.Y>XParticularly, Near-Infrared light (NIR) has been broadly explored over the past years as a vein visualization enhancing technique. As shown in Figure <ref>, this requires NIR illumination and a special camera able to capture NIR transillumination, which in turn generates digital images. NIR light can go through human skin reaching between 700 nm and 1,200 nm depth depending on the person's complexion. Since this range can provide information on a body's temperature and structure, it makes it suitable to capture vein presence in the subcutaneous tissue. Furthermore, oxygenated and deoxygenated hemoglobin, two components of blood, absorb and transmit NIR light better through them. About NIR capture devices, some common cameras available in the market are described in Table <ref> (under the “Imaging method & Camera” column) from where we can conclude Raspberry Pi NoIR 1 and 2 are the most frequented NIR cameras for research. For instance, academics in <cit.> proposed a detection device that combines two NIR cameras to obtain depth information about the subcutaneous layer of an arm and overlapped 3D visualizations of veins to enhance their illustration.Ultrasound imaging (US) and photoacoustic imaging are amongst the most-used methods in reflectance-based commercial devices for forearm vascular localization. While US provides a high-resolution frame-of-reference for identifying density, flow and perfusion of veins, Photoacoustic Imaging (PI) permits registering important factors such as oxygen saturation, total hemoglobin and the microdistribution of biomarkers. Both solve the problem of finding vessels by reflecting a high frequency sound (US) or non-ionizing light (PI) over a focused part of a body. Then, the return time travel of the reflected waves is registered with an imaging probe as electrical signals <cit.>. These waves, also known as ultrasonic waves, are detected by ultrasonic transducers to reconstruct physiological organs in living beings. In the case of human vessels, hemoglobin concentration and oxygen saturation are physiological properties that form 2D or 3D images with distinguishable contrast between skin tissue and vessels due to their distance concerning the light source. 
Combining both US and NIR modalities has recently brought new opportunities for robotic catheterization. For instance, researchers in <cit.> employed both NIR and US imaging inside a robot to perform venipuncture autonomously. A similar combination of US and NIR imaging was brought to a handheld robotic device by Leipheimer et al. in <cit.>, where the authors propose the use of machine learning models to safely and efficiently introduce a catheter sheath into a peripheral blood vessel. §.§ Computer-based vein distribution localization Venipuncture success of intravenous procedures depends on the timely localization of veins. Although a great majority are applied in the antecubital fossa, some procedures require finding veins in lower arm sections. Thus, semantic segmentation of veins over the forearm region is a crucial task that should be performed as precisely and timely as possible. Specifically, semantic segmentation aims to classify each pixel inside a collected image with a label. Most investigations interpret veins anatomically as hollow tube structures that join each other along an image, and they assume two categories for each pixel: vein pixel and background pixel. There are two notorious computer vision-based approaches that are regularly applied for forearm vein segmentation: traditional image processing methods and deep learning architectures. §.§.§ Image processing-based methods Segmentation approaches based on traditional image processing methods for NIR, US, multi-spectral, and hyper-spectral images usually comprehend steps for contrast and illumination enhancement, morphological operations, vein structure discovery, and edge detection. For instance, several investigations apply Histogram Equalisation or Contrast-Limit Adaptive Histogram Equalisation to enhance the contrast of the input images <cit.> <cit.>. Then, vessel segmentation approaches aim to discriminate regions with veins from the background. Here, vein segmentation techniques can be also classified as vein structure-based, region-based, gradient-based, and pixel-based. For example, Li et al. <cit.> proposed a convex-regional-based gradient preserving method that use edge information to enhance the low contrast and reduce the noise in NIR images for better vein segmentation. By applying a convex function, they find global minimums as optimal locations to detect veins. Recently, researchers in <cit.> proposed an image preprocessing system for existing vein detection devices to remove hair digitally from NIR images. They achieved an improvement of 5.04% of Structural Similarity Index (SSIM) with respect to their original vein segmentation algorithm, which shows the relevance of image processing methods for newer approaches.§.§.§ Deep learning-based methodsRecently, deep learning has demonstrated huge success in detection tasks from visual information due to its generalization power. Recent investigations leverage deep learning-based algorithms to classify pixels as vein or background inside the collected images. In contrast to image processing, deep learning models do not require strict controlled environments, which makes them more suitable to perspective, distance or illumination variations. U-Net based architectures are amongst the most used approaches for vein semantic segmentation <cit.> <cit.>. Moreover, Shah et al. <cit.> proposed a forearm vein segmentation model based on the Pix2Pix architecture to translate NIR images of arms into their segmented vascular versions. 
Their architecture consists of a student, which is a U-Net model that learns to generate new vascular masks from NIR images, and a teacher, which is a PatchGAN-based model that discriminates each generated image into fake or original images. Combined into a common architecture, the approach obtained 0.97 of accuracy. This model outperformed previous methods for forearm vein segmentation. §.§ Edge AI methodsDuring the last years, the advent of better microprocessors has increased the opportunities to bring deep learning models into standalone end devices. Edge AI and Edge Computing are two paradigms that have attracted much of the attention recently. While Edge Computing aims to bring information processing closer to the users, Edge AI is the implementation of artificial intelligence in an edge computing environment. Edge AI-based environments are usually implemented in embedded systems beside Computer Processing Units, Graphics Processing Units (GPUs), Tensor Processing Units (TPUs), Vision Processing Units (VPUs), Field Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), or Systems-on-Chip (SoC) <cit.>. Specifically, development cards such as Nvidia Jetson, Khadas, Neural Computer Stick, or Google Coral can be of huge help to speed up new Edge-AI based applications. Hardware implementation and Edge AI are important for the present project since venipuncture is applied to patients in situ. So far, several investigations have proposed forearm segmentation algorithms deployed in portable scanners or fixed stations, and the host devices range from Raspberry Pi cards <cit.> <cit.> <cit.> <cit.> <cit.> to NVIDIA Jetson cards <cit.>. An updated publication list on vein distribution finders is described in Table <ref>. To the best knowledge of the authors, the great majority of investigations focus on general forearm vein segmentation and do not detect specific sites for optimal venipuncture. For instance, Chaoying et al. <cit.> proposed one of the first investigations to deploy deep learning-based forearm vein detectors on an embedded device with meaningful results: 0.78 of accuracy and 0.31 seconds per processed frame. The availability increase of low-cost NIR cameras, such as Pi NoIR, Jai, OV5647, Omnivision, among others, make them suitable for new on-device forearm vein finding applications. For example, Ng. et al <cit.> proposed a vein detection and visual guidance system to show the location of veins through a mixed-reality-based interface. They used a HoloLens 2 device and its infrared emitter to obtain new images, which in turn let them segment and visualize veins in real time. Their vein segmentation approach was based on the U-Net architecture with a RegNet-based encoder and achieved 0.89 precision. In the case of robotized venipuncture, Chen et al. <cit.> proposed a robotic system solution named Venibot to determine an optimal area on a forearm and perform puncturing autonomously. Their proposal combines US and NIR imaging to control the movements of the venipuncture robot. § MATERIAL AND METHODSTo localize the hidden veins of a patient, we developed a Deep Learning-based model that processes NIR images of their forearm and segments the present veins. This model also localizes the antecubital fossa to hide all veins except the ones located in that zone, such that a healthcare practitioner can only see the suitable veins for venipuncture. Later, the algorithm was implemented on an embedded system by applying compression techniques. 
In the present section, we describe the complete software and hardware implementation process in detail. §.§ Forearm Vein Segmentation §.§.§ Dataset Collection and Preprocessing As stated before, dehydration, young and old age, overweight, dark skin tone, and diabetes are among some factors that can affect patients' veins visibility. Specifically, in the case of young subjects, this series of injections can cause medical trauma, which in turn might cause future self-medication and conflicting feelings when requiring healthcare assistance <cit.>. Therefore, the present research focused on enhancing the visibility of veins in young patients. 2,016 NIR images were collected from both arms of 1,008 young subjects during the year 2022. The volunteers, whose age frequency is shown in Figure <ref>, were students in elementary and secondary schools in the cities of Sacaba and Santa Cruz, Bolivia. About the setup, each patient located an arm at a time on a flat surface covered with a white fabric for the sake of better contrast. Meanwhile, the initial version of the vein finder depicted in Figure <ref> was located 30 centimeters above using a lamp arm printed in 3D.Given that collecting information from children also requires parental consent, volunteers' parents were asked to sign a consent agreement to use the captured images for research purposes. This resulted in an approximate time of 5 minutes per subject, making a total of 83.8 hours. Data was saved and administered in a laptop's internal memory as CSV files and PNG images using a bespoke Tkinter application. This had the purpose of registering and managing NIR images along with the full name, complexion, age, medical condition, gender, and signed consent agreement of volunteers. To form the final version of the base dataset, NIR images were converted to grayscale and enhanced using Contrast-Limit Adaptive Histogram Equalisation (CLAHE). Then, ground-truth was manually annotated with background, arm, and vein segments using Roboflow. Finally, the images were normalized to 512 x 512 pixels to obtain pairs of images and masks suitable for semantic segmentation DL architectures.To avoid the risk of overfitting, we generated an augmented version of the base dataset applying sequential randomly-selected augmentation techniques. We implemented the following techniques from the ImgAug library: flipping images horizontally, perspective, rotating images in the range of 180^∘ and -180^∘, blurring images with Gaussian and average filters, contrasting with gamma functions, among others. In the end, the augmented version of the dataset contained 8,000 images with their corresponding segmentation masks. §.§.§ Model Selection and Training The recent progress made on vein subcutaneous segmentation based on NIR imaging in <cit.><cit.><cit.> let us understand the great generalization capabilities of Deep Learning-based (DL) methods with respect to previous approaches. Thus, we focused on implementing various recently proposed generic architectures for semantic segmentation:U-Net, Segnet, PSPNet, DeepLabV3+, and Pix2Pix. The models were implemented using TensorFlow 2.12.0 and Colab Pro+ with NVIDIA A100 GPUs. Besides modelling with both tools, they let us code a unified data loading and munging pipeline for the dataset and experiment parallelly with multiple instances per model, so that optimizing the base code and hyperparameters was completed efficiently. 
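As a reference for the preprocessing and augmentation steps described above, the following is a minimal Python sketch using OpenCV and ImgAug. The specific parameter values (CLAHE clip limit, blur, perspective and rotation ranges, number of augmenters drawn per sample) are illustrative assumptions, not the exact settings used to build the dataset.

```python
import cv2
import imgaug.augmenters as iaa
from imgaug.augmentables.segmaps import SegmentationMapsOnImage

def preprocess(nir_bgr):
    """Grayscale conversion, CLAHE contrast enhancement, and 512x512 resizing."""
    gray = cv2.cvtColor(nir_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # clip limit assumed
    return cv2.resize(clahe.apply(gray), (512, 512))

# Sequential, randomly selected augmentations applied jointly to image and mask.
augmenter = iaa.Sequential([
    iaa.Fliplr(0.5),                         # horizontal flip
    iaa.SomeOf((1, 3), [                     # random subset of augmenters per sample
        iaa.PerspectiveTransform(scale=(0.01, 0.10)),
        iaa.Affine(rotate=(-180, 180)),
        iaa.GaussianBlur(sigma=(0.0, 1.5)),
        iaa.AverageBlur(k=(2, 5)),
        iaa.GammaContrast((0.7, 1.4)),
    ]),
], random_order=True)

def augment(image, mask):
    """image: HxW uint8 NIR frame; mask: HxW integer labels (background/arm/vein)."""
    segmap = SegmentationMapsOnImage(mask, shape=image.shape)
    image_aug, segmap_aug = augmenter(image=image, segmentation_maps=segmap)
    return image_aug, segmap_aug.get_arr()
```

Applying the same geometric transformation to the image and its mask keeps the pixel-level ground truth aligned with the augmented NIR frame.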
Both versions of the dataset, the base one and augmented one, were split into three subsets: 70% for training, 20% for validation, and 10% for testing. The available resources provided by Colab limited us to use a batch size of 8 instances per step when training each model. Although all models might have trained longer or shorter times, we made sure to use 10 epochs for a fair comparison. This was also supported by the fact that some models (DeepLabV3+ and Pix2Pix) started overfitting when training longer. We used Binary Cross Entropy (BCE) as the unique loss function for all models to measure the dissimilarity between the ground truth and predicted masks. A mathematical representation of BCE is shown in Equation <ref>, where y_i and ŷ_̂î represents a ground truth binary classification vector and a predicted binary classification vector, respectively. Also, in the same equation, Tstands for the number of pixels per instance, and f for the sigmoid activation function, as defined in Equation <ref>. BCE = -1/T∑_i=0^T y_i· log(f(ŷ_̂î)) + (1 - y_i)· log(1 - f(ŷ_̂î))f(s_i) = 1/1+e^-s_i Our Pix2Pix implementation was inspired on the work proposed by Zaineb et al. <cit.>, however, we followed the original Pix2Pix architecture proposed by Isola et al. <cit.>. This base version contained a generator based on the PatchGAN model and discriminator module based on the U-Net architecture. Therefore, we reused our base U-Net architecture and implemented PatchGAN. About the loss functions, we also applied BCE (as defined in Equation <ref>) to differentiate the ground truth and generated masks. Yet the generator required to use Mean Squared Error (MSE), commonly defined as in Equation <ref>, where M is the number of image pairs (ground truth and predicted masks), and N is the number of pixels per image pair.MSE = ∑_i=1^N(x_i-y_i)^2/M*N To measure the models' precision, metrics such as Pixel Accuracy, Intersection over Union (IoU), Dice Score, Pixel F1Score, and Peak signal-to-noise ratio (PSNR), were calculated according to Equations <ref>, <ref>, <ref>, <ref>, and <ref>, respectively. These metrics required to process ground truth G and predicted P masks first, with which we quantified the well and wrong classified pixels as True Positive (TP), True Negative (TN), False Positive (FP), or False Negative (FN), as defined in <cit.>.Accuracy = TP + TN/TP+FP+TN+FNIoU = |G ∩ P|/|G ∪ P| Dice Score = 2*|P ∩ G|/|P|+|G|F1Score = 2*TP/2*TP+FP+FN PSNR = 10*log_10( 255^2/MSE) §.§ Cubital Fossa Localization Once we found U-Net was the best semantic segmentation architecture for the task, we continued the investigation by experimenting with methods to localize the Antecubital Fossa. This required another labelling iteration to enclose the cubital fossa region with a bounding box in all 2,016 NIR images on Roboflow. Moreover, we made sure the bounding boxes' centroids were exactly located in the fossa, which means the center of the bounding boxes' coordinates were located in the median cubital (MC) areas in Figure <ref>. It is worth noting that the fossa location prediction also required the angle of the examined arm to hide veins out of the antecubital fossa region. So, we labelled the orientation of each arm synthetically by following the process shown in Figure <ref>. As depicted, we worked with the ground truth mask, removed veins, and converted the arm segment into a shape similar to a line by applying a series of morphological erosion operations. 
Then, we used OpenCV's function Hough Transform for Lines (HTL) to obtain the polar coordinates of lines from an accumulator matrix. According to this matrix, the more concur points in an image, the more probable they depict a line, therefore, HTL obtains a set of θ and ρ where points frequently concur. Given that we started with a single line representing the entire arm, we averaged the θ and ρ values and converted them to a degree value between 0 degrees and 180 degrees, starting from the very right in a counterclockwise direction. Obtaining the final version of the dataset let us model the problem as a combination of semantic segmentation and regression tasks. Thus, we integrated a neuronal network into the U-Net architecture. The layers, resolution, and channels of the final architecture are illustrated in Figure <ref>. Consequently, we created a multi-task loss function to combine BCE and MSE as defined in Equation <ref>. We also included the metric Mean Absolute Error (MAE) in the performance analysis stage when training and validating the architecture, as defined in Equation <ref>. MutiTaskLoss = BCE + MSEMAE = ∑_i=1^N|x_i-y_i|/M*N Finally, once the model was implemented and tested, we used the compression methods available in TensorFlow Lite to reduce the size of the model and embed it inside the final end device. The implemented approaches were Dynamic Range Quantization, Integer Quantization with Float Fallback, Full Integer Quantization and Float 16 Quantization. §.§ Hardware Development §.§.§ Device design with its components The availability of 3D printing technology and standalone microcomputers has opened new possibilities for innovative product design and manufacturing. To prototype the vein finder, we integrated electronic circuitry design, components assembly and 3D printing techniques. Most importantly, the device required initially the implementation and parameter optimization of the NIR imaging system in order to improve the quality of acquired NIR images. The initial version of this system is shown in Figure <ref>. Moving on to the development of the final vein finder device, we aimed to develop an embedded system to contain a DL architecture for simultaneous vein segmentation and antecubital fossa localization. Then, the testing and compression stages of the final architecture carried out on different cards let us define that the Raspberry Pi 4B card was the best choice for the prototype due to its good balance with respect to cost, precision, and inference time. Consequently, it was chosen for on-device image processing and DL model deployment. To enhance portability and autonomy, a Xiaomi portable battery of 10000 mah was connected to the Raspberry Pi 4B card through a micro-USB cable. For image capture, we included a Raspberry NoIR V2 camera to the Raspberry Pi 4B through a 2-lane MIPI CSI camera port. A touch screen was also installed to provide a Graphic User Interface (GUI) to the end user. The electronics schematics are presented in Figure <ref>.The illumination matrix of 12 infrared LEDs developed in the initial prototype, as shown in Figure <ref>, was included on a perforated breadboard, which was used to assemble the necessary circuits, mainly 100 Ω 1/2 W resistors. The LED matrix was powered by a 9 V battery, considering an appropriate resistor for each group of 3 LEDs. In addition, a 5 V relay module was implemented to control the energization of the LED matrix, and an On/Off switch was designed for the device activation. 
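Returning briefly to the software side, the snippet below is a minimal sketch of the four TensorFlow Lite quantization modalities mentioned above, applied to a trained Keras model. The calibration generator, the number of calibration samples, and the `unet_fossa_model` handle are illustrative assumptions rather than the exact configuration used in this work.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Assumed: `calib_images` already matches the model input shape and scale.
    for image in calib_images[:100]:
        yield [np.expand_dims(image.astype(np.float32), axis=0)]

def convert(model, mode):
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    if mode == "dynamic_range":
        pass                                  # int8 weights, float activations at runtime
    elif mode == "float16":
        converter.target_spec.supported_types = [tf.float16]
    elif mode == "int_with_float_fallback":
        converter.representative_dataset = representative_dataset
    elif mode == "full_integer":
        converter.representative_dataset = representative_dataset
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.inference_input_type = tf.uint8   # or tf.int8
        converter.inference_output_type = tf.uint8
    return converter.convert()

# Example: produce the Dynamic Range Quantization variant deployed on the Raspberry Pi 4B.
# tflite_bytes = convert(unet_fossa_model, "dynamic_range")
```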
Moreover, a mechanism was implemented to synchronize the Raspberry Pi 4B and the relay module's power, ensuring simultaneous activation and deactivation. §.§.§ Manufacturing and Assembly As presented in Figure <ref>, the design and implementation of the external case aimed to embed the LED matrix, microcomputer card, battery, powerbank, and camera using 3D printing technology and Polylactic Acid filament (PLA). The top and bottom parts of the casing were printed separately, with careful control of printing parameters to achieve optimal layer adhesion and surface finish. In addition, the battery slot and camera cover were also printed as separate components to facilitate easy assembly and maintenance. The 3D-printed ergonomic case was designed with the software SolidWorks, which let us achieve a lightweight structure and sufficient rigidity, as well as durability and protection for the electronic components. Most importantly, the camera was mounted at the center of the illumination matrix, allowing for higher accuracy in vein detection under a frontal annular lighting setup. The final dimensions of the device are23 cm x 9.5 cm x 3.5 cm. § EXPERIMENTAL RESULTSThe present section reports the results of the two main detection tasks in the proposal: forearm vein segmentation, and antecubital fossa detection including forearm vein segmentation. §.§ Forearm Vein Segmentation A summary of the quantitative results of the models (Pix2Pix, U-Net, Segnet, PSPNEt, DeepLabV3+), calculating the metrics defined in Equations <ref>, <ref>, <ref>, <ref>, and <ref>, is shown in Table <ref>. The numbers in bold define the lowest or highest performance for each metric. While we aimed to obtain high values for almost all columns, we identified some models' weights (shown in the last column) were higher than others. This was an important factor in choosing one model over the others. For instance, we identified U-Net as one of the most precise models requiring fewer kilobytes than Segnet or Pix2Pix. For the augmented dataset, several metrics were affected heavily due to the modifications applied to augmented instances. This was an important aspect in the present research since any portable vein finder should also work in more challenging environments than the one where the base dataset was collected. Therefore, we decided to continue the work with the U-Net architecture.§.§ Antecubital Fossa Localization The final model performance was measured considering the metrics MSE and MAE, as defined in Equations <ref> and <ref>, respectively. These results are shown in Table <ref>, from where we could identify that the compression method Dynamic Range Quantization was the best for the model in terms of precision. Consequently, this was the selected model to be deployed in the end device. The final version of the Graphical User Interface developed with PyQT with the final compressed model inside is shown in Figure <ref>. Finally, the final printed device is shown in Figure <ref>. § CONCLUSIONS AND FUTURE WORK In this study, we addressed the challenges associated with venipuncture by proposing a comprehensive solution that combines Near Infrared (NIR) imaging and deep learning (DL) techniques for precise vein localization in the antecubital fossa. The significance of accurate vein assessment before intravenous catheterization cannot be understated, especially for patients with low visible veins due to various factors such as fluid retention, age, obesity, dark skin tone, or diabetes. 
Our proposal comprises three principal contributions. First, we introduced a novel dataset comprising 2,016 NIR images of arm veins with limited visibility, accompanied by meticulous annotations that include ground truth segmentation masks, bounding boxes, centroids, and angle information for precise antecubital fossa identification. Second, we devised and compared five deep learning-based semantic segmentation models, ultimately selecting and extending the most suitable one for antecubital fossa localization and arm direction prediction. Third, the integration of this model into a compact vein finder device, through rigorous testing of four microcomputers and four quantization methods, underlined its feasibility and efficiency in real-world applications. The experimental results demonstrated that the compressed model utilizing Dynamic Range Quantization, deployed on a Raspberry Pi 4B, achieved the best balance between execution time and precision. This achievement, with an execution time of 5.14 frames per second and an Intersection over Union (IoU) of 0.957, showcased the potential of our approach in a resource-constrained and cost-effective portable device. For future work, other imaging modalities should be explored and combined. Moreover, we highlight the importance of recognizing the median cubital vein, as well as the other vascular structures shown in Figure <ref>, in future computer vision-based vein detectors. In addition, suitable vein recommendations according to a given intravenous procedure should also be considered in future research to enhance venipuncture procedures and patient care. | http://arxiv.org/abs/2310.18234v1 | {
"authors": [
"Edwin Salcedo",
"Patricia Peñaloza"
],
"categories": [
"eess.IV",
"cs.CV"
],
"primary_category": "eess.IV",
"published": "20231027161926",
"title": "Edge AI-Based Vein Detector for Efficient Venipuncture in the Antecubital Fossa"
} |
Note on power hypergraphs with equal domination and matching numbers
===============================================================================================================================================
María José Chávez de Diego Dpto. de Matemática Aplicada I, Universidad de Sevilla, Avda. Reina Mercedes s/n, Sevilla, Spain, [email protected] Pablo Montero Moreno, María Trinidad Villar-Liñán Dpto. de Geometría y Topología, Universidad de Sevilla, C/ Tarfia s/n, 41012 Sevilla, Spain, [email protected], [email protected] We present some examples that refute two recent results in the literature concerning the equality of the domination and matching numbers for power and generalized power hypergraphs. In this note we pinpoint the flaws in the proofs and suggest how they may be mended. § PRELIMINARIES A (finite) hypergraph H=(V, E) consists of a (finite) set V and a collection E of non-empty subsets of V. The elements of V are called vertices and the elements of E are called hyperedges, or simply edges of the hypergraph. A k-uniform hypergraph is a hypergraph such that each edge consists of k vertices. A simple graph with no isolated vertices is a 2-uniform hypergraph. Two vertices of H, u and v, are adjacent if there is an edge e such that u, v ∈ e. The number of edges containing a vertex v is called the degree of v. Given a hypergraph H=(V, E), D ⊂ V is a dominating set of H if for every v∈ V - D there exists u∈ D such that u and v are adjacent. The minimum cardinality of a dominating set of H, γ(H), is its domination number. A matching in H is a set of disjoint hyperedges. The matching number of H, ν(H), is the maximum size of a matching in H. A subset T⊂ V is a transversal (or a vertex cover) of H if T has nonempty intersection with every hyperedge of H. The transversal number of H, τ(H), is the minimum size of a transversal of H. According to <cit.>, a power hypergraph H is obtained from a graph G by adding at least one vertex to each edge of G; thus, every hyperedge of a power hypergraph contains at least one vertex whose degree is one. The generalized power hypergraph of a simple graph G, denoted H^k,s, is obtained by blowing up each vertex into an s-set and each edge into a k-set, where 1≤ s≤ k/2. Clearly, H^k,1 is the k-uniform power hypergraph. Next we recall a useful notation. (<cit.>) Let G=(V,E) be a simple graph. For any k≥ 3 and 1≤ s≤ k/2, the generalized power hypergraph of G, denoted by G^k,s, is defined as the k-uniform hypergraph with the vertex set { v : v∈ V}∪{ e : e∈ E}, and the edge set { u∪ v∪ e : e={u,v}∈ E }, where v is an s-set containing v and e is a (k-2s)-set corresponding to e. G^k,s is called the generalized power hypergraph obtained from G. Particularly, for s=1, G^k,1 is the kth-power hypergraph of G. It is not difficult to check that the domination number is not hereditary for power hypergraphs in general. However, the next result holds. (<cit.>) Let H=(V,E) be a simple graph and, for k≥ 3 and 1≤ s≤ k/2 natural numbers, let H^k,s denote the generalized power hypergraph obtained from H. We get: * ν(H^k,s)=ν(H) and τ(H^k,s)=τ(H). * γ(H^k, k/2)=γ(H). * γ(H^k, s)=τ(H^k, s), if 1≤ s < k/2. § COUNTEREXAMPLES TO TWO THEOREMS IN DONG ET AL., 2020. A main result from <cit.> establishes lower and upper bounds for the domination number of power and generalized power hypergraphs.
§ COUNTEREXAMPLES TO TWO THEOREMS IN DONG ET AL., 2020

A main result from <cit.> establishes lower and upper bounds for the domination number of power and generalized power hypergraphs. In particular, for any connected generalized power hypergraph H^k,s and k≥ 3, if 1 ≤ s < k/2, we have ν(H^k,s)≤γ(H^k,s) ≤ 2 ν(H^k,s), and if s = k/2, we get γ(H^k,k/2) ≤ν(H^k,k/2). (See Theorems 2.1 and 3.1 in <cit.>.) Dong et al. <cit.> also demonstrate that these bounds are sharp, and they give characterizations of the extremal hypergraphs for them. Namely, the following two theorems are stated and proved.

(Theorem 2.2, <cit.>) For any connected power hypergraph H of rank r≥ 3, γ(H)=ν(H) if and only if H ∈ℋ_1.

(Theorem 3.4, <cit.>) For any connected generalized power hypergraph H^k,s, γ(H^k,s)= ν(H^k,s) if and only if H^k,s∈ℋ^k,s_1 for 1 ≤ s < k/2 or H^k,s∈ℋ^k,k/2_1 for s= k/2.

However, we present some examples that refute these results concerning the equality of the domination and matching numbers for power and generalized power hypergraphs. Firstly, recall that ℋ^k,s_1, for k≥ 3 and 1 ≤ s < k/2, denotes the family of generalized power hypergraphs obtained from bipartite connected graphs, and that ℋ_1 (=ℋ_1^k,1) denotes the set of power hypergraphs obtained from bipartite connected graphs. We next study separately the cases 1 ≤ s < k/2 and s = k/2.

§.§ The case 1 ≤ s < k/2

It is not difficult to refute Theorems <ref> and <ref> by considering the following counterexamples. Let H = C_p ∨ C_q denote the wedge of two cycles joined by a common vertex, with p, q ≥ 3 and p or q odd. The following identities hold.
* ν(H) = ⌊p/2⌋ + ⌊q/2⌋;
* τ(H) = ⌈p/2⌉ + ⌈q/2⌉ - 1.
From Proposition <ref>, we get γ(H^k,s)=τ(H^k,s)=τ(H) for 1≤ s< k/2. Set n=ν(H)= ⌊p/2⌋ + ⌊q/2⌋, and let us distinguish two cases.
* If p and q are both odd, then ⌈p/2⌉ + ⌈q/2⌉ - 1=n+1 and, therefore, γ(H^k,s)=n+1=ν(H)+1=ν(H^k,s)+1.
* If p is even and q is odd, then ⌈p/2⌉ + ⌈q/2⌉ - 1= n, and hence γ(H^k,s)=n= ν(H^k,s).
Observe that every graph of the form C_p ∨ C_q, with p even and q odd, is not bipartite, while the corresponding generalized power hypergraph is extremal for the equality of domination and matching numbers. Therefore, if s=1 we have found counterexamples to Theorem <ref> and, if 1<s<k/2, the counterexamples refute Theorem <ref>.

Next, we point out what seems to be the main confusion in the characterization given by Dong et al. in <cit.>. The proofs of both Theorems 2.2 and 3.4 in <cit.> are quite similar for the case 1≤ s< k/2, hence we focus our reasoning only on Theorem 2.2. For the necessary condition it is affirmed that the equality of domination and matching numbers holds only for bipartite graphs; however, it also holds for other graphs, as we have seen in Example <ref>. The misunderstanding arises because they use König's Theorem as a characterization of graphs with equal transversal and matching numbers. But the fact is that bipartiteness is only a sufficient condition, not a necessary one, as we can check from <cit.>.

(König's Theorem, <cit.>) If G is a bipartite graph, then τ(G)=ν(G).

Let us notice that the graphs G satisfying τ(G)=ν(G) are called König-Egerváry graphs, or are said to have the König-Egerváry property. Trivially, bipartite graphs are examples of such graphs. König-Egerváry graphs have been extensively studied in the literature; see <cit.> and the references there. Hence, knowing the family of König-Egerváry graphs would lead us to complete the correct characterization of power and generalized power hypergraphs with equal domination and matching numbers. We next sketch a proof.
Let us denote by 𝒦E^k,s_1, for k≥ 3 and 1 ≤ s < k/2, the family consisting of generalized power hypergraphs obtained from König-Egerváry graphs. For any H^k,s∈𝒦E^k,s_1, by Proposition <ref>, we get ν(H^k,s) =ν(H)=τ(H)=γ(H^k,s). Conversely, if H^k,s is a generalized power hypergraph obtained from a graph H and such that ν(H^k,s) =γ(H^k,s), then by Proposition <ref>, τ(H^k,s) =γ(H^k,s) and ν(H)=τ(H) hold and, consequently, we reach H^k,s∈𝒦E^k,s_1.

Different characterizations of König-Egerváry graphs are given in terms of forbidden subgraphs (<cit.> and references therein). Unfortunately, we have not found a description of the family of König-Egerváry graphs analogous to that of the families 𝒢_1 and 𝒢_≥ 2.

§.§ The case s = k/2

The family ℋ^k,k/2_1 contains the generalized power hypergraphs obtained from connected graphs of the family 𝒢_1∪𝒢_≥ 2. The family of graphs 𝒢_≥ 2 was defined by Randerath and Volkmann in <cit.>, and it was depicted in <cit.> as the following nine graphs. On the other hand, the family of graphs 𝒢_1 was defined by Kano et al. in <cit.> by using some terminology and notation which we recall here. The minimum degree of a graph G is denoted by δ(G). End(G) denotes the set of end-vertices (i.e., vertices of degree one) of G. An edge incident with an end-vertex is called a pendant edge. A vertex adjacent to an end-vertex is called a stem, and Stem(G) denotes the set of stems of G. A graph with a single vertex is called a trivial graph. The corona G ∘ K_1 of a graph G is the graph obtained from G by adding a pendant edge to each vertex of G. A connected graph G of order at least three is called a generalized corona if V(G) = End(G) ∪ Stem(G). Then the family 𝒢_1 consists of the graphs G such that G is the complete graph K_2 or a generalized corona, or each component G_j, j ≥ 1, of G∖(End(G) ∪ Stem(G)) satisfies one of the following:
* G_j is a trivial graph.
* G_j is a connected bipartite graph with bipartition V_1 and V_2, where 1 ≤| V_1| < | V_2|. Let U_G_j = V(G_j)∩ N_G(Stem(G)). Then ∅≠ U_G_j⊆ V_2 and for any two distinct vertices x_1, x_2 ∈ V_1 that are adjacent to a common vertex of V_2, there exist two distinct vertices y_1,y_2 ∈ V_2 ∖ U_G_j such that N_G_j(y_i)={x_1,x_2 }, i ∈{1,2 }.
* G_j is a graph isomorphic to (f), (g), (h) or (i) shown in the preceding figure, and γ(G_j ∖ V_1)=γ(G_j) for all ∅≠ V_1 ⊆ U_G_j⊂ V(G_j), where U_G_j = V(G_j)∩ N_G(Stem(G)).

The following example refutes Theorem 3.4 in Dong et al., <cit.>, for s=k/2. Let us consider the graph G=K_2,n, n≥ 2. Then γ(G^k,k/2)=2 = ν(G^k,k/2). However, it is easy to check that G does not belong to 𝒢_1∪𝒢_≥ 2.

An analysis of the proof of Theorem 3.4 in <cit.> for the case s = k/2 (observe that the case s = k/2 does not appear in Theorem 2.2 in <cit.>) shows that the necessary condition is based on the necessary conditions of Lemmas 3.2 and 3.3 in <cit.>. However, Lemma 3.2 in <cit.>, as stated, is quite misleading. We include here the complete characterization of graphs with equal matching and domination numbers, summarized from <cit.>.

(<cit.>) Let G be a connected graph with δ(G)=1. Then γ(G)=ν(G) if and only if G∈𝒢_1.

(<cit.>) Let G be a connected non-bipartite graph with δ(G)≥ 2. Then γ(G)=ν(G) if and only if G∈𝒢_≥ 2.

(<cit.>) Let G be a connected bipartite graph with δ(G)≥ 2 and bipartition of the vertex set X∪ Y with |X|≤ |Y|.
Then γ(G)=ν(G) if and only if G possesses the following properties:
* ν(G)=γ(G)=|X|;
* for any two distinct vertices x_1, x_2 ∈ X that are adjacent to a common vertex of Y, there exist two distinct vertices y_1, y_2 ∈ Y such that y_i is adjacent precisely to {x_1, x_2}, for i=1,2.

As a consequence, by using Proposition <ref>, the characterization of generalized power hypergraphs H^k,k/2 with equal domination and matching numbers is immediately deduced.

Acknowledgements. The authors are very thankful to Dr. Antonio Quintero Toscano for his helpful comments while preparing this note.

Bonomo, F., Dourado, M. C., Durán, G., Faria, L., Grippo, L. N. and Safe, M. D. Forbidden subgraphs and the König-Egerváry property. Discrete Applied Mathematics, 161 (2013), 2380–2388.
Dong, Y., Sohn, M. Y., Liang, Z. Domination and matching in power and generalized power hypergraphs. Journal of Combinatorial Optimization, 39 (2020), 425–436.
Kano, M., Wu, Y. and Yu, Q. Star-Uniform Graphs. Graphs and Combinatorics, 26 (2010), 383–394.
Khan, M., Fan, Y. On the spectral radius of a class of non-odd-bipartite even uniform hypergraphs. Linear Algebra and its Applications, 480 (2015), 93–106.
König, D. Graphen und Matrizen. Matematikai és Fizikai Lapok, 38 (1931), 116–119.
Randerath, B., Volkmann, L. Characterization of graphs with equal domination and matching number. Utilitas Mathematica, 55 (1999), 65–72.
 | http://arxiv.org/abs/2310.19824v1 | {
"authors": [
"María José Chávez de Diego",
"Pablo Montero Moreno",
"María Trinidad Villar-Liñán"
],
"categories": [
"math.CO",
"05C65, 05C69, 05C70"
],
"primary_category": "math.CO",
"published": "20231027183308",
"title": "Note on power hypergraphs with equal domination and matching numbers"
} |
The [O III] λλ4960,5008 doublet lines are often the strongest narrow emission lines in starburst galaxies and quasi-stellar objects (QSOs), and thus are a promising probe of possible variation of the fine-structure constant α over cosmic time. Previous such studies using QSOs' optical spectra were limited to z<1. In this work, we constructed a sample of 40 spectra of Lyα emitting galaxies (LAEs) and a sample of 46 spectra of QSOs at 1.09<z<3.73 using publicly available VLT/X-Shooter near-infrared spectra. We measured the wavelength ratios of the two components of the spin-orbit doublet and accordingly calculated α(z) using two methods. Analysis of all of the 86 spectra yielded Δα/α=(-3±6)×10^-5 with respect to the laboratory α measurements, consistent with no variation over the explored time interval. Assuming a uniform variation rate, we obtained α^-1 dα/ dt = (-3±6)×10^-15 yr^-1 within the last 12 Gyr. Extensive tests indicate that α variation could be better constrained using starburst galaxies' spectra than using QSO spectra in future studies.

§ INTRODUCTION

<cit.> proposed that fundamental physical constants may change with the evolution of the universe. The fine structure constant α is defined as α = e^2/(4πε_0 ħ c), where e is the electron charge, ε_0 is the permittivity of free space, ħ is the Planck constant, and c is the speed of light. It is a fundamental physical constant characterizing the strength of the electromagnetic interaction between elementary charged particles. It also quantifies the gap in the fine structure of atomic spectral lines. This gap is proportional to the energy of the main level by a factor of α^2 <cit.>. Thus, variation of α over time can be directly constrained by comparing the wavelengths of the fine-structure splitting of atomic lines from another epoch with those measured today.

Possible variation of α over time has been studied with the help of measurements from laboratories, geology and astronomy <cit.>. Among these, the studies involving astronomical spectra can probe possible variation of α over the longest lookback times and the greatest spatial spans, in which case a small variation of α with spacetime might accumulate to a measurable level.

Observational studies using astronomical spectroscopy began in the 1950s <cit.>. From the 1960s, the doublet lines of [O III] λλ4960, 5008 in the spectra of galaxies and QSOs have been used to study the variation of α over time. <cit.> analyzed the doublet in five QSOs at 0.17<z<0.26 and found no variation in α, as they measured Δα/α=α(z)/α(0)-1=(1±2)×10^-3, where α(z) and α(0) are the α values at redshift z and in the laboratory. The doublet method was sidelined for decades and returned in the 2000s, when the SDSS project produced a large sample of QSO spectra with resolution R≈2000. The final precision can be greatly improved by averaging the results of the measurements from a large number of QSOs. <cit.> measured Δα/α=(0.7±1.4)×10^-4 using the doublet in spectra of 42 QSOs at 0.16<z<0.80 from the SDSS Early Data Release.
<cit.> measured =(2.4±2.5)×10^-5 using 1568 QSOs at z<0.8 from SDSS Data Release 6. <cit.> measured =(-2.1±1.6)×10^-5 using 2347 QSOs at 0.02<z<0.74 from SDSS Data Release 7. <cit.> measured =(0.9±1.8)×10^-5 using 13,175 QSOs at z<1 from SDSS Data Release 12. In addition to the doublet, other doublet emission lines had been attempted, such as [Ne III] λλ3870,3969, to study the α variation at 1<z<1.5. However, <cit.> demonstrated that the accuracy is limited to be worse than 10^-3 due to systematic errors caused by the contamination from Hϵ to [Ne III] λ3969.At z>1, when the doublet is redshifted beyond the wavelength ranges of optical spectrometers, the studies on the variation of α using astronomical spectroscopy can be carried out using alkali doublet absorption lines in the ultraviolet spectra of QSOs <cit.>. With a principle similar to the doublet method, measurements ofinvolving this alkali-doublet method have reached a precision of ∼10^-5, slightly better than the doublet method and did not detect variation in α either.The most precise astronomical limits on α variation till now were achieved using many-multiplet method <cit.>. In this method, the wave number of a spectra line, in producing which the fine-structure splitting of an energy level is involved, can be expressed as ω_z=ω_0+q_1 x+q_2 y <cit.>, where ω_z and ω_0 are the wave numbers in vacuum at redshift z and at today, respectively, and x=(α_z/α_0)^2-1, y=(α_z/α_0)^4-1. The parameters q_1 and q_2 can be computed using relativistic many-body perturbation and experimental data, while their values for light elements and heavy elements can differ by a factor of tens to thousands. This great diversity of the values of q_1 and q_2 can amplify possible variation of α and then reveal it in the differences between the wave numbers of the spectral lines of different atoms and ions. Multiple absorption lines in the damped Lyα absorption systems in QSOs' spectra, including Mg II, Fe II, and others, were used. In practice, the absorption lines generally have complex profiles, and therefore, a fitting technique involving many Voigt profiles is used. All absorption lines in an absorption system are fitted simultaneously, in which every line is decomposed into multiple Voigt components, and the number of free parameters is minimized by linking the physically related absorption components of different lines. Some early works claimed to have found evidence of significant differences between the past and present values of α <cit.>. However, subsequent works <cit.> indicated that the early results were affected by wavelength distortion. After taking the wavelength distortion into account, later works, including those using QSO spectra with resolutions of 40,000∼60,000 aiming at several to hundreds of absorbers at 0.2<z<4 <cit.> and those using spectra of bright QSO HE 0515-4414 with resolutions up to 145,000 <cit.>, measuredwith precisions of (0.8∼4)×10^-6 and did not detect variation in α over time. 
Recently, <cit.> applied the many-multiple method to the absorption spectra of neighbouring stars within 50 pc, findingconsistent with 0 with a precision of 1.2×10^-8.Though having archived great precision, possible problems lurking in works using the many-multiplet method have been pointed out <cit.>, which may emerge from the techniques for correcting wavelength distortion <cit.>, the physically assumed linking that the Voigt fitting technique relies on <cit.>, the technical details of the fitting process <cit.>, the unconsidered systematic errors <cit.>, and other. These problems can lead to biases in the best-fittingvalues or to underestimations of their errors, and some of the problems lack clear solutions despite much effort. Anyhow, the results of the many-multiplet method need to be tested with entirely different methods. The doublet method relies on fewer assumptions and suffers from fewer systematic errors. No assumptions on chemical composition, ionization state, and distribution of energy levels are required, because the doublet lines originate in the downward transitions from the same upper level of the same ion. Also, there is no need to decompose one emission line into multiple components by assuming that it originates from multiple clouds, as was usually the routine in the study of absorption lines, for the doublet lines must have the same profile. The doublet method is more tolerant of the wavelength distortion because of the small wavelength range used in the measurement (doublet line interval 47.9(1+z) Å). At present, although the precision of themeasured by the doublet method is much lower than that measured by the many-multiplet method, improvement in the future can be achieved by using more spectra of QSOs and starburst galaxies with better quality. In addition, current measurements based on absorption lines were limited to z<7.1 <cit.>. Studying the α variation using absorption lines at higher redshifts is challenging because QSOs are extremely rare in the early universe, and the normal galaxies' continua are too weak. Fortunately, many starburst galaxies have been discovered in the early universe, and their doublet or other doublet emission lines can be used.In studies of α variation using the doublet method, works with large sample sizes (N>20) have used only spectra of QSOs, not starburst galaxies. This may be because QSOs have higher luminosities, or it may be related to the observational strategy of the SDSS project. In reality, starburst galaxies far outnumber QSOs in the universe. They have narrower emission lines than QSOs, which is advantageous for improving the accuracy ofmeasurement and may compensate for their disadvantage of lower luminosity. If this can be proved, the number of available spectra could be significantly increased by including starburst galaxies in studies of α variation, and the precision of the final results improved. In addition, previous works only used optical spectra (wavelength <1 μm), and hence themeasurements were limited to z<1. Applying this method to z>1 demands infrared spectroscopy.The XShooter<cit.> is an intermediate-resolution echelle spectrometer mounted on UT2 of the Very Large Telescope (VLT) since 2009. Entering celestial radiation is split into three arms optimized for the UVB,VIS and NIR wavelength ranges by dichroic mirrors. Each arm has an echelle and a detector, and the wavelength ranges of the UVB, VIS and NIR arms are 299–556, 534–1020 and 994–2478 nm, respectively. 
By splicing the spectra from the three arms, a continuous wavelength coverage from 300 to 2470 nm can be achieved. The XShooter can work under the long-slit or the integral field unit modes. For the long-slit mode, slits with widths of 0.4 to 1.5 are available in the NIR arm. The slits with widths of 0.6, 0.9 and 1.2are often used, producing NIR spectra in resolutions of 8030, 5570 and 4290, respectively. After an observation finishes on the Xshooter, the raw data will be processed automatically by the pipeline <cit.>, giving the extracted spectra of the targets. Overall, the XShooter has a unique wavelength coverage of up to 2.47μm with a moderate spectra resolution, and it is highly sensitive due to the large aperture of the VLT and the high efficiency of the spectrometer, not to mention the pipeline makes accurate and reliable wavelength calibration. Thus, spectra taken by the XShooter are suitable for studies of the variation of α via the doublet method at z > 1.In this work, we measured the α variation by the doublet method using XShooter spectra of a sample of Lyman-α emitters (LAEs) and QSOs. LAEs are distant galaxies selected by their strong Lyman-α emission lines, among which most are starburst galaxies, and few are obscured active galactic nuclei. We achieved a measurement at 1.1<z<3.7 for the first time with this method. In addition, we found that α variation can be better constrained by using LAE spectra than QSO spectra. Throughout the paper, we assume a ΛCDM cosmology and use the cosmological parameters obtained by <cit.>, which are H_0=69.6 km s^-1 Mpc^-1, Ω_M=0.286, and Ω_Λ=0.714. § DATA REDUCTION AND SAMPLE SELECTION§.§ Data reductionWe collected information on the normal, the large, the small, the ToO and the DDT XShooter programs in period 84 - period 104 between 2009 and 2020. Projects related to LAEs and QSOs at 1<z<4 and observed in the long-slit mode were selected. The redshifts of the included targets were then determined roughly by inspecting their optical/NIR spectra visually. Although these rough redshifts were only used for selecting targets, their accuracy was guaranteed later by the more precise redshifts obtained in further analysis by fitting the spectra. The uncertainties of the rough redshifts are on the 0.01 level, which is sufficient for selecting reliable targets at this stage. Targets with redshifts between 1.07 and 3.77 were selected, ensuring the NIR spectra cover the wavelength range of 4800–5200 Å in the rest frame.We obtained the two-dimensional (2D) and one-dimensional (1D) NIR spectral data of the selected targets from the ESO database[The spectral data products query form, http://archive.eso.org/wdb/wdb/adp/phase3_spectral/form ], which had been processed by the XShooter pipeline (version is related to the observation time). These data are referred to as pip-2D and pip-1D hereafter. The pip-2D data underwent processes including wavelength calibration, CCD cleaning, target tracking and straightening, combining different exposures, removing cosmic rays, and merging spectra from different orders. They had also been interpolated to a wavelength grid with a fixed interval of 0.6 Å. Based on the pip-2D data, the pip-1D data underwent spectral extraction and flux calibration. Instead of using the pip-1D data directly, we used the pip-2D data to extract the final 1D spectrum. 
In this section, we briefly introduce the typical extraction approach and display more details and the treatment of some exceptional cases in Appendix A.Dither mode was used in most observations, and under this mode, a target will experience several consecutive exposures. The pipeline combines the data from the group of multiple exposures of a target, leaving three parallel images of the target in its pip-2D data, in which one image in the middle has positive flux, and the other two have negative fluxes. The extraction of such pip-2D data of a target is as follows. First, we determined the aperture for extraction by fitting the brightness profiles of the three images at the spatial direction of the 2D data. Second, we extracted the spectra of the three images and combined them. Third, we made flux calibration and telluric correction for the combined spectrum. Finally, we identified the unreliable pixels and assigned marking masks for them. Pixels seriously affected by sky emission lines (SELs) or telluric absorption lines (TALs) were masked in this step, and Figure 1 shows the examples. In addition, if a target has more than one group of exposures in a project and hence several spectra were extracted, we would combine these extracted spectra into one.We extracted 95 LAE and 601 QSO spectra (including obscured QSOs). Note that if one target was observed in different projects, we extracted a spectrum for each project. §.§ Spectral AnalysisAimed to build a sample with spectra that have strong and clean doublet lines, we fit the doublet lines and their adjacent regions in each spectrum for the present. The fitting model includes one component for the continuum and one for the doublet. The former component is a linear function for an LAE and a second-order polynomial for a QSO. We did not set a separate component for the Hβ broad emission line in a QSO spectrum because it has been included in the second-order polynomial. For the latter component, a sum of multiple Gaussian components was used for each line. The λ4960 and λ5008 lines contain the same number of Gaussian components, and for each Gaussian component, we set:{[ w_i,5008=w_i,4960×(1+η); σ_i,5008=σ_i,4960×(1+η);f_i,5008=f_i,4960× A ].where w_i,5008, σ_i,5008 and f_i,5008 are the central wavelength, standard deviation and flux of the ith Gaussian component of the λ5008 line, and those with a subscript 4960 are the corresponding parameters of the λ4960 line. The definitions of η and A followed <cit.>, where 1+η is the ratio of the wavelengths of the doublet, and A is the ratio of the fluxes of the doublet. We fixed η as the theoretical value η_0 in this stage to distinguish the doublet lines when blended or avoid misidentification of the λ4960 line when weak. We adopted the vacuum wavelengths of 4960.295 and 5008.240 Å for the doublet, which were taken from the NIST Atomic Spectra Database[https://www.nist.gov/pml/atomic-spectra-database], and hence η_0=0.00966576. For spectra with >10σ detections of both doublet lines, fixing η to η_0 generally leads to an increase of the signal-to-noise ratios (SNR) of the λ4960 by <20%, and an increase of that of the λ5008 by <5%. Here an SNR was calculated as the flux of a line divided by its error.The rest-wavelength ranges for fitting are different for LAEs and QSOs. In most cases, the rest-wavelength range for LAEs is 4900–5100 Å and 4900–5140 Å for QSOs. 
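For concreteness, the tied doublet model of equation (2) can be written down as a short function. The sketch below uses Python/NumPy; scipy.optimize.curve_fit is used purely for illustration of how such a model could be fit, and all function and variable names are ours rather than the actual implementation.

```python
# Schematic of the model of equation (2): every Gaussian component of [O III]4960
# has a partner for [O III]5008 whose centre and width are scaled by (1+eta) and
# whose flux is scaled by A. In Section 2.2 eta is held fixed at the laboratory
# value; here it simply enters as a keyword with that default.
import numpy as np
from scipy.optimize import curve_fit

ETA0 = 0.00966576  # laboratory wavelength ratio of the doublet minus one

def gaussian(lam, centre, sigma, flux):
    return flux / (np.sqrt(2 * np.pi) * sigma) * np.exp(-0.5 * ((lam - centre) / sigma) ** 2)

def doublet_model(lam, a0, a1, A, *comp, eta=ETA0):
    """Linear continuum plus n tied Gaussian pairs; comp = (w1, s1, f1, w2, s2, f2, ...)."""
    model = a0 + a1 * lam
    for w, s, f in zip(comp[0::3], comp[1::3], comp[2::3]):
        model += gaussian(lam, w, s, f)                              # [O III]4960 component
        model += gaussian(lam, w * (1 + eta), s * (1 + eta), A * f)  # tied [O III]5008 partner
    return model

# Illustrative fit of one tied pair to a rest-frame spectrum (lam, flux, err):
# popt, pcov = curve_fit(doublet_model, lam, flux, sigma=err,
#                        p0=[0.0, 0.0, 3.0, 4960.3, 2.0, 1.0])
```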
When fitting some QSOs' spectra, the result contains Gaussian components with FWHMs of several 10^4 km s^-1, which lack astronomical interpretations and are actually caused by a miscalculation of the continuum. In these cases, we adjusted the red end of the range to 5160, 5180 or 5200 Å to define the continuum component better. For QSOs with broad and blueshifted emission lines, such as the obscured QSOs with strong outflows in the sample of <cit.>, we adjusted the blue end of the range to 4880 Å to decompose the continuum and the λ4960 line effectively. For any spectrum, pixels with masks were excluded from the fitting.We fit the spectra by minimizing χ^2. First we tried one Gaussian component for each line, and then gradually added more Gaussian components. One component more will lower the χ^2 and the degree of freedom by 3 at the same time. Supposing the fitting were improved after adding a component, we calculated the confidence probability using the F test:F = Δχ^2/χ^2/Δ dof/dof ,where χ^2 and dof are the chi-square and the degree of freedom after adding the component, Δχ^2 and Δ dof are the decreased amounts, and Δ dof is 3. We calculated the probability p(F, dof, Δ dof) corresponding to this F value. If p is greater than 99.99%, we would add this component, and vice versa. This threshold of p corresponds to Δχ^2 of 20–30 for most spectra (dof is 1000–1500). Figure 2 shows an example of how we gradually added the Gaussian components. All the doublets in the spectra can be fitted with no more than 4 components, while when 4 components have been used, adding one more only improves the fitting slightly but raise difficulty in finding the best solution.<cit.> indicated that the SDSS pipeline overestimated the errors for lower flux levels(see their section 4.4). This situation also happened to the Xshooter pipeline. Therefore we corrected the error following <cit.>. If the fitting's reduced chi-square (χ^2/dof) is less than 1 when the original error is used, we multiplied the error of all pixels by a correction factor (equal to the square root of the reduced chi-square) and redid the fit. We corrected the error for 43% of the spectra, and the correction factors in our final sample are 0.6–1.0. This correction decreases the errors of model parameters and increases the SNRs of doublet lines. After correction, the SNRs are more consistent with visual estimates (e.g., 3–5 for a weak detection or >7 for a clear detection).§.§ Sample Selection Our criteria for sample selection include the following aspects.First, we required the SNRs of both the λ4960 and λ5008 lines to be above 10.Second, we required that the doublet in a spectrum hardly bears any SELs, TALs, cosmic rays or bad CCD pixels. Although the pixels affected by these factors had been masked in Section 2.1 and were excluded from spectral analysis, these factors can still cause problems because automated programs may fail to mask all affected pixels when the effects are severe. In addition, for an emission line with half the pixels masked, including those near the peak, setting η as a free parameter (as we will do in Section 3) may lead to mistaken identification of the peak. We calculated a mask index I_ mask for each line to describe these effects quantitatively. As shown in Figure 1, this index was calculated as the sum of the fluxes of the masked pixels (grey region) divided by the total flux of the line. 
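The mask index just described is straightforward to compute; a minimal sketch (array names ours, purely illustrative) is:

```python
# Fraction of an emission line's flux falling on masked pixels
# (sky emission lines, telluric absorption, cosmic rays, bad CCD pixels).
import numpy as np

def mask_index(flux, mask, line_window):
    """flux: continuum-subtracted spectrum; mask: True for unreliable pixels;
    line_window: boolean array selecting the pixels of one doublet line."""
    total = np.sum(flux[line_window])
    masked = np.sum(flux[line_window & mask])
    return masked / total
```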
After visual inspections of the effects mentioned above, we required that the sum of I_ mask of λ4960 and λ5008 lines be below 0.5. The LAE spectra excluded by this criterion were mainly due to SELs, while the QSO spectra excluded were mainly due to TALs.Third, we excluded QSOs with doublet lines severely blended. If so, the decomposition of the doublet may be wrong by setting A as free parameters if that by fixing A at theoretical value treated as correct. A blending index I_ blend was calculated to quantify the blending, as is shown in Figure 3(b). On the multiple Gaussian models of the doublet lines, we measured the flux of the highest point near 4960 Å as f_1 and the flux of the lowest point in the range between 4960–5008 Å as f_2. I_ blend was set to be the ratio between f_2 and f_1. According to visual inspections, to ensure the doublet lines be properly decomposed, we required that I_ blend is below 0.2. The QSO spectra excluded by this criterion all have broad and blueshifted emission lines. The decomposition by fixing A at theoretical value show that the FWHMs of these excluded QSOs are 800–4500 km s^-1, significantly greater than the FWHMs of the final sample, which has a median value of 610 km s^-1.Finally, we excluded QSOs with strong Fe II bumps. The Fe II bump between 4900 and 5050 Å may lead to a severe systematic error when fitting the continuum with a simple second-order polynomial. To measure the intensity of the Fe II bump, we calculated a Fe II index I_ FeII as follows. As shown in Figure 3(a), we measured I_4590 and I_5250, which represent the intensity of the Fe II bumps around 4590 Å (Fe II λ4590) and around 5100–5400 Å (Fe II λ5250). We inferred the continuum below the Fe II bumps using the spectra in the green shade in Figure 3(a): the wavelength ranges for fitting the continuum are 4420-4460 Å and 4720–4760 Å for Fe II λ4590, and 5060–5100 Å and 5400–5440 Å for Fe II λ5250; the model used was a power law; the best-fitting model for the continuum is expressed as f_ con(λ), as is shown by the blue line in Figure 3(a). We fit the continuum-subtracted spectra in 4460–4720 Å and 5100–5400 Å using cubic spline functions whose nodes have a uniform interval of 20 Å, and the fitting result is expressed as f_ FeII(λ). Hence we calculated:I_4590 = ∫_4460^4720 f_ FeII(λ)dλ/ f_ con(4590 Å ) ,I_5250 = ∫_5100^5400 f_ FeII(λ)dλ/ f_ con(5250 Å )Note that for most QSOs, only one of I_4590 and I_5250 can be reliably measured because of the telluric absorption bands. For uniformity, if I_5250 can be reliably measured, then we set I_ FeII=I_5250. And if not, we set I_ FeII=1.3× I_4590. The coefficient 1.3 here was chosen to be the ratio between I_5250 (16 Å) and I_4590 (12 Å) measured on the Xshooter QSO composite spectrum <cit.>. We excluded QSOs that meets the following criteria: I_ FeII>10 Å and I_ FeII>0.5× EW_ [OIII], where EW_ [OIII] is the equivalent width of λ5008.The final sample passing these criteria contains 86 spectra, including 40 spectra of 32 LAEs and 46 spectra of 45 QSOs. The observing information of these 86 spectra is listed in Table 1 (the LAE subsample) and Table 2 (the QSO subsample). Six of the LAE spectra come from a galaxy at z=2.37 named “Sunburst arc” <cit.>, which is highly magnified by a gravitational lens (magnification >20). We refer to these six spectra as SA1 to SA6. LAE J0332-2746 has three spectra, J0217-0502 has two spectra, and each of the other 29 LAEs has only one spectrum. 
QSO J1313-2716 has two spectra, and each of the other 44 QSOs has one spectrum.For each spectrum, we measured the peak wavelength of the λ5008 line and calculated a redshift as the approximation of the systematic redshift. For typical QSOs, this would cause an error of <0.001 (<300 km s^-1) except for a small fraction of QSOs called“blue outliers” <cit.>, but those blue outliers generally have blended doublet and would not meet our criteria and hence would not be included in our sample. We also measured the luminosity of λ5008 in each spectrum. The statistical errors of the luminosities are <6% , corresponding to <0.03 dex in logarithm. Changes in the slit width, the seeing condition and the atmospheric transmittance can cause systematic errors in the luminosities, and we estimated them to be 0.1–0.2 dex based on the multiple observations of some targets. These redshifts and luminosities of the spectra are listed in Table 1 and Table 2 and are displayed in Figure 4. We found that all the targets are distributed in three redshift intervals, 1.09<z<1.66, 2.01<z<2.57, and 3.21<z<3.74, corresponding to the observed wavelengths of the J, H and K bands respectively. Most of the LAEs are in the redshift range corresponding to the H-band, and this may be caused by the methods with which the LAEs were selected. The luminosities of the QSOs are in the range between 10^42.8 and 10^44.8 erg s^-1, and have a median value of 10^43.7 erg s^-1. The luminosities of the SA spectra are in the range of 10^43.6-44.4 erg s^-1, and those of other LAEs are in the range between 10^42.1 and 10^43.2 erg s^-1, and have a median value of 10^42.7 erg s^-1, which is one order of magnitude lower than that of QSOs.We measured the FWHMs of the doublet lines in every spectrum of the final sample and showed them in Figure 5 with the SNR of λ4960 (SNR_4960). These two parameters are critical indicators for the measuring precision of the wavelengths (see Section 3.3 and Appendix D for analysis and simulation). In brief, the larger the SNR_4960 or the smaller the FWHM, the more accurate the measurement of the wavelengths. The SNR_4960 of the six SA spectra is 60–200, that of other LAE spectra is 10–50, and that of the QSO spectra is 10–460. The median values of the SNR_4960 of the LAE and the QSO subsamples are both around 20. The FWHM of of the LAEs is 90–490 km s^-1 with a median value of 130 km s^-1, and that of the QSOs is 270–1260 km s^-1 with a median value of 610 km s^-1. The emission lines of the LAEs are narrower and less luminous than those of the QSOs, while the SNRs of the two subsamples are similar.§ MEASUREMENTS OF EMISSION-LINE WAVELENGTHS Following <cit.>, we calculated the variation in the fine structure constant as:Δα/α = 1/2{[(λ_2-λ_1)/(λ_2+λ_1)]_z/[(λ_2-λ_1)/(λ_2+λ_1)]_0 -1},where λ_1 and λ_2 are the wavelengths of λ4960 and λ5008 lines, respectively. Those with a subscript of 0 are the wavelength values at z=0 from laboratories, and those with a subscript of z are the observed values of the doublet from a target's spectrum at a redshift of z. Equation (5) can be rewritten as:Δα/α = η-η_0 /η_0(2+η) ,where η≡ (λ_2/λ_1-1)_z, η_0 ≡ (λ_2/λ_1-1)_0=0.00966576. The Taylor expansion of equation (6) around η_0 gives:Δα/α = 51.4802(η-η_0) - 25.6163(η-η_0)^2 + o( (η-η_0)^2 ). We will show later that for our sample, |η-η_0| is less than 3×10^-4. In this case, ignoring the quadratic and the high-order terms only results in a relative error of Δα/α no more than ∼10^-4. 
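The coefficients of this expansion, and the size of the neglected quadratic term, can be checked directly from equation (6); a short numerical verification (variable names ours) is:

```python
# Quick check of the linearization of equation (6) around eta_0.
import numpy as np

eta0 = 0.00966576
c1 = 1.0 / (eta0 * (2.0 + eta0))   # linear coefficient   -> 51.4802...
c2 = c1 / (2.0 + eta0)             # quadratic coefficient -> 25.616... (enters with a minus sign)
print(c1, c2)

# Worst case quoted in the text: |eta - eta0| = 3e-4
d = 3e-4
exact = d / (eta0 * (2.0 + eta0 + d))   # equation (6) evaluated exactly
linear = c1 * d                         # the linear approximation
print(abs(linear - exact) / exact)      # ~1.5e-4, i.e. of order 1e-4
```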
Thus, we calculated the change in the fine structure constant and estimated its error with:Δα/α≈ 51.4802(η-η_0).In this way, the measurement ofcan be converted into the measurement of η. §.§ Two Methods to Measure η We measured η using the following two methods. The first is called the Multiple-Gaussian (MG) method. The λ4960, λ5008 lines were fitted simultaneously with the model expressed as equation (2). The fitting is basically the same as that described in section 2.2, and the only difference is that η is now a free parameter instead of the fixed value of η_0. The wavelength ranges and the number of Gaussian functions are consistent with those used in section 2.2. We fit the spectra using a Levenberg-Marquardt technique with the MPFIT package. We set the initial parameters as those obtained in section 2.2. We took the values of η and A yielding the minimum χ^2 (expressed as χ_ min^2) as the measurement result and calculated the statistical errors σ_ stat(η) and σ_ stat(A) according to the range of η and A yielding χ^2(η,A) < χ_ min^2 + 1. An example of measuring η and A using the MG method is shown in the upper row of Figure 6.The second is called the Profile-Matching (PM) method. This method is based on the fact that the doublet originates from the same upper energy level and should have identical line profiles. The analysis approach is similar to that adopted by <cit.>. After subtracting the best-fitting continuum model from the observed spectrum, we obtained the profiles of λ5008 and λ4960 emission lines. We moved the λ5008 line leftward (λ divided by 1+η) and decreased the amplitude (f_λ divided by A/1+η). Theoretically, at some specific values of η and A, the adjusted λ5008 line should be the same with the λ4960 line, in which case they can be regarded as two samplings from one profile beneath. So for a pair of values of η and A, we fit the adjusted λ5008 line and the λ4960 line simultaneously with a cubic spline function, from which a χ^2 value was obtained. With different values of η and A yielding different values of χ^2, we obtained a χ^2 surface in the 2D-parameter space. The η and A yielding the minimum χ^2 were taken as the result, and their statistical errors were obtained using χ^2(η,A) < χ_ min^2 + 1. In the matching, we used the pixels in the following wavelength ranges: the range for the λ5008 is where the flux exceeds 20% of the peak flux (we use the flux obtained from the best-fitting Gaussian models to avoid random fluctuations), while the range for the λ4960 is the one of λ5008 divided by 1+η. The pixels with masks were excluded. When η changes, the wavelength range of the λ4960 line changes accordingly, and the degree of freedom may change when the λ4960 line contains masked pixels. If so, we slightly adjusted the wavelength range to keep the degree of freedom constant. An example of measuring η and A using the PM method is shown in the bottom row of Figure 6. For all the 86 spectra, the χ^2(η,A) surfaces obtained by the MG method look good: they are smooth, and their contours are close to ellipses in the vicinity of the minimum χ^2, as shown in Figure 6(c) and 6(d). This holds in the range of χ^2 < χ_ min^2 + 1 for all spectra and in the range of χ^2 < χ_ min^2 + 10 for most spectra. Therefore, the measurements on η, A and the corresponding statistical error are reliable. However, only a small fraction of χ^2 surfaces obtained by the PM method look good, and we showed two bad examples in Appendix B. 
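For reference, the PM scan can be condensed into a few lines. The following Python sketch assumes continuum-subtracted wavelength, flux and error arrays for the two lines; the spline knot spacing and other details are illustrative rather than the exact implementation used here.

```python
# Profile-matching chi-square for one trial (eta, A): map the [O III]5008 profile
# onto [O III]4960 and fit both samplings with a single cubic spline.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def pm_chi2(eta, A, lam4960, f4960, e4960, lam5008, f5008, e5008):
    lam_adj = lam5008 / (1.0 + eta)          # shift the 5008 wavelengths
    scale = A / (1.0 + eta)
    f_adj, e_adj = f5008 / scale, e5008 / scale
    lam = np.concatenate([lam4960, lam_adj])
    f = np.concatenate([f4960, f_adj])
    e = np.concatenate([e4960, e_adj])
    order = np.argsort(lam)
    lam, f, e = lam[order], f[order], e[order]
    # One common cubic spline for the two samplings of the underlying profile;
    # ~1 A knot spacing is an illustrative choice.
    knots = np.arange(lam[0] + 1.0, lam[-1] - 1.0, 1.0)
    spline = LSQUnivariateSpline(lam, f, knots, w=1.0 / e, k=3)
    return np.sum(((f - spline(lam)) / e) ** 2)

# chi2 = [[pm_chi2(eta, A, ...) for A in A_grid] for eta in eta_grid]
# The best (eta, A) minimises chi2; 1-sigma ranges follow from chi2 < chi2_min + 1.
```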
The PM method works well only for spectra with narrow doublet lines that have high SNR and almost bear no bad pixels. We visually checked the χ^2 surfaces and selected 16 LAE spectra and 1 QSO spectrum with which η can be reliably measured.The consistency between the two methods was checked by the measurements of η from the 17 spectra. The η values are showed in Figure 7(a), and their statistical errors σ_ stat(η) in Figure 7(b). The values of σ_ stat(η) obtained from the two methods differ by less than 12%. To check whether the η values were consistent, we calculated deviation Δ as:Δ=η( MG)-η( PM)/√(σ_ stat^2(η, MG)+σ_ stat^2(η, PM)).The distribution of Δ is shown in Figure 7(c). The average value is close to 0, and their absolute values are no more than 1.12. These indicate that the results from the two methods agree with each other. As the MG method works well for all the spectra in our sample, we adopted the measurements using it as the final result. The best-fitting Multiple-Gaussian models are displayed in Appendix C. The number of Gaussian functions, the best estimates and the statistical errors of η and A are listed in Table 3, 4 and 5. As the measurements of the six spectra of SA enormously contributed to the final results, we also list the estimates of η and A using the PM method in Table 3 for comparison.§.§ Uncertainties from the Wavelength Calibration The wavelength calibration for the XShooter NIR Arm has a systematic error of Δλ=0.04Å <cit.>. As η≡ (λ_2/λ_1-1), assuming that the wavelengths of λ4960 and λ5008 lines in the observer's frame both have a systematic error of Δλ=0.04 Å, we obtained:σ_ sys(η) = √(λ_1^2+λ_2^2)/λ_1^2Δλ= 1.146×10^-5/1+z.The total error σ(η) can be calculated as:σ(η) = √(σ_ stat^2(η) + σ_ sys^2(η))The added systematic errors will be on the order of 10^-6∼10^-5. We checked whether or not these systematic errors should be considered using 17 spectra yielding σ_ stat(η)<10^-5. If η values from the 17 spectra were not to suffer from the systematic errors and their current statistical errors were enough to be responsible for their scatter around η_0, the values of (η-η_0)/σ_ stat(η) would obey the standard normal distribution with the standard deviation around 1. However, using the actual measured η, we obtained the standard deviation of (η-η_0)/σ_ stat(η) to be 1.28, indicating that the probability of the current statistical errors being account for the scatter of η around η_0 to be only 4.7%. Therefore, the systematic errors caused by the uncertainties of raising from the wavelength calibration should be considered. After considering the systematic errors, the standard deviation becomes 1.04, consistent with the prediction of a standard normal distribution. Therefore, involving the above systematic errors is reasonable, and we list the systematic errors of η for all spectra in Tables 3, 4 and 5.§.§ Robustness of The Measurements Results from the final sample were checked statistically. We first checked if η correlated with A or not. The distribution of η and A measured from the 86 spectra is shown in Figure 8. The Pearson correlation coefficients of the LAE subsample, the QSO subsample and the whole sample are -0.13, -0.15 and -0.11, respectively, indicating no correlation. This is consistent with theoretical expectations, as <cit.> discussed. Using a bootstrap method described in Section 4.1, we calculated the average values of A of the LAE and the QSO subsamples, which are 3.03±0.03 and 3.05±0.06, respectively. 
These values are consistent with 2.99±0.02 measured by <cit.> and 2.96±0.02 measured by <cit.> within 3σ, and are also consistent with the theoretical value of 2.98 <cit.>.We then checked whether the measured σ_ stat(η) is related to the SNR and FWHM of the lines as expected. As can be seen in Figure 9, σ_ stat(η) is inversely related to SNR_4960 for similar FWHM and is positively related to FWHM for similar SNR_4960. We verified this finding through numerical simulations, as detailed in Appendix D. Numerical simulations give quantitative correlations, as expressed in formulas D1 and D2. In brief, σ_ stat(η) is roughly inversely proportional to SNR_4960 and positively proportional to FWHM. In Figure 9, we show the relation with FWHMs of 130 and 610 km s^-1, which are the median FWHM values of LAE and QSO subsamples, respectively. For both the subsamples, the measurements are in good agreement with the theoretical expectations with scatters less than 0.4 dex.We finally checked whether χ=(η-η_0)/σ(η) obeys the standard normal distribution or not, which should be the case if the true value of all η were η_0 and σ(η) could be account for the uncertainty in the measurement of η. Figure 10 shows the distribution of χ. The mean value and standard deviation of χ and the mean value of χ^2 are 0.11, 1.02 and 1.04, respectively. These values are consistent with the theoretical values with a sample size 86, which are 0±0.11, 1±0.08 and 1±0.15 (68.3% confidence level), respectively. We made a Kolmogorov-Smirnov (KS) test. We calculated a KS statistic value of 0.11, corresponding to a probability of 0.21, >0.05. Thus, we concluded that the distribution of χ obeys the standard normal distribution. § RESULTS OF VARIATION IN THE FINE STRUCTURE CONSTANT We converted all the measurements of η and their errors σ(η) into values and errors of Δα/α using equation (8). The results are listed in Tables 3, 4, and 5, and shown in Figure 11. None of the measurements with all 86 spectra shows a deviation of Δα/α from 0 by more than 3σ, and accuracies are between 2×10^-4 and 10^-2.According to equation (8), Δα/α is proportional to η-η_0. Hence χ=(η-η_0)/σ(η) defined in section 3.3 equals Δα/α/σ(Δα/α), and also represents the deviation of Δα/α from 0. In section 3.3, we have shown that χ for the 86 spectra obeys the standard normal distribution. This is consistent with the assumption that the truth value of Δα/α is 0, and the errors were estimated reasonably.§.§ Averages We calculated average values for the final sample and different subsamples using two methods. The first is the weighted average method (WM). For values x_i=(Δα/α)_i and errors σ_i, we calculated the weight as:w_i = 1/σ_i^2.Thus, the weighted average is:x̅_ WM = ∑_i w_i x_i/∑_i w_i,and its error is:σ(x̅)_ WM = √(∑_i (w_i σ_i)^2)/∑_i w_i.The second is the bootstrap method <cit.>. For a sample containing N elements, we randomly generated 10^5 fake bootstrap samples, each containing N elements by put-back sampling. We calculated the weighted averages of these bootstrap samples with equations (12) to (14). These 10^5 weighted averages typically obey a Gaussian distribution. We adopted the mean value and standard deviation of this Gaussian distribution x̅_ BS and σ(x̅)_ BS as the bootstrap average and its error. Both WM and BS methods have their advantages and disadvantages. On the one hand, when the errors σ_i are estimated inaccurately, the error of average value by the BS method is more accurate. 
For example, suppose all the errors are overestimated or underestimated by a constant factor; the error by the BS method will not change, but the error by the WM method will be overestimated or underestimated correspondingly. On the other hand, when the sample size N is small or the errors σ_i vary significantly from one another, the weighted averages of bootstrap samples in the BS approach may not obey Gaussian distribution. In this case, the results of the WM method are more reliable. For each sample or subsample, we calculated the average value of Δα/α and its error using both methods, and the results are listed in Table 6.The average of Δα/α of all the 86 spectra is (-3±6)×10^-5 and (-3±5)×10^-5 using WM and BS methods, respectively. The two values are consistent and indicate no deviation between Δα/α and 0 in the 6×10^-5 error level. We calculated the averages of the LAE and QSO sub-samples and further divided the former into SA and other LAE sub-samples and calculated the averages. For each subsample, the averages by both two methods agree with 0.We found that for the SA subsample, the error of average by the BS and WM methods have a great difference, while for other subsamples, the difference is slight. This may be because the SA subsample has a small size N. Because we have shown that the errors σ(Δα/α) are estimated reasonably, we accept the results by the WM method as the final results.The above results are based on η measured by the MG method. As described in section 3.1, there are 17 spectra with which η can be measured by the PM method, including 6 spectra of SA. The averages of Δα/α measured by the PM method are shown in Table 6, which are also consistent with 0. This indicates that the coincidence of Δα/α and 0 does not depend on how we measured η.Due to telluric absorption bands, the whole sample can be naturally divided into three subsamples in three redshift ranges. The averages of Δα/α of the three subsamples, listed in Table 6, are all consistent with 0. The errors of these averages are 3×10^-4, 6×10^-5, and 5×10^-4, respectively. Because the results obtained from the spectra of SA (z=2.37) have high precision, one may worry that the average value in the redshift range of 2.01<z<2.57 is greatly affected by SA and cannot be treated as the representative value of this redshift range. Thus, we calculated the average of Δα/α obtained from other 54 spectra in this redshift range, and the result agrees with 0 with an error of 9×10^-5.§.§ Variation over Time We do not find any variance of Δα/α over time because it agrees with 0 in all the redshift ranges. We limited the rate at which α may vary, which can be used to constrain cosmological models further.The redshifts of LAEs and QSOs in our sample are in the range of 1.097–3.735, corresponding to look-back time (t_ LB) of 8.2–12.0 billion years, or the age of the universe of 1.7–5.5 billion years. We assumed that α varies uniformly over time from then to now:Δα/α = k t_ LB.The k here means the same as α^-1 dα/ dt in <cit.>. We fit the Δα/α measurements of all the 86 spectra, and obtained k=(-3±6)×10^-15 yr^-1.We then included Δα/α measured by <cit.> using SDSS QSO spectra (see their Table 3) into the rate limitation. Note that we did not use the data in the redshift bin 0.580–0.625 in their results because it may be seriously affected by the SELs. If only using measurements at z<1, the result is k=(2±5)×10^-15 yr^-1. And by combining the measurements at z<1 and our measurements at 1.0<z<3.8, we obtained k=(0±4)×10^-15 yr^-1. 
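One natural way to perform such a fit is an inverse-variance weighted least-squares slope through the origin; whether this matches the exact estimator used here is our assumption, and the sketch below is illustrative only.

```python
# Weighted least-squares fit of dalpha/alpha = k * t_LB through the origin.
import numpy as np

def fit_rate(t_lb, dalpha, sigma):
    """t_lb in yr; dalpha = measured delta(alpha)/alpha; sigma = their errors."""
    w = 1.0 / sigma**2
    k = np.sum(w * t_lb * dalpha) / np.sum(w * t_lb**2)
    k_err = 1.0 / np.sqrt(np.sum(w * t_lb**2))
    return k, k_err

# Example with placeholder numbers (not the paper's data):
# k, k_err = fit_rate(np.array([8.2e9, 12.0e9]), np.array([1e-4, -2e-4]), np.array([2e-4, 3e-4]))
```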
This is by far the most precise limitation of the rate using the doublet method. Moreover, our results extend the limitation to an era when the universe is only 2 to 5 billion years old. § DISCUSSIONS§.§ Considerations on Selecting LAE and QSO for α Measurement We discuss the reliability and necessity of our final sample selection criteria.First, we only selected spectra where SNRs of both the doublet lines are greater than 10. An SNR of >10 guarantees that the contours of the χ^2(η,A) surface near the minimum value are close to ellipses, and hence, the probability density distributions of η and A are close to Gaussian, which is convenient for subsequent analysis. In addition, the η measured from spectra with low SNR have large errors and hence have small weights when calculating average and rate. Therefore, discarding them hardly affects the precision of the final results. Some previous work studying α variation with optical QSO spectra used the SNR of λ5008 as a criterion for sample selection. We suggest the SNR of λ4960 line to be included in the criteria when using NIR spectra, because it is not strictly correlated with the SNR of λ5008 line (Figure D1) due to the influence of telluric lines.Second, we added masks to pixels affected by SELs, TALs, cosmic rays and bad CCD pixels, and excluded these pixels when measuring line wavelengths. In addition, we excluded spectra where many pixels near doublet are affected, using a mask index I_ mask for quantification. Note that this approach differs from previous works using optical spectra, where pixels affected by SELs and TALs were included in the measurements but were given lower weights due to larger errors. In analysis concerning NIR spectra, systematic errors caused by corrections for SELs and TALs must be addressed, and therefore, the errors of the pixels where strong SELs and TALs are located may be significantly underestimated. Fortunately, the VLT/XShooter spectra have high spectral resolutions, so the impacting ranges of single SEL or TAL are narrow. For a typical XShooter spectrum, the proportion of pixels strongly affected by SELs or TALs is generally between 10% and 20%, except for wavelength ranges where telluric absorption lines are dense. Most spectra can pass the criterion that the sum of the I_ mask of the doublet lines is less than 0.5. Even if the affected pixels are directly masked for these spectra, the remaining pixels are sufficient to achieve reliable wavelength measurements.Finally, we had two additional requirements for QSO spectra compared with previous works. One is that there should be no strong Fe II bump, and the other is that the doublet lines should not blend severely. For these two requirements, we measured Fe II index I_ FeII and blending index I_ blend as the criteria. We tested whether the two criteria are reasonable by examining the spectra they excluded. For a continuum-subtracted spectrum, we obtained the profiles of the doublet emission lines in wavelength ranges corresponding to a velocity range of -600 to 600 km s^-1. We made linear interpolation for the profile of λ5008 to align it with the profile of λ4960 in velocity space, and then calculated the Pearson correlation coefficient between them. This coefficient represents the consistency of the shapes of the doublet lines. In Figure 13, we show the coefficients of QSO spectra excluded by the Fe II or the blending criterion, those of QSO spectra in the final sample, and the A values measured using these spectra. 
We only show the results of spectra with SNR_5008>50 to ensure the accuracy of the parameters. We found that the doublet profiles have a good correlation in the final sample as the coefficients have a median value of 0.95 and are all greater than 0.87, and the measurements of A are all around the theoretical value of 3. However, the correlation between the doublet profiles is not as good in QSO spectra excluded by the Fe II criterion, as the coefficients have a median value 0.83. Also, a large A value of >4 is measured for more than half of these excluded spectra. The likely reason is that the Fe II bump under the doublet causes the continuum to be fit improperly. The correlation can be poor in QSO spectra excluded by the blending criterion as more than half of coefficients are <0.5, and the measured values of A spread over a wide range. The likely reason is the mistaken decomposition of the very broad emission components. The anomalies in the correlation coefficients and measurements of A indicate that these spectra are unsuitable for the measurements of doublet wavelengths. Hence, our criteria are necessary and practical.According to the initial redshift limit of 1.07<z<3.77, 95 LAE spectra and 601 QSO spectra were selected. However, the final sample only contains 40 LAE and 46 QSO spectra. The passing rate of LAE and QSO subsamples are 42% and 8%, respectively. About 20% of the LAE spectra are excluded because of SELs: in some cases, the mask index is too large, and in other cases, the SELs result in insufficient SNR of . This fraction is similar to the fraction of pixels strongly affected by SELs, which is 10%–20% as we previously estimated.QSO spectra have a lower pass rate. A few spectra are excluded due to observational factors, such as short exposure time causing insufficient SNR or inappropriate redshift causing to fall into the telluric absorption bands. These factors are similar for QSOs and LAEs. However, more than half of the spectra are excluded because of the QSOs' intrinsic properties, such as the weak relative to the continuum, the blending doublet, or a strong Fe II bump. The composite of the 46 QSO spectra in the final sample, displayed in Figure 14, differs from that of general QSOs. To demonstrate this difference, we also present other QSO composite spectra for comparison, including that of <cit.> using XShooter data of 1<z<2 high-luminosity QSOs, and that of <cit.> using SDSS data (of z<1 QSOs for the wavelength range around ). The QSOs in our sample have much stronger lines, and the EW of lines in their composite spectrum can reach 31.9 Å, much higher than those in the other two composite spectra (Table 7). Also, they have weaker Fe II bumps as the Fe II indexes are smaller. Thus, QSOs suitable for α measurement may belong to a particular type, accounting for a small proportion of the total number of QSOs. This may be the main reason for the low pass rate of QSO spectra. Assuming that the proportions of LAEs and QSOs excluded due to observational factors are similar, with a pass rate of ∼40%, we estimated that only 20% of the QSO spectra are suitable for α measurement. §.§ Implications for Future Works We suggest that LAE spectra have more advantages than QSO spectra studying of α variation using the doublet method. The reasons are as follows.First, the doublet lines are narrower in LAE spectra than those in QSOs' spectra, as the median values of the FWHM in the LAE and QSO subsamples are 130 and 610 km s^-1, respectively, with a difference of 4.7 times. 
The value of the FWHM affects the statistical error of η, as described in Appendix D, where σ_ stat(η) ∝ FWHM^3/2 for the same flux. Therefore, a difference of a factor of 4.7 in the values of FWHM would lead to a factor of 10 difference in σ_ stat(η) if other conditions were the same. In our sample, the resultant precisions of Δα/α from the LAE and QSO subsamples are on the same level, because the higher luminosities of the [O III] lines in the QSOs' spectra compensate for their disadvantage in FWHM. The median [O III] luminosity of the QSOs (10^43.7 erg s^-1) in our sample is 10 times that of the LAEs (10^42.7 erg s^-1), as can be seen in Figure 4. Second, the volume density of LAEs suitable for studying α variation is much higher than that of QSOs at z>1. We calculated the volume density of LAEs with Lyα luminosity greater than the mean value of our LAEs, and that of QSOs with continuum luminosity greater than the mean value of our QSOs, both at z∼2. We extracted the spectra on the UVB arm of the z∼2 LAEs in our sample and measured the Lyα luminosities, which have a median value of 10^43.1 erg s^-1. Using the Lyα luminosity function of z=2.2 LAEs from <cit.>, we estimated the volume density of LAEs with L_ Lyα>10^43.1 erg s^-1 to be ∼5×10^-5 Mpc^-3. We measured the G-band absolute magnitudes of the QSOs in our sample, with a median value of -26.2. Using the continuum luminosity function of 2.2<z<2.6 QSOs from <cit.>, we estimated that the number density of QSOs brighter than -26.2 is only ∼5×10^-7 Mpc^-3. In addition, only a small fraction of QSOs have strong and narrow [O III] lines and weak Fe II bumps. We estimated this fraction at 20% in section 5.1. As a result, the number density of LAEs suitable for α measurement is 2–3 orders of magnitude higher than that of QSOs. Finally, systematic errors may be larger and more challenging to analyze when measuring α with QSO spectra. In this work, we considered two possible sources of systematic error for QSOs, the blending of the doublet and the Fe II bump. We selected QSOs with [O III] not severely blended and with weak Fe II. The resultant η/σ(η) obeys the standard normal distribution, implying that the possible systematic errors from these two sources hardly affect the results of our work. However, if the method is applied to future α measurements with higher precision, these systematic errors may become significant as other errors decrease. In addition, theoretically, there are some other sources of systematic errors, such as the uncertainty of the shape of the Hβ broad emission lines, weak QSO emission lines, and others. These errors may also affect future works using QSO spectra for accurate α measurement. In LAE spectra, the continuum is weak and mainly comes from starlight. Thus, the modelling of the LAE continuum is more reliable than the modelling of the QSO continuum. We discuss the possible precision of future α measurements using spectroscopy of LAEs and other starburst galaxies at z<2. This work obtained an error of Δα/α of 7×10^-5 using 40 XShooter spectra of LAEs. The error is 9×10^-5 if only using the 6 spectra of SA and 1.3×10^-4 if only using the other 34 spectra. To obtain an error of Δα/α below 10^-6 to test the measurements using the many-multiplet method, N>50,000 spectra similar to the SA's or N>600,000 spectra similar to the other LAEs' are required, assuming that the error is inversely proportional to √(N). To achieve the required sample size, observing multiple spectra of starburst galaxies simultaneously using multi-object spectrometers is necessary.
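The sample sizes quoted above follow directly from the assumed 1/√(N) scaling of the error; a minimal sketch of this arithmetic (the function and variable names are ours) is:

```python
def required_sample_size(n_current, err_current, err_target):
    """Number of similar spectra needed if the error scales as 1/sqrt(N)."""
    return n_current * (err_current / err_target) ** 2

# 6 spectra of SA give an error of 9e-5; the other 34 LAE spectra give 1.3e-4.
print(required_sample_size(6, 9e-5, 1e-6))     # ~4.9e4, i.e. N > 50,000
print(required_sample_size(34, 1.3e-4, 1e-6))  # ~5.7e5, i.e. N > 600,000
```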
The ongoing project Dark Energy Spectroscopic Instrument <cit.> may meet the sample size demand. The DESI project is expected to observe about a dozen million emission line galaxies, of which ∼7 million are at z<0.95 and are suitable for the study of α variation using the [O III] doublet, as the DESI wavelength range is 3600–9800 Å. The number of spectra with sufficiently high SNRs may be on the order of 10^5–10^6. Because the sensitivity, resolution and sample size of DESI are all better than those of SDSS, the final accuracy of the α measurement should be better than the existing results using SDSS spectra (∼2×10^-5), and might be comparable with the results using the many-multiplet method at present. So far, the highest redshift at which α variation has been measured is 7.1 <cit.>. Their work used the absorption lines in the VLT/XShooter spectra of QSO J1120+0641 at z=7.085. Measuring α variation at higher redshift using QSO absorption lines is challenging because QSOs at z>7 are extremely rare. Fortunately, a number of z>7 LAEs have been spectroscopically identified <cit.>. Their spectral energy distributions suggest they have strong emission lines. These LAEs can be used to constrain the variation of α at z>7 as long as deep mid-infrared spectroscopic observations are obtained, because the [O III] wavelengths are redshifted to >4 μm. The Near InfraRed Spectrograph (NIRSpec) mounted on the James Webb Space Telescope <cit.> can take spectra in a wavelength range of 0.6–5.3 μm, and hence can observe the [O III] emission lines of LAEs at z<9.4. Depending on the observational mode, the spectral resolution R can be 100, 1000 or 2700. Assuming that the LAEs have an intrinsic FWHM of 130 km s^-1 and are observed with JWST/NIRSpec with R of 1000 or 2700, we estimated the observed FWHM to be 300 or 170 km s^-1 using equation D3 in Appendix D. Assuming an SNR of the λ4960 line of 10, we further estimated that the precision of the Δα/α measurement could reach (2∼3)×10^-3. Higher precision could be obtained if the SNR is higher or more than one LAE is observed. This measurement will, for the first time, provide a constraint on the variation of α at 7.1<z<9.4, corresponding to an age of the universe of only 500–700 million years. § CONCLUSIONS We constructed a sample of 86 public NIR spectra observed by VLT/XShooter of LAEs (40) and QSOs (46) at 1.09<z<3.73. The selection criteria of the sample concern the SNRs of the doublets and three indexes: the first describes to what extent the data are affected by SELs, TALs, cosmic rays and bad CCD pixels, the second describes the strength of the Fe II bump in the QSO spectrum, and the third describes the blending of the doublet. We measured the parameter η describing the ratio of the wavelengths of the doublet. We further inferred Δα/α, the difference between α in the distant universe and that in the laboratory. When measuring η, we tried a Multiple-Gaussian method and a Profile-Matching method, in which the former applies to all spectra (86) while the latter only applies to a part (17), and we adopted the results using the former method. Reassuringly, the two methods yield consistent results for the spectra to which both are applied. Our main results are as follows. 1. The Δα/α measured using the 86 spectra are all consistent with 0 within 3σ. The ratio of Δα/α to its error obeys the standard normal distribution, which supports the assumptions that the true value is 0 and that our estimates of errors are reliable. 2. The weighted average value of Δα/α agrees with 0, whether the whole sample or different subsamples are used.
If using all the 86 spectra, the average is (-3±6)×10^-5. The average values using the LAE and QSO subsamples have similar errors of 7×10^-5 and 1.1×10^-4, respectively.3. The average Δα/α at three redshifted ranges of 1.09<z<1.66, 2.01<z<2.57 and 3.21<z<3.73 are (5±3)×10^-4, (-6±6)×10^-5 and (4±5)×10^-4, respectively, all in accordance with 0. We found no variation of α over time. We limited the rate at which α varies to be k=(-3±6)×10^-15 yr^-1 since z=3.73 using our measurements. By combining our results with the measurements at z<1 using SDSS QSO spectra, we limited the rate to be k=(0±4)×10^-15 yr^-1.In addition, our results suggest that starburst galaxies' spectra have a better application prospect than QSO spectra in future studies of α variation.§ ACKNOWLEDGEMENTS This work is based on observations obtained with the Very Large Telescope, programs 084.A-0303, 086.B-0320, 087.A-0610, 087.B-0229, 088.A-0672, 088.B-1034, 089.B-0275, 089.B-0936, 089.B-0951, 0909.A-0830, 090.B-0424, 091.A-0413, 091.B-0900, 092.A-0391, 092.B-0860, 093.A-0882, 093.B-0553, 094.B-0111, 095.B-0507, 096.A-0348, 097.A-0153, 098.B-0556, 099.A-0018, 099.A-0254, 099.A-0758, 099.B-0118, 101.A-0528, 101.B-0262, 101.B-0739, 101.B-0779, 102.A-0335, 102.A-0391, 102.A-0652, 103.A-0253, 103.A-0688, 103.B-0446, 104.A-0236, 189.A-0424. Based on data products from observations made with ESO Telescopes at the Paranal Observatory under ESO programme ID 179.A-2005. § DATA AVAILABILITY The data underlying this article are available in the ESO data archive at http://archive.eso.org.[Albareti et al.2015]Albareti2015 Albareti F. D., Comparat J., Gutierrez C. M., et al., 2015, MNRAS, 452, 4153 [Bahcall & Schmidt1967]Bahcall1967em Bahcall J. N., Schmidt M., 1967, Phys.Rev.Lett., 19, 1294 [Bahcall et al.1967]Bahcall1967ab Bahcall J. N., Sargent W. L. W., Schmidt M., 1967, ApJ, 149, L11 [Bahcall et al.2004]Bahcall2004 Bahcall J. N., Steinhardt C. L., Schlegel D., 2004, ApJ, 600, 520 [Bainbridge & Webb2017]Bainbridge2017 Bainbridge, M. B., Webb, J. K., 2017, MNRAS, 468, 1639 [Bennett et al.2014]Bennett2014 Bennett C. L., Larson D., Weiland J. L., Hinshaw G., 2014, ApJ, 794, 135Chand H., Petitjean P., Srianand R., Aracil B., 2005, A&A, 430, 47 [Croom et al.2009]Croom2009 Croom S. M., Richards G. T., Shanks T., et al., 2009, MNRAS, 399, 1755 [Dahle et al.2016]Dahle2016 Dahle H., Aghanim N., Guennou L., et al., 2016, A&A, 590, L4 [DESI Collaboration et al.2016]DESI2016 DESI Collaboration, et al., 2016, arXiv:1611.00036 [Dirac1937]Dirac1937 Dirac P. A. M., 1937, Nature, 139, 323 [Dumont & Webb2017]Dumont2017 Dumont, V., Webb, J. K., 2017, MNRAS, 468, 1568 [Dzuba et al.1999]Dzuba1999 Dzuba V. A., Flambaum V. V., Webb J. K., Phys.Rev.A, 59, 230 [Endsley et al.2021]Endsley2021 Endsley R., Stark D. P., Charlot S., et al., 2021, MNRAS, 502, 6044 [Evans et al.2014]Evans2014 Evans, T. M., Murphy, M. T., Whitmore, J. B., et al., 2014, MNRAS, 445, 128 [Gardner et al.2006]Gardner2006 Gardner J. P., Mather J. C., Clampin M, et al., 2006, Space Sci.Rev., 123, 485 [Gonneau et al.2020]Gonneau2020 Gonneau A., Lyubenova M., Lançon A., et al., 2020, A&A, 634, A133 [Griest et al.2010]Griest2010 Griest, K., Whitmore, J. B., Wolfe, A. M., et al., 2010, ApJ, 708, 158 [Gutiérrez & López-Corredoira2010]Gutierrez2010 Gutiérrez C. M., López-Corredoira M., 2010, ApJ, 713, 46 [Jung et al.2020]Jung2020 Jung I., Finkelstein S. L., Dickinson M., et al., 2020, ApJ, 904, 144 [King et al.2012]King2012 King J. A., Webb J. K., Murphy M. 
T., et al., 2012, MNRAS, 422, 3370 [Konno et al.2016]Konno2016 Konno A., Ouchi M., Nakajima K., et al., 2016, ApJ, 823, 20 [Kotuš et al.2017]Kotus2017 Kotuš, S. M., Murphy, M. T., Carswell, R. F., et al., 2017, MNRAS, 464, 3679 [Lee et al.2021]Lee2021 Lee, Chung-Chi, Webb, J. K., Milaković, D., et al., 2021, MNRAS, 507, 27 [Lee et al.2023]Lee2023 Lee, Chung-Chi, Webb, J. K., Carswell, R. F., et al., 2023, MNRAS, 521, 850 [Levshakov2004]Levshakov2004 Levshakov, S. A., 2004, LNP, 648, 151 [Marziani et al.2016]Marziani2016 Marziani P., Sulentic J. W., Stirpe G. M., et al., 2016, Astrophys. Space Sci., 361, 3 [Milaković et al.2021]Milakovic2021 Milaković D., Lee C. C., Carswell, R. F., et al., 2021, MNRAS, 500, 1 [Modigliani et al.2010]Modigliani2010 Modigliani A., Goldoni P., Royer F., et al., 2010, SPIE, 7737, 28 [Murphy et al.2001]Murphy2001 Murphy M. T., Webb J. K., Flambaum V. V., et al., 2001, MNRAS, 327, 1237 [Murphy et al.2003]Murphy2003 Murphy M. T., Webb J. K., Flambaum V. V., et al., 2003, MNRAS, 345, 609 [Murphy et al.2007]Murphy2007 Murphy, M. T., Tzanavaris, P., Webb, J. K., et al., 2007, MNRAS, 378, 221 [Murphy & Cooksey2017]Murphy2017 Murphy M. T., Cooksey K. L., 2017, MNRAS, 471, 4930 [Murphy et al.2022a]Murphy2022qso Murphy, M. T., Molaro, P., Leite, A. C. O., et al., 2022, A%A, 658, 123 [Murphy et al.2022b]Murphy2022star Murphy M. T., Berke D. A., Liu F., et al., 2022, Science, 378, 634 [Oliva et al.2015]Oliva2015 Oliva E., Origlia L., Scuderi S., et al., 2015, A&A, 581, A47 [Rahmani et al.2014]Rahmani2014 Rahmani H., Maheshwari N., Srianand R., 2014, MNRAS, 439, L70 [Rivera-Thorsen et al.2017]Rivera-Thorsen2017 Rivera-Thorsen T. E., Dahle H., Gronke M., et al., 2017, A&A, 608, L4 [Savedoff1956]Savedoff1956 Savedoff, M. P., 1956, Nature, 178, 688 [Selsing et al.2016]Selsing2016 Selsing J., Fynbo J. P. U., Christensen L., Krogager J. K., 2016, A&A, 585, A87 [Songaila & Cowie2014]Songaila2014 Songaila A., Cowie L. L., 2014, ApJ, 793, 103 [Stark et al.2017]Stark2017 Stark D. P., Ellis R. S., Charlot S., et al., 2017, MNRAS, 464, 469 [Storey & Zeippen2000]Storey2000 Storey P. J., Zeippen C. J., 2000, MNRAS, 312, 813 [Uzan2003]Uzan2003 Uzan Jean-Philippe, 2003, Rev.Mod.Phys., 75, 403 [Uzan2011]Uzan2011 Uzan Jean-Philippe, 2011, Living Rev.Relativ., 14, 2 [Vanden Berk et al.2001]VandenBerk2001 Vanden Berk D. E., Richards G. T., Bauer A., et al., 2001, AJ, 122, 549 [Vanzella et al.2011]Vanzella2011 Vanzella E., Pentericci L., Fontana A., et al., 2011, ApJL, 730, L35 [Vanzella et al.2020]Vanzella2020 Vanzella E., Meneghetti M., Pastorello A., et al., 2020, MNRAS, 499, L67 [Vernet et al.2011]Vernet2011 Vernet J., Dekker H., D'Odorico S., et al., 2011, A&A, 536, A105 [Webb et al.2001]Webb2001 Webb J. K., Murphy M. T., Flambaum V. V., et al., 2001, Phys.Rev.Lett., 87, 091301 [Webb et al.2022]Webb2022 Webb, J. K., Lee, Chung-Chi, Milaković, D., 2022, Univ, 8, 266 [Whitmore & Murphy2015]Whitmore2015 Whitmore, J. B., Murphy, M. T., 2015, MNRAS, 447, 446 [Wilczynska et al.2020]Wilczynska2020 Wilczynska M. R., Webb J. K., Bainbridge M., et al., 2020, Sci.Adv., 6, eaay9672 [Zakamska et al.2016]Zakamska2016 Zakamska N. L., Hamann F., Pâris I., et al., 2016, MNRAS, 459, 3144 § DETAILS OF DATA REDUCTIONWe briefly introduce the pip-2D data generated by the XShooter pipeline. The data is contained in fits files with three header data units (HDU). 
As shown in Figure A1(a), the first HDU stores the 2D spectrum of a target (in the unit of ADU); the second stores the flux error of spectrum; the third stores the bad CCD map, marking those pixels affected seriously by bad CCD pixels, cosmic rays, and other factors. The 2D spectrum had been straightened and aligned, with the wavelength direction going horizontally at a fixed interval of 0.6Åand the spatial direction going vertically. This kind of 2D spectrum was produced by combining the 4 frames taken in 4 consecutive exposures under the “ABBA” dither mode. Using an A-B-B+A algorithm, the pipeline eliminated the sky emission and left 3 images of the target on the combined frame, among which the bright image originated from the two exposures labelled as A and have positive fluxes and the other two dark images originated from the two exposures labelled as B and have negative fluxes.We extracted 1-D spectrum based on the pip-2D data as follows.First, the aperture for extracting the spectra from the three images was determined, as shown in Figure A1(b). Almost all the targets are point-like sources, or close to point-like sources, and their brightness profiles along the spatial direction, cutting at the non-emission wavelengths for QSOs and at the λ5008 emission lines for LAEs, were well fitted by Gaussian functions. The best-fitting Gaussian models give positions of the three images and the FWHMs of their brightness profiles, in which the latter were used to determine the width of the aperture for extracting the 1-D spectra. The only exceptional target whose brightness distribution cannot be approximated by a Gaussian function is SA, and we will describe its extraction in another paragraph.Second, spectra were extracted from the three images and merged into one spectrum, as shown in Figure A1(c). Using the previously determined aperture width, we extracted the 1-D spectra from the three images, to which the first HDU contributed the flux and the second HDU the error. With reference to the third HDU, we prepared a boolean value named “mask” for each wavelength of the 1-D spectra. The value was set to 0 if the data quality of all the pixels inside the aperture at this wavelength is 0, and to 1 otherwise. The three spectra were then rescaled to eliminate the small differences between their fluxes caused by weather changes, and the negative fluxes were turned positive. After the rescaling, new contaminated pixels were identified, at which the flux of one spectrum deviates from those of the other two spectra by more than 4.5σ. These pixels were affected by cosmic rays or other adverse factors but had not been identified by the pipeline, so we changed their mask values to 1. We calculated the weighted average of the three spectra using pixels with mask values of 0 and obtained a 1-D spectrum. This averaged spectrum was also assigned a mask value at each wavelength: it was set to 1 if the mask values in the three spectra are all 1, and to 0 otherwise. Besides, a few observations were not taken under the dither mode and each had only one image in its pip-2D data. In these cases, the rescaling and the averaging were skipped.Third, for each target, we calibrated the flux of its 1-D spectra and corrected the telluric absorption, as shown in Figure A1(d). The flux of the 1-D spectra of a target was calibrated using the sensitivity function (the conversion from ADU to flux) stored in its flux-calibrated pip-1D data. 
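A schematic of the combination performed in the second step above (rescaling the single-image spectra, flagging pixels that deviate by more than 4.5σ, and taking a masked inverse-variance weighted average) might look as follows. The array layout, the rescaling to a common median level, and the function name are assumptions of this sketch rather than the pipeline's actual implementation.

```python
import numpy as np

def combine_image_spectra(flux, err, mask, nsig=4.5):
    """Combine the 1-D spectra extracted from the target images.

    flux, err, mask : arrays of shape (n_img, n_pix); mask is 1 for bad pixels.
    Returns the combined flux, error and mask (mask=1 only where all inputs are bad).
    """
    flux = np.abs(np.asarray(flux, dtype=float))  # negative (B-position) fluxes turned positive
    err = np.asarray(err, dtype=float).copy()
    mask = np.asarray(mask).copy()
    # Rescale each spectrum to a common median level (e.g. weather changes)
    ref = np.nanmedian(flux)
    for i in range(flux.shape[0]):
        scale = ref / np.nanmedian(flux[i])
        flux[i] *= scale
        err[i] *= scale
    # Flag pixels where a spectrum deviates from the median of the images by > nsig
    med = np.median(flux, axis=0)
    mask = np.where(np.abs(flux - med) > nsig * err, 1, mask)
    # Masked inverse-variance weighted average
    w = np.where(mask == 0, 1.0 / err**2, 0.0)
    wsum = w.sum(axis=0)
    good = wsum > 0
    safe = np.where(good, wsum, 1.0)
    flux_comb = np.where(good, (w * flux).sum(axis=0) / safe, np.nan)
    err_comb = np.where(good, 1.0 / np.sqrt(safe), np.nan)
    mask_comb = (~good).astype(int)  # 1 only if every input pixel is masked
    return flux_comb, err_comb, mask_comb
```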
As telluric absorptions appeared in the targets' spectra, they must have also appeared in the spectra of standard stars observed close in time. These standard stars' spectra, which belong to project 60.A-9022(C), can be obtained from the ESO archive. So, to correct the telluric absorption in each target's spectrum, we built a telluric absorption model using the spectrum of the standard star, which was observed closest in time and of whose continuum the SNR is >300 in the wavelength range of the NIR arm. For the spectrum of each standard star, we searched for a similar stellar spectrum in the XShooter Spectral Library <cit.> and considered it as the template, and a telluric absorption model can be calculated by dividing the observed spectrum by the template. Corrected QSO spectra using such telluric absorption models were examined in the continuum. The correction was successful at most of the wavelengths. At some wavelengths with severe telluric absorption, the correction was less successful, which may be caused by the inhomogeneity or the variability in the properties of the atmosphere.Finally, for each spectrum pixel, once it meets any of the following situations, we changed its mask value to 1. The situations are: 1. The telluric absorption model of the pixel is below 0.5. 2. A strong SEL affects the pixel. We adopted the list of SELs from <cit.>, and selected 184 strong lines with flux rates >1000. With the information on these lines and with that revealed by the 1-D error spectrum of each target, we identified the sky emission lines in each spectrum and calculated their width for masking, in which an algorithm was involved to give greater widths for stronger lines. 3. The pixel lies near the doublet and is affected by visually identified cosmic rays that the pipeline had missed. The final mask values label pixels affected by strong SELs, TALs, cosmic rays and CCD bad pixels.We extracted a 1-D spectrum for each pip-2D data. For a target, if there were more than one group of exposures taken under the dither mode and hence multiple pip-2D data in the ESO archive, we combined the extracted spectra, as we had done in combining spectra of the three images in the same pip-2D data. SA is a lensed galaxy with multiple clumps, which could be star clusters. The VLT/XShooter observation of SA has been described in detail by <cit.>. This observation contained 3 exposures, during which the slit was placed differently but covered at least two clumps each time. The 2-D spectra of the three exposures are displayed in Figure A2. We extracted 6 spectra using 6 apertures shown in the red line in the figure, denoted as SA1 to SA6.§ APPLICABILITY OF PROFILE-MATCHING METHODAs we have demonstrated in section 3.1, the χ^2(η,A) surfaces obtained by the MG method are smooth, and the contours are close to ellipses, and the best estimates and statistical errors of η and A can be reliably obtained. However, the results obtained by the PM method do not guarantee these properties. We found two types of problems. We illustrate each type with an example (Figure B1) and analyze the possible origin of the problem.We show the first type of problem with LAE J0332-2746 as an example. When the PM method was used, a feature similar to a geological fault occurs on the χ^2 surface. Accordingly, jumps can be seen on the χ^2(η) curve which are roughly evenly spaced with an interval of Δη=0.6Å/λ_1=1.2×10^-4/1+z. Such jumps in the values of χ^2 come from the changes in the pixels of the λ4960 line included in the matching. 
When η increases, the rightmost pixel is excluded, and at the opposite end, a new pixel joins in to become the new leftmost pixel. The jumps are not obvious in the range of χ^2(η)<χ^2_ min+1 when the λ4960 line has high SNR and small FWHM, but they can be very prominent when the λ4960 line suffers from strong noise, which will make the measurement of η unreliable. We then show the second type with LAE J1001+0206 as an example. When measured by the PM method, wavy structures appear on the χ^2 surface. The origin of this structure is as follows. Strong noise causes spurious features to be mistaken for real structures on the line profiles. When the spurious features from the two emission lines happen to be aligned, a local minimum appears on the χ^2(η) curve. The width of the emission lines' profile plays an important role in the emergence of such wavy structures. The broader the lines, the more likely such wavy structures are to appear. By visually examining the χ^2 surfaces, we selected 17 spectra with which η can be reliably measured by the PM method, including 16 LAE spectra and 1 QSO spectrum. The emission lines in these spectra are narrow, have high SNR and are little affected by masks. Notably, the FWHMs of these spectra's emission lines span no more than 16 pixels. The previous work of <cit.> used QSOs' SDSS spectra, in which the typical emission-line FWHMs of 400-1000 km s^-1 span 6–14 pixels. These numbers are informative for the applicability of the PM method. We suggest that for spectra taken by XShooter, SDSS or other spectrometers, as long as the emission lines' FWHMs span no more than 14–16 pixels, the application of the PM method can be considered. § THE MULTI-GAUSSIAN FITTING RESULTS We present all 86 spectra and the best-fitting results using the multiple-Gaussian model in Figures C1, C2, C3 and C4. § FACTORS AFFECTING MEASUREMENT PRECISION OF η Factors that affect the precision of η include the width and the SNRs of the doublet, and we explored their influence using simulated spectra. Assuming that both lines have the same Gaussian profile, we generated spectra in which the FWHMs and SNRs of the doublet vary independently. In a simulated spectrum, the FWHMs of the doublet were set to be equal, and the SNRs of the doublet, which were adjusted by assigning varied flux errors, are uncorrelated and can be different. Measurements of η using these simulated spectra revealed a relation between σ_ stat(η) and the FWHM and SNRs of the doublet, which can be expressed as: σ_ stat(η) ≈ 1.9×10^-4 (FWHM/100 km s^-1) √(1/SNR_4960^2 + 1/SNR_5008^2). This relation holds when the FWHM is 100–1200 km s^-1 and the SNRs are 10–500. We plot the observed SNRs of the final sample in Figure D1. Figure D1 also shows the contours of the values of σ_ stat(η) with the FWHM of the doublet fixed at 300 km s^-1. In our sample, the values of SNR_5008 are greater than those of SNR_4960 for all the spectra, and their average ratio is 2.06. By assuming that SNR_5008=2.06 SNR_4960, we can simplify Equation D1 as: σ_ stat(η) ≈ 2.1×10^-6 (SNR_4960/100)^-1 (FWHM/100 km s^-1). In the simulation, when we were adjusting the SNRs of the doublets by assigning varied flux errors, we found that the SNR follows the relation SNR ∝ FWHM^-1/2 Err^-1, where Err denotes the assigned flux error. Hence, considering the influence of the FWHM alone, σ_ stat(η) will be proportional to FWHM^3/2. The spectral resolution R was generally considered to be one of the factors affecting the measurement precision.
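Equations D1 and D2 above, together with the instrumental-broadening relation given in the next paragraph (Equation D3), can be evaluated as sketched below; the example numbers are purely illustrative and the function names are our own.

```python
import numpy as np

C_KMS = 2.99792458e5  # speed of light in km/s

def sigma_stat_eta(fwhm_kms, snr_4960, snr_5008):
    """Statistical error of eta from Equation D1
    (valid for FWHM of 100-1200 km/s and SNRs of 10-500)."""
    return 1.9e-4 * (fwhm_kms / 100.0) * np.sqrt(1.0 / snr_4960**2 + 1.0 / snr_5008**2)

def fwhm_observed(fwhm_intrinsic_kms, resolution):
    """Observed FWHM including Gaussian instrumental broadening (Equation D3)."""
    return np.hypot(fwhm_intrinsic_kms, C_KMS / resolution)

# Example: an intrinsic 130 km/s line observed at R = 5000 with SNR_4960 = 30,
# assuming SNR_5008 = 2.06 * SNR_4960 as in the final sample.
fwhm_obs = fwhm_observed(130.0, 5000.0)
print(fwhm_obs)                                     # ~143 km/s
print(sigma_stat_eta(fwhm_obs, 30.0, 2.06 * 30.0))  # ~1.0e-5
```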
Higher resolution leads to more accurate wavelength calibration because the calibrating emission lines of arc lamps and SELs will be narrower and more resolved, offering more information for finding the wavelength solution. It then helps reduce the systematic error σ_ sys(η) originating from the wavelength calibration. Higher resolution also helps reduce the statistical error σ_ stat(η), because the SELs and TALs will be narrower and affect fewer pixels, leading to an increase in the SNRs of the lines and hence reducing σ_ stat(η). In addition, when the resolution is poor, the observed lines will appear broader. Assuming that the intrinsic profile of an emission line is a Gaussian with a FWHM of FWHM_intr, and assuming that the broadening function of the spectrograph is also a Gaussian, the observed FWHM (FWHM_obs) of this line is roughly: FWHM_obs^2 = FWHM_intr^2 + (c/R)^2, where c is the speed of light. Higher resolution leads to a smaller FWHM_obs, and thus a smaller σ_ stat(η). Nevertheless, the extent to which σ_ stat(η) can be reduced by improving the resolution is limited by FWHM_intr. For XShooter, the resolution R is 4200–8100 (Tables 1 and 2), corresponding to c/R of 37–70 km s^-1. For an emission line with FWHM_intr of 100–150 km s^-1, the instrumental broadening from XShooter will bring an increase in FWHM_obs of about 10%–30%. When the emission line's FWHM_intr is above 150 km s^-1, the increase will be no more than 10%. For our sample, the observed FWHMs of the LAE subsample are affected by <30%, while those of the QSO subsample are almost unaffected. In summary, increasing the resolution benefits the precision of the η measurement. In future studies, if LAE spectra are used for constraining α variation, we suggest that the resolution R should be no less than 4000. Otherwise, the observed width of the lines would be significantly larger than the intrinsic width, and the measurement precision would be unpromising. | http://arxiv.org/abs/2310.17947v1 | {
"authors": [
"Ge Li",
"Luming Sun",
"Xiangjun Chen",
"Hongyan Zhou"
],
"categories": [
"astro-ph.GA",
"astro-ph.CO"
],
"primary_category": "astro-ph.GA",
"published": "20231027074146",
"title": "Time Variation of Fine-Structure Constant Constrained by [O III] Emission-Lines at 1.1<z<3.7"
} |
The estimation of galaxy stellar masses depends on the assumed prior of the star-formation history (SFH) and the spatial scale of the analysis (spatially resolved versus integrated scales). In this paper, we connect the prescription of the SFH in Spectral Energy Distribution (SED) fitting to spatially resolved scales (∼kpc) to shed light on the systematics involved when estimating stellar masses. Specifically, we fit the integrated photometry of ∼970 massive (log (M_⋆/M_⊙) = 9.8-11.6), intermediate redshift (z=0.5-2.0) galaxies with Prospector, assuming both exponentially declining tau-model and flexible SFHs. We complement these fits with the results of spatially resolved SFH estimates obtained by pixel-by-pixel SED fitting, which assume tau models for individual pixels. These spatially resolved SFHs show a large diversity in shapes, which can largely be accounted for by the flexible SFHs with Prospector. The stellar masses from those two approaches are overall in good agreement (average difference of ∼0.07 dex). In contrast, the simpler tau-model SFHs typically miss the oldest episode of star formation, leading to an underestimation of the stellar mass by ∼0.3 dex. We further compare the derived global specific star-formation rate (sSFR), the mass-weighted stellar age (t_50), and the star-formation timescale (τ_SF) obtained from the different SFH approaches. We conclude that the spatially resolved scales within galaxies motivate a flexible SFH on global scales to account for the diversity of SFHs and counteract the effects of outshining of older stellar populations by younger ones. galaxies: evolution — galaxies: structural — galaxies: star formation § INTRODUCTION The formation of stars from gas and dust is a fundamental process that significantly impacts the development of galaxies throughout the universe. Various factors influence the star-formation (SF) activity <cit.>, including sporadic bursts resulting from major mergers <cit.>, interaction with other galaxies <cit.>, episodes of violent disk instabilities and compaction <cit.>, stellar feedback <cit.>, active-galactic nuclei (AGN) feedback mechanisms <cit.>, and dynamical processes related to spiral arms and bars <cit.>. These complex processes, in turn, shape the characteristic features of the galaxies, such as the morphological features <cit.>, the interstellar medium content <cit.>, and the metallicity content within the galaxies <cit.>. The detailed study of these SFHs can offer us crucial insights into the processes of galaxy formation and the galaxies' evolutionary pathways over cosmic time. One technique commonly used to measure SFHs is modelling and fitting the observed SED of galaxies <cit.>. This involves comparing the observed light from a galaxy across a range of wavelengths to models of galaxy evolution that predict how a galaxy's SED changes as it forms stars over time. To generate these SED fitting models, we require a set of priors to infer the galaxy's physical properties (stellar mass, specific star-formation rate [sSFR], stellar age t_50, and attenuation) from the data. These fitting models typically incorporate the SFH as a critical component, making the quality of the fitted SFH crucial in determining the reliability and precision of the derived physical properties of the galaxy <cit.>.
Various approaches can be adopted by SED fitting methods for determining the SFHs of galaxies. One such approach is to employ parametric functional forms for the SFHs, such as the exponentially declining tau model, delayed tau model, or lognormal model <cit.>, and parametric models with additional flexibility incorporating multiple episodes of SF <cit.>. The restricted nature of these parameterised forms, with a fixed number of parameters, makes them unlikely to capture the rich diversity of galaxy SFHs. As a result, the inferred galaxy properties based on these SFHs may be subject to systematic biases, with potential under-reporting of uncertainties (see, e.g., <cit.>). A solution to these issues is to use flexible or non-parametric models that do not assume any explicit functional form and allow for arbitrary SFRs as a function of time, allowing them to capture the complexity of physical SFHs. Examples include piecewise-constant SFRs in time <cit.>, the Dense Basis SFH reconstruction method <cit.>, and libraries of SFHs measured from theoretical models of galaxy formation <cit.>. Though flexible SFH models are more computationally expensive than parametric ones, they promise more reliable recovered SFHs. For instance, <cit.> tested the galaxy properties inferred from the non-parametric model using the SED fitting code <cit.> by ground-truthing them against mock observations. On the other hand, <cit.> tested the robustness of the inferred galaxy properties by comparing them with those inferred from parametric models. The conclusion from these works is that, though the flexibility offered by the non-parametric priors is a clear advantage of this approach, yielding robust and reliable results, there remains a potential weakness in the dependence of the SFHs on the prior selection <cit.>. A complementary view on the issue of SFH model and prior selection can be gained by using spatially resolved information from galaxies, i.e., by extracting SFHs from spatially resolved scales <cit.>. Most of the findings described above rely on either single-aperture spectroscopic data or integrated multi-band photometric data. By studying galaxies using integrated photometry, we can only study the statistical evolution of the stellar populations in galaxies as a whole, and it is not easy to understand the SF activity happening on the small scales which actually trace the galaxy evolution pathways. Additionally, several works have reported that the spatial resolution of the data can also introduce significant biases in the stellar mass estimates and other inferred properties of the galaxies <cit.>. With a similar motivation, <cit.> studied the spatial distribution of stellar mass in a galaxy using the spatially resolved (i.e., pixel-by-pixel) broad-band SED fitting tool introduced by <cit.> and <cit.>, and found a clear bias between the stellar masses estimated on unresolved and resolved scales. This study motivated us to investigate the impact of spatial resolution on the recovered SFHs, and how this affects the determination of physical properties such as stellar masses, sSFRs, t_50, and τ_SF. Being able to estimate accurate SFHs is fundamentally important not only for galaxies in the local Universe, but also to interpret the light emission of early galaxies. JWST is revolutionizing how we see high-redshift galaxies, though deriving physical quantities from those observations is challenging because galaxies' SFHs are expected to be more variable (i.e., burstier).
This leads to an increased importance of outshining, where the most recent burst of star formation dominates the SED, making it difficult to assess the stellar mass in the older stellar populations <cit.>. Early JWST results indeed indicate that many of the rest-UV bright galaxies are undergoing bursts of star formation <cit.>, though accreting black holes are also contributing and can make the SEDs difficult to interpret <cit.>. One way to address these challenges is to spatially resolve these early galaxies and connect the global SED to the resolved SEDs <cit.>. In this study, we present detailed measurements of SFHs both on global and spatially resolved scales for a sample of ∼970 distant galaxies with redshifts z = 0.5-2.0 to shed light on the systematics involved when estimating galaxy properties and to motivate more complex SFHs on global scales. We adopted four types of simple and flexible models for the SFHs, both on spatially resolved and unresolved scales. On spatially resolved scales, we derive the SFHs of individual pixels using pixel-by-pixel SED fitting adopting iSEDfit, which we then combine into a total SFH by summing the pixel-based SFHs (SFH_⋆, res). On global scales, we fit the integrated photometry using (i) a simple tau model within iSEDfit (SFH_⋆, int,τ), (ii) a simple tau model within Prospector (SFH_⋆, int,non-flex), and (iii) a flexible, non-parametric model within Prospector (SFH_⋆, int,flex). The organisation of the paper is as follows. In Section <ref>, we briefly overview the data sets and methodology adopted to obtain the SFHs of the galaxies from the different models. In Section <ref>, we determine the reconstructed SFHs of the galaxies with all the models and compare the inferred galaxy properties from these SFHs. The core of this paper is Section <ref>, where we discuss the implications of how the assumed SFH models and the spatial resolution affect the inferred physical properties of the galaxies. We summarise our results in Section <ref>. § METHODS §.§ Data In order to perform pixel-by-pixel SED fitting <cit.>, the measurements of the stellar mass, age, and τ distributions as well as the SFHs on kpc scales for ∼970 galaxies have been taken from <cit.>. The galaxy sample has been confined to a redshift range of 0.5 ≤ z ≤ 2.0 and a stellar mass range of 9.8 ≤ log (M_⋆/M_⊙) ≤ 11.5. The lower redshift bound and the upper stellar mass bound are motivated by completeness due to volume effects. The lower stellar mass bound is motivated by the completeness limit for detecting faint galaxies. The upper redshift limit ensures that the galaxies' rest-frame optical SEDs are probed by several filters. <ref> plots the ranges of stellar masses and redshifts of the galaxies used for the sample. That work utilised the publicly available catalogue and imaging dataset from the 3D-HST Treasury Programme <cit.> and the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) to conduct the study. We use the GOODS-South field data because of the largest number of filters and best depths, i.e., the highest S/N ratio on spatially resolved scales for individual galaxies <cit.>. We utilise PSF-matched mosaic images of the GOODS-South field data, comprising a maximum of seven filters (B_435, V_606, i_775, z_850, J_125, JH_140, H_160). The total area of this field is about 170 arcmin^2.
Using these ancillary data, the stellar masses and photometric redshifts (if no spectroscopic or grism redshift is available) were determined with the EAZY <cit.> and FAST <cit.> codes, respectively. §.§ Creating 2D Stellar Maps With the setup described in <cit.>, the spatially resolved (pixel-by-pixel) SED fitting method is used to obtain the spatially resolved physical maps, such as the mass maps, age maps, and τ maps, of each of the galaxies. The pixels have a size of 0.06 arcsec, corresponding to 0.38 and 0.52 kpc at z=0.5 and z=2.0, respectively. The best-fit SED model for each pixel is used to obtain these resolved stellar property maps. They used iSEDfit <cit.>, a Bayesian code, to perform the SED fitting. They created a full grid of 100,000 models based on the <cit.> stellar population evolution models with ages between 0.1 and 13.5 Gyr. The SFHs for these models are assumed to be exponentially declining (SFR ∝ exp(-t/τ)), with the e-folding timescale τ between 0.01 and 1.0 Gyr, and the <cit.> initial mass function (IMF) is adopted. This exponentially declining model is also known as the simple tau model. The metallicity range used is 0.004-0.03, and the <cit.> dust attenuation law is assumed. For each galaxy, the redshift of all pixels used is the redshift of the galaxy from the 3D-HST catalogue. To study the spatial distribution of the physical properties of the galaxies, <ref> shows the mass, age, and τ maps of a few galaxies in the left, middle, and right panels, respectively. It displays the 50^th percentile of the inferred parameters. §.§ SFHs from spatially resolved scales Using the derived maps of stellar mass, age, and τ from <cit.>, we first estimate the SFH of each individual pixel and then combine those to obtain the total SFH (SFH_⋆, res), ensuring propagation of the errors. For a galaxy observed when the age of the universe was t_obs Gyr, the SFH assuming a simple tau model can be calculated as follows: SFR(t) = norm · e^-(t - (t_obs-t_age))/τ if t > t_obs - t_age, and SFR(t) = 0 if t < t_obs - t_age, with the normalisation factor norm chosen such that the SFH integrates to the total stellar mass, norm = M_⋆ / ∫_t_obs-t_age^t_obs e^-(t - (t_obs-t_age))/τ dt, where M_⋆ is the total stellar mass, t_age is the lookback time when the SFH started, and τ represents the e-folding timescale. For each galaxy, we fix the stellar mass associated with each j^th pixel and draw N values of both t_age and τ from Gaussian distributions (with the 1σ uncertainty as the width). We take N = 100. This results in two sets of N draws: the first set considers the variation in age, keeping the stellar mass and τ fixed to their best-fit values, {(M_⋆, j, t_age,j^i, τ_j)}, and the second set considers the variation in τ, keeping the stellar mass and age fixed, {(M_⋆, j, t_age,j, τ_j^i)} ∀ i ∈ [1,N=100]. Using the above parameter sets and assuming a simple tau model (see Equation <ref>) results in two distinct sets of N SFHs for each pixel: one accounting for the errors in the stellar age t_age, {SFH_j(M_⋆, j, t_age,j^i, τ_j)}, and another accounting for the errors in τ, {SFH_j (M_⋆, j, t_age,j, τ_j^i)}. Next, we sum the SFHs {SFH_j} associated with the same varied parameter (τ or t_age) across all pixels, resulting in two sets of N global SFHs for the entire galaxy: {SFH_i(t)}_τ = {∑_j=1^No. of pixels SFH_j(M_⋆, j, t_age,j, τ_j^i)}, and {SFH_i(t)}_t_age = {∑_j=1^No. of pixels SFH_j(M_⋆, j, t_age,j^i, τ_j)} for i ∈ [1,N=100], where the subscript outside the curly bracket on the left-hand side indicates the parameter varied to obtain that set of SFHs.
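A compact numerical sketch of the per-pixel tau model and the pixel summation described in this subsection (evaluated before the median and averaging steps described next) is given below; the function names, the time-grid convention (ages of the universe in Gyr), and the clipping of negative Monte Carlo draws are assumptions of this illustration.

```python
import numpy as np

def tau_model_sfh(t_grid, m_star, t_age, tau, t_obs):
    """Exponentially declining (tau-model) SFH of one pixel.

    t_grid : ages of the universe (Gyr) at which to evaluate the SFH.
    m_star : stellar mass formed (Msun); t_age : lookback time at which the
    SFH starts (Gyr); tau : e-folding timescale (Gyr); t_obs : age of the
    universe at observation (Gyr).  Normalised so that the SFH integrates to
    m_star (SFR is therefore in Msun per Gyr for t in Gyr).
    """
    t_start = t_obs - t_age
    norm = m_star / (tau * (1.0 - np.exp(-t_age / tau)))  # analytic integral
    sfr = norm * np.exp(-(t_grid - t_start) / tau)
    return np.where(t_grid >= t_start, sfr, 0.0)

def resolved_sfh_draws(t_grid, pixels, t_obs, n_draws=100, vary="t_age", seed=0):
    """Sum the per-pixel SFHs over all pixels for n_draws Monte Carlo
    realisations, perturbing either t_age or tau within its 1-sigma error.

    `pixels` is a list of dicts with keys m_star, t_age, t_age_err, tau, tau_err.
    Returns an array of shape (n_draws, len(t_grid)).
    """
    rng = np.random.default_rng(seed)
    total = np.zeros((n_draws, len(t_grid)))
    for pix in pixels:
        for i in range(n_draws):
            t_age, tau = pix["t_age"], pix["tau"]
            if vary == "t_age":
                t_age = max(rng.normal(pix["t_age"], pix["t_age_err"]), 1e-3)
            else:
                tau = max(rng.normal(pix["tau"], pix["tau_err"]), 1e-3)
            total[i] += tau_model_sfh(t_grid, pix["m_star"], t_age, tau, t_obs)
    return total
```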
Then, to obtain the resolved SFHs that account for the error in τ (SFH_⋆, res, τ(t)) and in t_age (SFH_⋆, res, t_age(t)), we take the median of the respective sets containing N SFHs, SFH_⋆, res, τ(t) = median({SFH_i (t)}_τ), and SFH_⋆, res, t_age(t) = median({SFH_i (t)}_ t_age) ∀ i ∈ [1,N=100]. Finally, we calculate the global SFH (SFH_⋆, res (t)) by averaging the resolved SFHs accounting for the error in τ and t_age, SFH_⋆, res (t) = [SFH_⋆, res, τ(t) + SFH_⋆, res, t_age(t)]/2. <ref> is a schematic figure illustrating the global SFH obtained from the pixel-by-pixel information. The left panels of <ref> plot the stellar mass, age, and τ map of an example galaxy. The smooth maps highlight the spatially varying galaxy parameters used to calculate the SFHs. The right panel shows the global SFH inferred from the spatially resolved maps (we call this the spatially resolved SFH; SFH_⋆, res; red line), the SFH derived from the SED fitting of the total fluxes of all the pixels using the iSEDfit code (blue line; SFH_⋆, int,τ), the SFH derived from the Prospector model that assumes a simple tau model (green line; SFH_⋆, int,non-flex), and the SFH obtained from Prospector adopting a flexible SFH, which is our fiducial model in this paper (purple line; SFH_⋆, int,flex). The shaded regions indicate the 16^th-84^th percentile range in each case. §.§ SFH from integrated photometry We compare the SFHs from spatially resolved scales to the ones from integrated photometry. We obtain the integrated photometry by summing the fluxes of all pixels that belong to the galaxy, as identified in the stellar mass maps in Figure <ref>. This ensures that any differences in the SFHs and stellar masses are not based on aperture effects. We then derive SFHs and stellar masses from the integrated photometry in three different ways, assuming: (i) a simple tau model within iSEDfit (SFH_⋆, int,τ), (ii) a simple tau model within Prospector (SFH_⋆, int,non-flex), and (iii) a flexible, non-parametric model within Prospector (SFH_⋆, int,flex). We run iSEDfit (approach (i)) with the same setup as described in Section <ref> and <cit.>. We obtain a single value of the derived stellar mass, age, and τ for the entire galaxy from this approach. To ensure the propagation of errors while estimating the SFH from these derived parameter values, we extracted N parameter values within the 1σ uncertainty (similar to Section <ref>) using Gaussian distributions in τ and age. Finally, we take the 50^th percentile (Figure <ref>; solid blue line) of these SFHs and take the 16^th-84^th percentiles corresponding to the shaded region in blue (see Figure <ref>). For approaches (ii) and (iii), we use the Bayesian inference SED-fitting code Prospector <cit.>, which adopts the Flexible Stellar Population Synthesis (FSPS) package <cit.> for stellar population synthesis. In this work, we use the MIST stellar evolutionary tracks and isochrones <cit.>. We adopt a similar Prospector model to that outlined in <cit.>. Specifically, the redshift is fixed to the photometric redshift (or the spectroscopic redshift when available). We adopt a single stellar metallicity that is varied with a prior that is uniform in log(Z_⋆/Z_⊙) between -1.0 and 0.19, where Z_⊙=0.0142. We assume a flexible attenuation law, where we tie the strength of the UV dust absorption bump to the best-fit diffuse dust attenuation index, following the results of <cit.>. The dust attenuation law index n is a multiplicative factor relative to the <cit.> attenuation curve.
We parameterise the dust attenuation curve in accordance with the prescription outlined in <cit.>, τ_λ = (τ_V/4.05) (k(λ) + E_b D(λ)) (λ/5500 Å)^n, with E_b = 0.85 - 1.9n. In the above Equation <ref>, k(λ) is the (fixed) <cit.> attenuation curve, D(λ) is a Lorentzian-like Drude profile describing the UV dust bump at 2175 Å, E_b represents an empirical correlation between the strength of the 2175 Å dust absorption bump and the slope of the curve, and τ_V is the optical depth of the diffuse component in the V band. The free parameters in this equation are τ_V, which controls the normalisation of the diffuse dust, and n. We assume a flat prior for n ∈ (-1, 0.4). We run two different versions of the Prospector model based on different assumptions regarding the SFH. In approach (ii), we assume a simple τ model that has three free parameters: the total stellar mass (uniform prior in log-space between 10^9 and 10^12 M_⊙), the start of the SFH (flat prior between 0.001 Gyr and 85% of the age of the universe at the galaxy's redshift), and τ (uniform prior in log-space between 0.01 and 30 Gyr). On the other hand, in approach (iii), we adopt a flexible model, where we assume that the SFH is step-wise constant in 8 time bins. We fit for the ratio of the SFR in those bins (7 free parameters) plus the total stellar mass. We use the standard continuity prior <cit.> and assume a uniform prior for the stellar mass in the range of 10^9 to 10^12 M_⊙. The first three time bins are fixed to 0-30, 30-100, and 100-300 Myr, while the remaining bins are spaced logarithmically in time up to a lookback time of 85% of the age of the universe at the galaxy's redshift. Throughout this work, we assume that the flexible Prospector model is our fiducial model against which we compare the other SFHs. Recent works have highlighted the flexibility of these models compared to parametric models <cit.>. We choose the flexible Prospector model as our fiducial model because it has been widely adopted in the literature <cit.> and integrated photometry is readily available (in comparison with spatially resolved SEDs). The purpose of choosing a fiducial model is to test how well different SFH models are able to recover the properties of galaxies and agree with each other. It is important to note that the paper is not aimed at ground-truthing the accuracy of various SFH approaches but at testing the reliability of the different approaches when compared to each other. In principle, we expect that the SFH from resolved SED fitting should give the most reliable results, because when analyzing a single pixel, the star-dust geometry is simplified, and there may be less variation in age and metallicity, making the results more trustworthy. However, since many pixels need to be fit, the SED models on spatially resolved scales usually make simplified assumptions (for example, a simple tau model for the SFH in our case here) in order to shorten the run-time of the fitting (see Section <ref>). § RESULTS In this section, we compare and discuss the differences between the galaxies' reconstructed SFHs using all the approaches described in Sections <ref> and <ref>. We aim to evaluate the consistency with which we can infer a galaxy's physical properties from the four different approaches.
Our analysis demonstrates that the spatially resolved SFHs (SFH_⋆,res) and the flexible Prospector SFH (SFH_⋆, int,flex) are the most consistent with each other in deriving the galaxies' physical properties, while the simpler, parametric SFH approaches are too simplistic to account for the physical diversity of the SFHs. §.§ Overall shapes of galaxy SFHs <ref> shows the recovered SFHs for randomly selected galaxies using the four approaches discussed in Sections <ref> and <ref>. We compare the SFHs obtained from the pixel-by-pixel SED fitting method (SFH_⋆,res; red line), the SED fitting of the total fluxes of all the pixels using the iSEDfit code (SFH_⋆,int,τ; blue line), the parametric fitting model from the Prospector modelling (SFH_⋆,int,non-flex; green line), and the flexible non-parametric fitting model from the Prospector modelling (SFH_⋆,int,flex; purple line), taken as the fiducial one. The shaded regions indicate the 16^th - 84^th percentiles. For most of the galaxies, SFH_⋆,res traces the fiducial SFH_⋆,int,flex much better than SFH_⋆,int,τ and SFH_⋆,int,non-flex do. The most prominent feature of SFH_⋆,res is that it is able to capture the stochastic behaviour of SF expected from the physical SFH of a galaxy better than the other two models when compared to our fiducial one. Furthermore, SFH_⋆,int,τ and SFH_⋆,int,non-flex fail to match the variations and amplitude of our fiducial SFHs at early cosmic times for most of the galaxies. This may result in missing a significant portion of the formed stellar mass, potentially leading to an underestimation of the stellar masses of galaxies and other related properties that rely on the assumed shape of the SFHs. §.§ Differences in Stellar Masses We adopt the integral of the SFH as the stellar mass throughout this work. We compare the stellar masses obtained from Prospector assuming a flexible non-parametric SFH (M_⋆,int,flex) with the masses we get from the spatially resolved SFHs (M_⋆,res), with the masses obtained by fitting the integrated photometry using a simple tau model within iSEDfit (M_⋆,int,τ), and with those from Prospector assuming a non-flexible SFH (M_⋆,int,non-flex). This allows us to understand the systematic differences in the stellar masses estimated from the different SFHs. The mass differences of M_⋆,int,τ, M_⋆,int,non-flex and M_⋆,res with respect to the fiducial mass (M_⋆,int,flex) are shown in <ref>. To gain insights into the physical processes driving the observed differences over redshift, we divided the galaxies into 0.5<z<1.3 and 1.3<z<2.0. For the redshift range of 0.5<z<1.3 (left panel), the median difference between M_⋆,int,flex and M_⋆,res is -0.08 dex. The absolute value of the difference is nearly the same, ∼0.07 dex, for both redshift ranges, but it tends to be positive for the redshift range of 1.3<z<2 (right panel). However, the median differences between M_⋆,int,flex and M_⋆,int,τ are -0.12 dex and -0.14 dex for the left and right panels, respectively. Also, the median differences between M_⋆,int,flex and M_⋆,int,non-flex are -0.3 dex and -0.36 dex for the left and right panels, respectively. This suggests that M_⋆,res is in better agreement with the masses obtained from our fiducial model, M_⋆,int,flex, than M_⋆,int,τ and M_⋆,int,non-flex are. One cause for the differences in the stellar masses is the inability of SFH_⋆,int,τ and SFH_⋆,int,non-flex to capture early, prolonged SF activity, as discussed in Section <ref>.
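For reference, the summary statistics compared in this and the following subsections (the stellar mass as the integral of the SFH, the half-mass age t_50, and the sSFR averaged over the last 100 Myr) can be computed from a tabulated SFH as sketched below; the units and function names are assumptions of this sketch.

```python
import numpy as np

def sfh_summary(t_gyr, sfr, t_obs_gyr):
    """Summary statistics of a tabulated SFH.

    t_gyr : ages of the universe (Gyr) at which the SFH is sampled.
    sfr   : star-formation rate (Msun/yr) at those ages.
    Returns the stellar mass formed (Msun), the half-mass lookback time
    t_50 (Gyr), and the sSFR averaged over the last 100 Myr (1/yr).
    """
    t_yr = t_gyr * 1e9
    # trapezoidal mass formed in each time step and its cumulative sum
    dm = 0.5 * (sfr[1:] + sfr[:-1]) * np.diff(t_yr)
    m_cum = np.concatenate([[0.0], np.cumsum(dm)])
    m_formed = m_cum[-1]                      # integral of the SFH
    # age of the universe at which half of the mass was in place
    t_half = np.interp(0.5 * m_formed, m_cum, t_gyr)
    t50_lookback = t_obs_gyr - t_half
    # mean SFR over the last 100 Myr divided by the total mass formed
    recent = t_gyr >= (t_obs_gyr - 0.1)
    sfr_100 = np.mean(sfr[recent]) if np.any(recent) else 0.0
    ssfr_100 = sfr_100 / m_formed
    return m_formed, t50_lookback, ssfr_100

# The offset between two mass estimates in dex would then be, e.g.,
# delta_dex = np.log10(m_res) - np.log10(m_int_flex)
```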
§.§ Mass-weighted ages recovery To further investigate the impact of the captured details of the SF activity in the shape of the SFHs, this following section shows the comparison between mass-weighted stellar ages (t_50) from different SFHs. The t_50 of a galaxy corresponds to the age when it had assembled half of its total stellar mass. <ref> plots the differences in t_50 from different SFHs as a function of the t_50 obtained from our fiducial model. The differences refer to the difference between the t_50 obtained from different models (t_50,⋆,res, t_50,⋆,int,τ, t_50,⋆,int,non-flex) and the t_50 obtained from the fiducial model (t_50,⋆,int,flex). For the redshift range 0.5 < z < 1.3 (left panel), the t_50,⋆,res underestimates the t_50 for the galaxies by -0.14 dexwhen compared to the t_50,⋆,int,flex. However, for the redshift range 1.3 < z < 2.0 (right panel), overall the differences between the t_50,⋆,res and t_50,⋆,flex is the least (-0.04 dex) amongst the three described models. The average of median differences between the t_50,⋆,int,τ and t_50,⋆,int,flex is -0.49 dex and -0.34 dex for left and right panels respectively. Also, the average of median differences between the t_50,⋆,int,non-flex and t_50,⋆,int,flex are -0.3 dex and -0.32 dex for left and right panels respectively.For the galaxies at 0.5 < z < 1.3, the SFH shape favours the younger stellar population. It is due to the formation of massive stars population in actively SF galaxies at lower redshifts, which, in turn, outshines the older stellar population leading to the skewness of the SFH towards the late cosmic time. We will discuss this in detail in Section <ref>.To further look into the impact of outshining on the derived SFHs and the inferred galaxy properties, the following section compares the sSFR of the galaxies with the other galaxy properties.§.§ Correlating sSFRs with other galaxy properties <ref> plots the differences in sSFR measured over the last 100 Myr obtained from different SFHs as a function of the sSFRs obtained from our fiducial model. The differences refer to the difference between the sSFR obtained from the three models (sSFR_⋆,res, sSFR_⋆,int,τ, sSFR_⋆,int,non-flex) and the sSFR obtained from the fiducial model (sSFR_⋆,int,flex). The left and right panel plots the sSFRs in the two redshift ranges: 0.5 < z < 1.3 and 1.3 < z < 2.0. For galaxies with low sSFR values (log (sSFR_⋆,int,flex/yr) < -10) inferred from the fiducial model, the sSFR_⋆,res tends to provide somewhat higher estimates compared to those inferred from the fiducial model. In contrast, for galaxies with high sSFR values (log (sSFR_⋆,int,flex/yr) > -10) from the fiducial model, sSFR_⋆,res either slightly underestimates or traces the sSFR_⋆,int,flex quite well. The average median differences between sSFR_⋆,res and sSFR_⋆,int,flex are -0.1 dex for 0.5 < z < 1.3 and -0.02 dex for 1.3 < z < 2.0. This overestimation of sSFR for galaxies with little or no SF and the underestimation of sSFR for the actively star-forming galaxies when compared to sSFR_⋆,int,flex can be attributed to the SFH model choice for each of the pixels. We will discuss this later in detail in Section <ref>. On the other hand, the average median differences between sSFR_⋆,int,non-flex and sSFR_⋆,int,flex are 0.1 dex and 0.22 dex for low and high redshift galaxies, respectively. 
Furthermore, the sSFR_⋆,int,τ shows a different trend, where the average median differences between sSFR_⋆,int,τ and sSFR_⋆,int,flex are -0.08 dex for low redshift galaxies (left panel) and 0.16 dex for high redshift galaxies (right panel). Overall, the average median differences in our results show that the galaxy properties inferred from the SFH_⋆,res are in better agreement with those inferred with the flexibleSFH model (the fiducial model) when compared to the properties inferred from the other two approaches. § DISCUSSIONIn this section, we discuss the implications of how the assumed SFH models and the spatial resolution affect the inferred physical properties of the galaxies. Overall, our analysis supports that the SFHs obtained from spatially resolved scales (SFH_⋆,res) provide insights into the galaxies' internal structure and assembly history better than the SFHs obtained from simple parametric forms on the integrated scales (SFH_⋆, int,τ and SFH_⋆, int,non-flex).We will conclude this section by discussing the limitations of this work. §.§ The need of flexibility in SFH Section <ref> and Figure <ref> illustrate a strong correspondence between the SFH_⋆,res and the flexible SFH model of(SFH_⋆, int,flex; our fiducial SFHs) using integrated photometry. Additionally, it points out the challenge of SFH_⋆, int,τ and SFH_⋆, int,non-flex to match the amplitude of the fiducial SFH_⋆, int,flex, especially at large lookback times for most galaxies. The pixel-by-pixel SED fitting model allows for variations in the best-fit values of stellar mass, stellar age, and timescale (τ) for each pixel, resulting in spatially resolved colour gradient; mass,age, and τ maps, as shown in Figure <ref> (Broader Parameter Space). This, in turn, leads to a more stochastic SFH_⋆,res compared to SFHs constructed using a single best-fit value of stellar mass, stellar age and τ from the integrated scales (SFH_⋆, int,τ and SFH_⋆, int,non-flex). The burstiness in SF represents the ISM physics and feedback processes, acting on spatial and temporal scales within galaxies, that outline the galaxy's evolutionary pathways <cit.>. Figure <ref> demonstrates how SFH_⋆,res accurately captures the complex SFH with burstiness in SF activity occurring on kpc scales rather than galaxy-wide, represented by the peaks and lows of SFR. Furthermore, the observed stochasticity of the SFHs reveals the older population of stars that otherwise remains hidden due to the outshining effects <cit.>.Our findings agree with the work of <cit.>, which demonstrated that a spatially resolved analysis could reveal the existence of older underlying stellar populations that are otherwise outshined in integrated analyses, significantly impacting our understanding of these galaxies' nature <cit.>. This is because the domination of strong emission lines drives the need to fit the integrated light with extremely young stellar populations. This explains the shift of the peaks in SFH_⋆, int,τ and SFH_⋆, int,non-flex towards the late cosmic times and the extension of SFH_⋆,res over a wider age range of galaxies (see Figure <ref>). Therefore, SFH_⋆,res can better trace the fiducial SFH_⋆, int,flex than SFH_⋆, int,τ and SFH_⋆, int,non-flex. Moreover, it can provide a deeper understanding of the physical mechanisms that govern SF within galaxies. 
Besides shedding light on the SF activity within galaxies, we aim to investigate whether spatially resolved observations can enable us to infer galaxies' properties consistently, when compared to inferred properties from the SFH_⋆, int,τ and SFH_⋆, int,non-flex. For this, the following section explores the reasons for the consistency and biases observed in the galaxy properties inferred from different model SFHs presented in Section <ref>. §.§ Impact of assumed spatially resolved SFHs on galaxy properties In Section <ref>, we presented a systematic discrepancy in inferred galaxy properties from different SFH models when compared to the fiducial SFH_⋆, int, flex. However, this discrepancy is within the uncertainties of galaxy properties' estimates due to stellar synthesis modelling <cit.>. In this section, we will try to understand the reason for this discrepancy in the inferred properties of the galaxies from different SFH approaches. <ref> summarises these biases, which shows that the SFH_⋆,res have the least and closest to 0 offsets for stellar mass, t_50, sSFR and τ_SF from those inferred using fiducial SFH_⋆, int,flex.§.§.§ Recovered Stellar Masses Section <ref> presented in detail the biases introduced in stellar mass estimates by different assumed SFHs on spatially resolved and unresolved scales when compared with those inferred from the fiducial SFH_⋆, int, flex.To understand the bias, we plot the stellar mass differences against other physical properties to determine any trends we might be missing. Figure <ref> plots the stellar mass differences against the stellar masses, redshifts, sSFR, half-light mass radius (R_50), t_50 and τ_SF. In the study by <cit.>, the observed inconsistency was attributed to the outshining effect <cit.>.We investigate this by focusing on the stellar mass differences against the redshifts z, sSFR, and t_50. We observe that the discrepancy in the inferred masses from SFH_⋆,res almost vanishes for actively star-forming galaxies (sSFR > 10^-10 yr^-1), i.e., the galaxies with younger stellar populations. However, the SFH_⋆, int,τ and SFH_⋆, int,non-flex still underestimate the inferred stellar mass estimates for these galaxies. On the other hand, the galaxies with the lower t_50, implying the galaxies which recently assembled 50% of the stellar masses are expected to be dominated by younger stellar populations. In these younger galaxies, the observed discrepancy between the SFH_⋆,res and fiducial SFH_⋆, int, flex almost vanishes. However, the underestimation of masses from SFH_⋆, int,τ and SFH_⋆, int,non-flex remains. This is because the young stellar population outshines the older population, resulting in an omission of a significant portion of the older stellar masses formed in the galaxies.Similar reasoning can be applied for the observed underestimation of masses from SFH_⋆, int,τ and SFH_⋆, int,non-flex with the redshift of the galaxies. The underestimation of masses can be attributed to the dominance of young stellar populations and hence outshining effects. Furthermore, we observe no noticeable trend of the mass differences with stellar masses, τ_SF and R_50. Therefore, the work highlights the importance of the spatially resolved scales to recover the older stellar masses in galaxies otherwise obscured due to the outshining effects. 
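One simple way to quantify the trends described above is to compute the median stellar-mass offset in bins of a second galaxy property (here the fiducial sSFR). The short sketch below is illustrative only and does not correspond to a specific routine in our pipeline; all names are ours.

[language=Python]
import numpy as np

def median_offsets(logm_model, logm_fiducial, log_ssfr_fiducial, bin_edges):
    # Median stellar-mass offset (in dex) relative to the fiducial model,
    # computed in bins of the fiducial log sSFR to expose outshining-driven trends.
    delta = np.asarray(logm_model) - np.asarray(logm_fiducial)
    log_ssfr = np.asarray(log_ssfr_fiducial)
    medians = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (log_ssfr >= lo) & (log_ssfr < hi)
        medians.append(np.median(delta[in_bin]) if np.any(in_bin) else np.nan)
    return np.array(medians)

# e.g. split roughly at log(sSFR/yr) = -10 to separate quiescent from star-forming galaxies
bin_edges = np.array([-13.0, -10.0, -8.0])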
§.§.§ Recovered mass-weighted ages and sSFRs As shown in Section <ref>, according to our analysis, all three SFH models, including SFH_⋆,res and parametric SFHs obtained from integrated scales (SFH_⋆, int,τ and SFH_⋆, int,non-flex), tend to provide lower estimates of t_50 when compared to the fiducial SFH_⋆, int, flex. It is important to note that in these comparisons, the flexible Prospector model, SFH_⋆, int, flex, serves as our reference rather than the ground truth. Specifically, for redshifts 0.5 < z < 1.3, thet_50 estimates inferred using SFH_⋆,res are lower by 0.14 dex, while t_50 estimates from SFH_⋆, int,τ and SFH_⋆, int,non-flex are lower by>0.25 dex relative to the t_50 inferred using SFH_⋆, int, flex. However, we find that for redshifts 1.3 < z < 2.0, the differences between the t_50 values obtained from SFH_⋆,res and the ones from the fiducial SFH_⋆, int, flex almost vanish (∼0.04 dex). On the other hand, the differences remain larger than 0.25 dex for SFH_⋆, int,τ and SFH_⋆, int,non-flex with SFH_⋆, int, flex <cit.>. Our study suggests that the significant lower estimates of the t_50 distributions in lookback time with the t_50 estimates from SFH_⋆, int, flex is due to the skewed SFHs towards the late cosmic time for SFH_⋆, int,τ and SFH_⋆, int,non-flex. This skewness, primarily due to outshining effects, obscures the extended process of early galaxy formation and affects the estimated t_50 by several orders of magnitudeFurthermore, these lower estimated values of t_50 agree with the findings of <cit.>, who also reported extremely young inferred ages of the stellar population from the SFHs obtained on integrated scales.Moreover, the higher estimated values of sSFR over the last 100 My by sSFR_⋆,int,τ and sSFR_⋆,int,non-flex when compared to the sSFR_⋆,int,flex could also be due to the problem of outshining as stellar ages are intricately linked to the determination of sSFRs. As discussed above, the priors that inform the stellar age distribution can lead to the shifted peak of SFH towards the late cosmic time. The skewed peak, in turn, results in the overestimation of sSFR (See Figure <ref>). However, from SFH_⋆,res, we observe the sSFRs over the last 100 My are underestimated for the actively star-forming galaxies (log (sSFR_⋆,int,flex) > -10) according to the sSFR_⋆,int,flex. This could be because of the trade-off between accurately inferring stellar age and sSFR for a declining SFH defined by the tau model. When adding the SFHs of all the pixels, the incorrect position of the SFHs' peak, determined by the τ parameter of the model, prevents the recovery of the correct sSFR for recent times. The simple tau model adopted for defining the SFH of each pixel does not consider the increasing SFR. This could be one reason for the inaccuracy of the SFH_⋆,res to infer the accurate sSFRs when compared to the fiducial sSFR_⋆,int,flex for the highly star-forming galaxies.On the other hand, for lower estimates of fiducial sSFR_⋆,int,flex, our analysis reveals that even for galaxies with minimal or no SF, SFH_⋆,res still infers a significant SFR. This is due to the uncertainties associated with the stellar ages of a few of the pixels. The stellar age is one parameter that defines the SFR of the pixels, hence, contributing to the overall SFH of the galaxy. 
These uncertainties in stellar ages can be attributed to the overestimation of sSFR for galaxies with little or no SF.To address these issues, incorporating other SFH parametrisations, such as the delayed tau model, to define the SFH of each pixel can be tested. This would allow for a rising SFH for both early and late cosmic time for each pixel's SFH. However, testing these parametrisations is outside the scope of this paper. The key takeaway here is that for actively star-forming galaxies (those with log (sSFR_⋆,int,flex) > -10), sSFR inferred from SFH_⋆,res are not overestimated when compared to those obtained using the fiducial SFH_⋆,int,flex, indicating that the SFHs obtained from the pixel-by-pixel SED fitting method could counteract the outshining effects.In summary, spatially resolved SFHs offer a more effective approach to counteract the outshining effects when determining the physical properties of galaxies.§.§ Limitations and future outlookAlthough this work's pixel-by-pixel SED fitting approach demonstrates the consistency of inferred galaxy properties, it is only the first step in gaining new insights into the galaxy evolution and formation process. We can use the inferred galaxy properties to estimate and compare the growth in the center versus outskirts of galaxies. This can further shed light on inside-out growth pattern of galaxies. Here are a few considerations that should be kept in mind for additional future work:* Different assumptions such as additional random burst on top of a constant or delayed SFH can alter the estimated stellar masses and other galaxy properties <cit.>. Incorporating these other SFH parametrisations to define the SFH of each pixel can be tested to better constrain the inferred physical properties of the galaxies.* When analyzing data with a certain pixel scale, the best level of detail is achieved by examining the resolved SEDs within each pixel. However, it is important to consider the signal-to-noise within these pixels, particularly towards the outskirts. In some cases, the signal-to-noise may be too low, leading to unreliable estimations of resolved stellar population properties. To address this issue, we can apply the Voronoi binning method <cit.> that groups pixels based on reaching a desired S/N threshold in multiple resolved filters.* Additionally, it is worth noting that the signal in the data exhibit a correlation between adjacent pixels. This effect is particularly important when the spatial resolution of the instrument, represented by the point spread function (PSF), is larger than the size of the individual pixels. In such cases, neighboring pixels may share some level of information, potentially impacting the accuracy of derived galaxy properties. * Having an absolute truth against which we could compare our derived quantities would be ideal. A possible approach is to conduct our analysis of pixel-by-pixel SED fitting on mock observations from 3D radiative transfer calculations from hydrodynamical simulation <cit.>. § CONCLUSIONSWe present detailed measurements of SFHs both on global and spatially-resolved scales for a sample of ∼970 distant galaxies with redshifts z = 0.5-2.0 to better understand the systematics involved when estimating galaxy properties. On spatially resolved scales, we derive the SFH of a galaxy by summing the SFHs of individual pixels obtained using pixel-by-pixel SED fitting adopting iSEDfit (SFH_⋆, res). 
On global scales, we fit the integrated photometry using (i) a simple tau model within iSEDfit (SFH_⋆, int,τ), (ii) a simple tau model within(SFH_⋆, int,non-flex), and (iii) a flexible, non-parametric model within(SFH_⋆, int,flex), which we adopted as our fiducial model for the comparison.Our main findings and conclusions are following:* Both SFH_⋆, res from spatially resolved scales and SFH_⋆, int,flex from the flexiblemodel lead to a large diversity of inferred SFHs. Importantly, as shown in Fig. <ref> (see also Fig. <ref>), SFH_⋆, res and SFH_⋆, int,flex agree well with each other, while more simplistic, tau-based models (SFH_⋆, int,τ and SFH_⋆, int,non-flex) are not able to capture this large diversity: they are only consistent with SFH_⋆, res and SFH_⋆, int,flex in recent lookback times, while missing early star formation. * This has direct consequences on the inferred stellar population parameters, in particular the stellar masses (Fig. <ref>), stellar ages (Fig. <ref>) and sSFR (Fig. <ref>). Specifically, we find a median stellar mass difference of ∼ 0.1-0.4 dex (∼ 25 % -152 %) between the masses obtained from unresolved, tau-model SFHs (SFH_⋆, int,τ and SFH_⋆, int,non-flex) and the fiducial SFH_⋆, int,flex, which reduces to only ∼ 0.07 dex (∼ 18 %) when using SFH_⋆, res and SFH_⋆, int,flex. Similarly, mass-weighted ages are lower by 0.3-0.5 dex (∼ 99 % - 217 %) in the case of SFH_⋆, int,τ and SFH_⋆, int,non-flex in comparison with SFH_⋆, int,flex, while this difference reduces significantly (to 0.1 dex; ∼ 25 %) when comparing the ages from SFH_⋆, res and SFH_⋆, int,flex. * These differences are connected: the limited flexibility of the SFH shape of the tau model captures only the recent SFH, thereby missing early star formation and hence underestimates the stellar age and stellar mass. The tau-model mainly captures the recent SFH because the young stellar populations dominate the SED, meaning that the younger stellar populations are outshining the older stellar populations. Both the SFH_⋆, res from spatially resolved scales and the SFH_⋆, int,flex from the flexiblemodel are less affected from outshining because outshining typically only affects certain spatial regions and the prior in the non-parametric SFH approach weights against very young stellar populations, respectively.In summary, the SFHs on spatially resolved scales motivate flexible SFHs on global scales. In light of JWST and high-redshift galaxies, in which SFHs are bursty and outshining is a crucial factor, detailed studies of the SFH on spatially resolved scales in connection with flexible SFHs on global scales are needed in the future. § DATA AVAILABILITY Data available on request.mnras§ RECOVERING STAR-FORMATION TIMESCALES<ref> plots the differences in τ_SF from different SFHs as a function of the τ_SF obtained from our fiducial model. The differences refer to the difference between the τ_SF obtained from different models (τ_SF,⋆,res, τ_SF,⋆,int,τ, τ_SF,⋆,int,non-flex) and the τ_SF obtained from the fiducial model (τ_SF,⋆,int,flex). For the redshift range 0.5 < z < 1.3 (left panel), the τ_SF,⋆,res on an average overestimates the τ_SF for the galaxies when compared to the τ_SF,⋆,int,flex. However, for both the redshift ranges 0.5 < z < 1.3 (left panel) and 1.3 < z < 2.0 (right panel), overall the differences between the τ_SF,⋆,res and τ_SF,⋆,int,flex is the least among the three described models. The average of median differences between τ_SF,⋆,res and τ_SF,⋆,int,flex is ∼ 0.06 dex for both the redshift ranges. 
However, the average median differences between the τ_SF,⋆,int,τ and τ_SF,⋆,int,flex is 0.17 dex and 0.16 dex for left and right panels respectively. Also, the average median differences between the τ_SF,⋆,int,non-flex and τ_SF,⋆,int,flex are 0.13 dex and 0.14 dex for left and right panels respectively.The τ_SF is a crucial parameter for understanding how galaxies evolve through SF activity. However, accurately determining this timescale from SFHs obtained using integrated scales can be challenging due to the limited prior parameter space. The estimated τ_SF by the simple parametric SFHs on the integrated scales may only be able to account for a few of the peaks associated with different SF phenomena. As a result, these SFHs can either underestimate or overestimate the timescale.In particular, in our case, the overestimation of the τ_SF by τ_SF,⋆,int,τ and τ_SF,⋆,int,non-flex when compared to τ_SF,⋆,int,flex occur due to the outshining effect. This effect causes most of the SF to occur later in cosmic time, which can skew the SFH towards later times and lead to an overestimation of τ_SF. However, when we consider τ_SF,⋆,res, this overestimation almost disappears, reducing to an average of 0.06 dex. Therefore, it is crucial to consider the limitations of simple parametric SFHs when estimating the τ_SF. Additionally, the results highlight that the SFH on spatially resolved scales can better recover the τ_SF and hence, we can better understand the timescale at which galaxy evolution occurs through SF activity.§ RELIABILITY OF THE PROPOSED FLEXIBILITYIn Figure <ref>, we present the distribution of inferred galaxy properties from different model SFHs in our study. The solid orange bars show the distribution of the fraction of galaxies for each bin/range of the galaxy properties inferred from the fiducial model. The galaxy properties from the other three models we compare include the spatially resolved model (red), a simple tau model within iSEDfit (blue), andmodel that assumes tau-SFH (green).We find that the physical properties inferred form SFH_⋆,res exhibits the highest level of consistency with the fiducial SFH_⋆,int,flex. This is evident as the majority of galaxy properties inferred from SFH_⋆,res trace the stellar mass, t_50, and τ_SF bins of those inferred using fiducial SFH_⋆,int,flex, with only ∼ 2%, ∼ 17%, and ∼ 33% galaxies falling outside these boundaries. In contrast, when considering SFH_⋆,int,τ, a larger proportion of galaxies (∼ 12%, ∼ 34%, and ∼ 72%) are unable to trace the stellar mass, t_50, and τ_SF bins inferred using fiducial SFH_⋆,int,flex. Similarly, SFH_⋆,int,non-flex results in ∼ 28%, ∼ 30%, and ∼ 46% of galaxies falling outside these bins. For sSFR, the percentage of galaxies falling outside of the galaxy properties' bins from the fiducial model are ∼ 26%, ∼ 11%, and ∼ 11% for the three models, respectively. As discussed in Section <ref>, this can be attributed to the trade-off between the estimation of mass-weighted age and sSFR. | http://arxiv.org/abs/2310.18462v1 | {
"authors": [
"Shweta Jain",
"Sandro Tacchella",
"Moein Mosleh"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231027201522",
"title": "The motivation for flexible star-formation histories from spatially resolved scales within galaxies"
} |
Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models
Pushkal Katara, Zhou Xian, Katerina Fragkiadaki
==========================================================
Generalist robot manipulators need to learn a wide variety of manipulation skills across diverse environments. Current robot training pipelines rely on humans to provide kinesthetic demonstrations or to program simulation environments and to code up reward functions for reinforcement learning. Such human involvement is an important bottleneck towards scaling up robot learning across diverse tasks and environments. We propose Generation to Simulation (Gen2Sim), a method for scaling up robot skill learning in simulation by automating the generation of 3D assets, task descriptions, task decompositions and reward functions using large pre-trained generative models of language and vision. We generate 3D assets for simulation by lifting open-world 2D object-centric images to 3D using image diffusion models and querying LLMs to determine plausible physics parameters. Given URDF files of generated and human-developed assets, we chain-of-thought prompt LLMs to map these to relevant task descriptions, temporal decompositions, and corresponding Python reward functions for reinforcement learning. We show Gen2Sim succeeds in learning policies for diverse long-horizon tasks, where reinforcement learning with non-temporally-decomposed reward functions fails. Gen2Sim provides a viable path for scaling up reinforcement learning for robot manipulators in simulation, both by diversifying and expanding task and environment development, and by facilitating the discovery of reinforcement-learned behaviors through temporal task decomposition in RL. Our work contributes hundreds of simulated assets, tasks and demonstrations, taking a step towards fully autonomous robotic manipulation skill acquisition in simulation.

§ INTRODUCTION

Scaling up training data has been a driving force behind the recent revolutions in language modeling <cit.>, image understanding <cit.>, speech recognition <cit.>, and image generation <cit.>, to name a few. This begs the question: can we scale up robot data to enable a similar revolution in robotic skill learning? One way to scale robot data is in the real world, by having multiple robots explore <cit.> or by having humans provide kinesthetic demonstrations <cit.>. This is a promising direction; however, safety concerns and wear and tear of the robots hinder robot exploration in the real world, and collecting kinesthetic demonstrations scales poorly as it is time-consuming and labor-intensive <cit.>. Another way to scale robot data is in simulation, by developing simulated environments, defining tasks and their reward functions, and training robot policies with reinforcement learning, augmenting visuals and physics parameters to facilitate transfer of policies to the real world <cit.>. Such a sim2real paradigm has seen recent successes in robot locomotion <cit.>, object re-orientation <cit.>, and drone flight <cit.>. These examples, though very important and exciting, are still fairly isolated. A central bottleneck towards scaling up simulation environments and tasks is the laborious manual effort needed for developing the visuals and physics of assets, their spatial arrangement and configurations, the development of task definitions and reward functions, or the collection of programmatic demonstrations.
Tremendous resources have been invested in developing simulators for autonomous vehicles <cit.>, warehouse robots, articulated objects <cit.>, home environments <cit.>, etc., many of which are proprietary and not open-sourced.Given these considerations, an important question naturally arises: How can we minimize manual effort in simulation development for diverse robotic skill learning?In this paper, we explore automatingthe development of simulation environments, manipulation tasks and rewards for robot skill learning, by building uponlatest advances in large pre-trained generative models of images and language. Our system strives to automate all stages of robot learning: from generating 3D assets, textures, and physics parameters, to generating task descriptions and reward functions, leading to automated skill learning in diverse scenarios, as shown in Figure <ref>. This generative pipeline was first proposed in a recent position paper <cit.>, described as a promising pathway towards generating diverse data for generalist robot learning. In this paper, we present , the first attempt and realization of such a generative paradigm. We automate 3D object asset generation by combining image diffusion models for 3D mesh and texture generation, and LLMs for querying physical parameters information. We showcase how LLMs and image generative models can diversify the appearances and behaviors of assets by producing plausible ranges of textures, sizes and physical parameters, achieving “intelligent" domain diversification.We automate task description, task decomposition and reward function generation by few-shot prompting of LLMs to generate language descriptions for semantically meaningful tasks, concerning affordances of existing and generated 3D assets, articulated or not, alongside their reward functions.is able to generate numerous object assets and task variations without any human involvement beyond few LLM prompt designs. We successfully train RL policies using our auto-generated tasks and reward functions. We also demonstrate the usefulness of our simulation-trained policies, by constructingdigital-twin environments from given real scenes, allowing a robot to practice skills in the twin simulator and deploying it back to the real world to execute the task. In summary, we make the following contributions: * We show how pre-trained generative models of images and language can help automate 3D asset generation and diversification, task description generation, task decomposition and reward function generation that supports reinforcement learning of long horizon tasks in simulation with minimal human involvement.* We deploy our method to generate hundreds of assets, and hundreds of manipulation tasks, their decompositions andtheir reward functions, for both human-developed andautomatically generated object assets. For code, videos andqualitative video results, please visit our project website: <https://gen2sim.github.io/>. § RELATED WORKLarge Language Models for task and motion planning in robotics Large language models (LLMs)map instructions to language subgoals <cit.> or action programs <cit.> with appropriate plan-like or program-like prompts. 
LLMs trained from Internet-scale text have shown impressive zero-shot reasoning capabilities for a variety of downstream language tasks <cit.> when prompted appropriately, without any weight fine-tuning <cit.>.LLMs were used to generate task curricula and predict skills to execute in Minecraft worlds <cit.> Following the seminal work of Code as Policies, many works map language to programs over given skills <cit.> or hand-designed motion planners <cit.>.Our work instead maps task descriptions into task decompositions and reward functions, to guide reinforcement learning in simulation, to discover behaviours thatachieve the generated tasks. Work of <cit.> also uses language for predicting reward functions for robot locomotion, but does not consider task generation and decomposition or interaction with objects. Our work is the first to use LLMs for task decomposition and reward generation, as well as asset generation.Automating 3D asset creation with generative modelsThe traditional process of creating 3D assets typically involves multiple labor-intensive stages, including geometry modeling, shape baking, UV mapping, material creation,texturing and physics parameter estimation, where different software tools and the expertise of skilled artists are often required. It is thus desirable to automate 3D asset generation to automatically generate high-quality assets that support realistic renderingunder arbitrary views and have plausible physical behaviours during force application and contacts.The lack of available 3D data and the abundance of 2D image data have stimulated interest in learning 3Dmodels from2D image generators <cit.>.The availability of strong 2D image generative models based on diffusion led to high-quality 3D models from text descriptions <cit.> or single 2D images using the diffusion model as a 2D prior <cit.>. In this work, instead of a text-conditioned model, we use a view and relative pose conditioned image generative model, which we found to provide better prior for score distillation.Some methods attempt to use videos of assets and differentiable simulations to estimate their physics parameters and/or adapt the simulation environment, in an attempt to close the simulation to reality gap <cit.>. Our effort is complementary to these works. Procedural demonstration generation using symbolic plannersMany recent works procedurally generatescenes and demonstration trajectories using planners that have access to privileged information to solve the task, and distill the demonstration solutions into learning-based policies that operate directly from pixel or point-cloud input <cit.>. Task and motion planners<cit.> use predefined symbolic rules and known dynamics models, and infer discrete task plans given instruction with lookahead logic search <cit.>. These methods predominantly rely on manually-specified symbolic transition rules, planning domains, and grounding, which limits their applicability. Indeed, works of <cit.> demonstrate their results on relatively simple multi-object box stacking tasks.Scene procedural generation in the aforementioned works <cit.> entails randomizing locations and number of given 3D models under weak supervision from a human that defines the task and the possible location candidates.In contrast, we unleash the common sense knowledge and reasoning capabilities provided by LLMs and use them to suggest task descriptions, task decompositions, and reward functions. 
We then use reinforcement learning to discover solution trajectories instead of TAMP-based search.

Simulation environments for robotic skill learning In recent years, improving simulators for robot manipulation has attracted increasingly more attention. Many robotic manipulation environments and benchmarks <cit.> are built on top of either PyBullet <cit.> or MuJoCo <cit.> as their underlying physics engines, which mainly support rigid-body manipulation <cit.>. Recently, environments supporting soft-body manipulation (<cit.>) provide capabilities for simulating deformable robots, objects and fluids. Our automated asset and task generation are not tied to any specific simulation platform and can be used with any of them.

§ GEN2SIM

Gen2Sim generates 3D assets from object-centric images using image diffusion models and predicts physical parameters for them using LLMs (Section <ref>). It then prompts LLMs to generate language task descriptions and corresponding reward functions for each generated or human-developed asset, suitable to their affordances (Section <ref>). Finally, we train RL policies in the generated environments using the generated reward functions. We additionally show the applicability of the simulation-trained policy by constructing a digital-twin environment in simulation, and deploying the trained trajectory in the real world (Section <ref>). See Figure <ref> for our method overview.

§.§ 3D Asset Generation

Gen2Sim automates 3D asset generation by mapping 2D images of objects to textured 3D meshes with plausible physics parameters. The images can be 1) real images taken in the robot's environment, 2) real images provided by Google search under relevant category names, e.g., "avocado", or 3) images generated by pre-trained text-conditioned diffusion models, such as Stable Diffusion <cit.>, prompted appropriately to generate uncluttered images of the relevant objects, e.g., "an image of an individual avocado". We query GPT-4 <cit.> for a list of object categories relevant for manipulation tasks to search online for or to generate, instead of manually designing it. Please visit our project site for a detailed list of the objects we generated. Given a real or generated 2D image of an object, we lift it to a 3D model by minimizing re-projection error and maximizing the likelihood of its image renderings using a diffusion model <cit.>. We provide background on image diffusion models below, before we describe our 3D model fitting approach.

§.§.§ Image diffusion models

A diffusion model learns to model a probability distribution p(x) by inverting a process that gradually adds noise to the image x. The diffusion process is associated with a variance schedule {β_t ∈ (0,1)}_t=1^T, which defines how much noise is added at each time step. The noisy version of sample x at time t can then be written as x_t = √(α̅_t) x + √(1-α̅_t) ϵ, where ϵ ∼ 𝒩(0, I) is a sample from a Gaussian distribution (with the same dimensionality as x), α_t = 1 - β_t, and α̅_t = ∏_i=1^t α_i. One then learns a denoising neural network ϵ̂ = ϵ_ϕ(x_t; t) that takes as input the noisy image x_t and the noise level t and tries to predict the noise component ϵ. Diffusion models can easily be extended to draw samples from a distribution p(x|c) conditioned on a prompt c, where c can be a text description, a camera pose, an image semantic map, etc. <cit.>. Conditioning on the prompt can be done by adding c as an additional input of the network ϵ_ϕ.
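To illustrate the objective just described, the following is a minimal, framework-level sketch of one conditional denoiser training step; eps_model, its call signature, and the schedule tensor alpha_bar are illustrative placeholders rather than the actual Zero-1-to-3 or Stable Diffusion code.

[language=Python]
import torch
import torch.nn.functional as F

def diffusion_training_step(eps_model, x0, prompt, alpha_bar, optimizer):
    # One denoising step of the objective described above: sample a timestep and
    # Gaussian noise, corrupt the clean image x0, and regress the added noise.
    b = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,), device=x0.device)   # per-sample timestep
    a_bar = alpha_bar[t].view(b, 1, 1, 1)                              # cumulative alpha_bar_t
    eps = torch.randn_like(x0)                                         # eps ~ N(0, I)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps               # forward noising
    eps_hat = eps_model(x_t, t, prompt)                                # prompt c: text, pose, ...
    loss = F.mse_loss(eps_hat, eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()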
For 3D lifting, we build on Zero-1-to-3 <cit.>, a diffusion model for novel object view synthesis that conditions on an image view of an object and a relative camera rotation around the object to generate plausible images for the target object viewpoint, with conditioning prompt c = [I_1, π]. It is trained on a large collection 𝒟' = {(x^i, c^i)}_i=1^N of images paired with views and relative camera orientations as conditioning prompt, by minimizing the loss:

ℒ_diff(ϕ; 𝒟') = 1/|𝒟'| ∑_(x^i, c^i) ∈ 𝒟' || ϵ_ϕ(√(α̅_t) x^i + √(1-α̅_t) ϵ, c^i, t) - ϵ ||^2.

§.§.§ Image-to-3D Mesh using Score Distillation Sampling

Given a 2D diffusion model p(I | [I_0, π]) conditioned on an image and a relative camera pose, we extract from it a 3D rendition of the input image I_0, represented by a differentiable 3D representation, using Score Distillation Sampling (SDS) <cit.>. We do so by randomly sampling a camera pose π, rendering a corresponding view I_π, assessing the likelihood of the view based on the diffusion model p(I_π | [I_0, π]), and updating the differentiable 3D representation to increase the likelihood of the generated view based on the model. Specifically, the diffusion model is frozen and the 3D model is updated as:

∇_θ ℒ_SDS(θ; π, c, t) = 𝔼_t,ϵ[ w(t) (ϵ_ϕ(a_t I + σ_t ϵ; t, c) - ϵ) · ∇_θ I ],

where I = R(θ, π) is the image rendered from a given viewpoint π. The loss we use to backpropagate to the 3D model parameters θ includes an image re-projection loss for the camera viewpoint of the input image, and score distillation for the other views, using the pre-trained view- and pose-conditioned image diffusion model of <cit.> to measure 2D image likelihood. We use a two-stage fitting, where in the first stage an Instant-NGP NeRF representation <cit.> is used, similar to RealFusion <cit.>, and in the second stage a mesh-based representation is initialized from the NeRF and finetuned differentiably, similar to Fantasia3D <cit.>. More information on our score distillation sampling can be found on our website.

§.§.§ Texture generation

We augment the textures of our generated assets using the method of TEXTure <cit.>, which iteratively edits a mesh's texture by rendering the mesh from different viewpoints and updating the rendered 2D images. While domain randomization <cit.> randomly re-textures simulated assets, TEXTure produces diverse yet plausible texture augmentations.

§.§.§ Generating plausible physical properties

The visual and collision parameters of an asset are generated from the image-to-mesh pipeline discussed above. To define 3D sizes and physics parameters for the generated 3D meshes, we query GPT-4 regarding the range of plausible width, height, and depth for each object, and the range of its mass given its category. We then scale the generated 3D mesh based on the generated size range. We feed the mass and 3D mesh information to MeshLab <cit.> to get the inertia matrix for the asset. Our prompts for querying GPT for mass and 3D object size can be found on our website. We wrap the generated mesh information, its semantic name, as well as the physical parameters into URDF files to be loaded into our simulator.

§.§ Task Generation, Temporal Decomposition and Reward Function Prediction

Given either generated assets or assets obtained from publicly available datasets, we prompt LLMs to generate meaningful manipulation tasks considering their affordances, to decompose these tasks into subtasks when possible, and to generate reward functions for each subtask.
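Before turning to how tasks and rewards are generated, the score-distillation update above can be summarized in a short illustrative sketch; sample_random_camera, render_fn, eps_model and the schedule tensors are hypothetical placeholders standing in for the actual differentiable renderer and frozen diffusion prior, and the weighting and step sizes are only one possible choice.

[language=Python]
import torch

def sds_update(eps_model, render_fn, params, prompt, alpha_bar, sigma, lr=1e-2):
    # One score-distillation step: render a random view of the 3D model, noise it,
    # and nudge the 3D parameters so the rendering looks likely to the frozen 2D prior.
    # params: list of leaf tensors with requires_grad=True (NeRF or mesh parameters).
    pose = sample_random_camera()                       # hypothetical camera sampler
    image = render_fn(params, pose)                     # differentiable render I = R(theta, pi)
    t = int(torch.randint(20, alpha_bar.shape[0], (1,)))
    a, s = alpha_bar[t].sqrt(), sigma[t]
    eps = torch.randn_like(image)
    x_t = a * image + s * eps                           # noised rendering
    with torch.no_grad():                               # the diffusion prior stays frozen
        eps_hat = eps_model(x_t, t, (prompt, pose))     # conditioned on input view and pose
    weight = 1.0 - alpha_bar[t]                         # one common choice of w(t)
    grad = weight * (eps_hat - eps)                     # SDS gradient w.r.t. the rendered image
    image.backward(gradient=grad)                       # chain rule into the 3D parameters
    with torch.no_grad():
        for p in params:
            p -= lr * p.grad
            p.grad = None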
We train reinforcement learning policies for each (sub)task using the generated reward functions, and then chain them together to solve long horizon tasks.Our LLM promptscontain the following sections:1. Asset descriptions. We use combinations of assets we generate using the method of Section <ref>, as well as articulated assets from PartNet Mobility <cit.> and GAPartNet dataset <cit.>.We populate our simulation environment with randomly sampled assets. Then, we extract information from the URDF files including link names, joint types and limits using automated scripts. For example, an assethas parts [, , and ], and joint [] of typewith a joint position range [0, 1]. We then describe the extracted configurations of the assets to the LLM, as shown below:[language=Python]code/asset_desc.py2. Instructions.Theseinclude function APIs that can be used by the LLM to query the pose of the robot end-effector, as well as different assets in the given environment:[language=Python]code/sim_api.py3. Examples of task descriptions and decompositions. These are question-answer pairs that demonstrate task descriptions and their temporal decompositions.[language=Python]code/task.py4. Examples of reward functions. These are task to reward function pairs that present demonstrations of how tasks can be translated to reward functions, as shown below:[language=Python]code/reward.py For the example above, thereward function iscomprised of 1) distance between the end-effector and the target part, and 2) distance between the current and the target pose of an articulated asset, link, or joint. We provideour prompts on our website. We show in Section <ref> that our method can generalize across assets, suggest diverse and plausible tasks, decomposition and reward functions automatically, using a single in-context example in the prompt, without any additional human involvement.§.§ Sequential Reinforcement Learning for Long Horizon Tasks We train policies usingProximal Policy Optimization (PPO) <cit.> maximizing the generated reward functions for each subtask. We train RL for each generated subtask in temporal order. Once policy training for a subtask converges, we proceed to the next subtask, by sampling the initial state of the end-effector and theenvironment close to the terminal states of the previous subtask. This ensures policies can be temporally chained upon training.Our policies are trained per environment using privileged information of the simulation state to accelerate exploration. Such learned policies can be used as demonstration data and distilled into vision-language transformer policies, similar to <cit.>; we leave this for future work. § EXPERIMENTSOur experiments aim to answer the following questions: 1. Cangenerate plausible geometry, appearance, and physics for diverse types of objects and parts, without human expertise and with minimal human involvement? 2. Cangenerate task language goals and reward functions for novel object categories, novel assets with different part configurations, and a combination of multiple assets in an environment? 3. Can the generated environments and reward function lead to successful learning of RL policies? §.§ Asset GenerationWe compare our image-to-3D lifting with two baselines: 1. RealFusion <cit.>, which uses textual inversion of <cit.> to learn a word embedding for the depicted object concept in an image, and uses text-conditioned diffusionwith this text embeddingduring score distillation. 2. 
Make-It-3D <cit.>, which uses the same NeRF and textured mesh two-stage fittingas , but does not use a view and pose conditioned generative model, rather a text-based image diffusion model, similar to <cit.>.We show comparisonsin Figure <ref>, with images rendered from 4 different views. Our model generates more plausible 3D model as our image diffusion prior comes from an image and pose-conditioned model in comparison to approaches like Fantasia3D or RealFusion which uses text conditioning. We show generated values for 3D sizes and mass for a number of example objects in Table <ref>. We see thatthe common sense knowledge encoded in LLMs can produce reasonable physical parameters.§.§ Automated Skill Learning generates diverse task descriptions, task decompositions and reward functions automatically for hundreds of assets, with different category labels and number of joints, given only a single in-context prompt example regarding the task decomposition and reward function of the task “putting a cup in a Microwave” . Then, the model can generalize to different scenes, asset articulated structures and task temporal lengths. We show some examples of such generated task descriptionsin Figure <ref> and more on our website. We show examples oftask decompositions in Figure <ref>.We provide our prompts in our project website, alongside examples of the LLM's responses. We learn policies that optimize LLM generated rewards with PPO, an off-the-shelf model-free RL algorithm <cit.>.We make use of GPU-parallel data sampling in IsaacGym <cit.> for reinforcement learning.Our robotic setup uses a Franka Panda arm with a mobile base. It is equipped with a parallel-jaw gripper. Our state representation for PPO includes the robot's joint position q ∈ℝ^11, velocity q̇∈ℝ^11 (7-DoF arm, x and y for the mobile base and 2 extra DoFs from the gripper jaws), orientation of the gripper r ∈ S O(3), and poses and joint configurations of the assets present in the scene. We use position control and at each timestep t our policy produces target gripper pose and configurations which isconverted to target robot configurations through inverse kinematics. A low-level PID torque controller provided by IsaacGym isused to produce low-level joint torque commands. We can successfully learn useful manipulation policies, and the polices are able to solve the tasks upon convergence. We show videos of such policies on our website. §.§ Twin environment construction and sim-to-real world transferIn order to validate the usefulness of the policies trained in simulation, we construct a twin simulated environment of our lab's real-robot setup (Figure <ref>). Wedetect, segment, and estimate the poses of the objects in the scene. For non-articulated assets, we use our model to lift the detected object image to corresponding 3D models; for articulated objects, we select the most similar asset from the <cit.>, and populated the simulated environment.We train RL policies in simulation and transfer the joint space trajectory back to our real-world setup. Our method allows successful execution of the generated tasks. For more videos of the trained policies and their task executions in simulation, as well as the sim2real transfer, please refer to our website.§.§ Limitationshas currently the following two important points to address towards materializing into a platform for large-scale robot skill learning that are deployable in real-world: 1. 
Sim2real transfer of closed-looppolicies: Our current real-world experiments transfer open loop trajectories optimized in the constructed twin environment. For closed-loop policies to transfer to the real world and consumerealistic sensory input, we would need to generate large-scale augmentations for both visual appearances and dynamics for each task and sub-task, and then distil the state-based RL policies to a foundational vision-language policy network. This is a direct avenue for our future work. 2. Beyond rigid asset generation: The assets we can currently generate are rigid or mostly rigid objects, which do not deform significantly under external forces. For articulated assets, we are using existing manually designed and labelled datasets (<cit.>). To generate articulated objects, deformable objects and liquids, accurate fine-grained video perception is required in combination with generative priors to model the temporal dynamics of their geometry and appearance. This is an exciting and challenging direction for future work.§ CONCLUSIONWe have presented , a method for automating the development of simulation environments, tasks and reward functions with pre-trained generative models of vision and language. We presented methods that create and augment geometry, textures and physics of object assets from single images, parse URDF files of assets, generate task descriptions, decompositions and reward python functions, and train reinforcement learning policies to solve the generated long horizon tasks.Addressing the limitations including generating diverse assets with more complex physical properties, and transfering trained policies to real world are direct avenues for our future work.We believe generative models of images and language will play an important role in automating and supersizing robot training data in simulation, and in crossing the sim2real gap, necessary for delivering robot generalists in the real world.takes one first step in that direction. § ACKNOWLEDGMENT This work is supported by Sony AI, NSF award No 1849287, DARPA Machine Common Sense, an Amazon faculty award, and an NSF CAREER award. IEEEtran | http://arxiv.org/abs/2310.18308v1 | {
"authors": [
"Pushkal Katara",
"Zhou Xian",
"Katerina Fragkiadaki"
],
"categories": [
"cs.RO",
"cs.AI",
"cs.LG"
],
"primary_category": "cs.RO",
"published": "20231027175532",
"title": "Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models"
} |
Despite considerable performance improvements, current conversational AI systems often fail to meet user expectations. We discuss several pragmatic limitations of current conversational AI systems. We illustrate pragmatic limitations with examples that are syntactically appropriate, but have clear pragmatic deficiencies. We label our complaints as "Turing Test Triggers" (TTTs) as they indicate where current conversational AI systems fall short compared to human behavior. We develop a taxonomy of pragmatic considerations intended to identify what pragmatic competencies a conversational AI system requires and discuss implications for the design and evaluation of conversational AI systems.

§ INTRODUCTION

Advances in deep learning and large language models have enabled the development of high-performing NLP and conversational applications <cit.>. This work has yielded conversational AI applications that appear to reflect the characteristics of human dialogue and follow user instructions <cit.>. Performance improvements have prompted new empirical work on evaluation (i.e., <cit.>). In that spirit, we illustrate several current challenges for conversational AI systems. We illustrate these limitations with examples from conversational AI systems in the literature <cit.> and author interactions with currently fielded conversational AI systems [Chatbots: OpenAI's chatGPT (https://openai.com/), Amtrak's Julie (https://www.amtrak.com/home), WoebotHealth's Woebot (https://woebothealth.com/try-woebot/)] and voice assistants [Voice assistants: Apple's Siri (https://www.apple.com), Amazon's Alexa (https://alexa.amazon.com)]. These examples are syntactically appropriate, but have clear pragmatic deficiencies compared to human behavior. This discrepancy triggers the Turing Test criterion: competent human speakers and users would not produce such constructions. We draw on traditional applications (i.e., travel, personal assistants) and more recent ones (i.e., LLM chatbot interfaces, mental health applications). Chatbots clearly emphasize some of our concerns and, particularly for medical applications, require highly refined performance. We structure documented general user frustrations with conversational AI systems that highlight complaints about conversational skills separately from other usability concerns <cit.>. In so doing, we synergize applied and basic research endeavors that address language in use. Users, particularly in consequential task domains, are less tolerant of limitations than researchers. We frame pragmatic limitations (and resulting user frustrations) of current conversational AI systems using the socially-inspired pragmatic theory of relevance <cit.>. We articulate two sub-themes for understanding and addressing these limitations: preserving local meaning and incorporating context. The resulting taxonomy informs pragmatic criteria for designing and evaluating conversational AI systems, and integrates insights from social and behavioral sciences with computational science.

§ USER RELEVANCE

Conversations preserve relevance. When people engage in conversation, they expect their partners will make relevant contributions that are consistent with the accepted purpose of the conversation. Users have similar expectations for interactions with conversational AI applications (i.e., <cit.>). <cit.> initially proposed that the expectation of relevance is due to a cooperative principle and that the expectation of related utterances is due to a maxim of relation.
<cit.> revised this explanation and proposed the search for relevance as a basic feature of human cognition <cit.>. Input is relevant if the processing it generates a worthwhile change in a recipient's representation of the world <cit.>. Relevance depends on context. Information must be worth the recipient's processing effort and be the most relevant information available consistent with their goals <cit.>.Content from a conversational AI system that is incorrect, difficult to understand, or missing important information reduces relevance. Users must expend additional comprehension effort or search elsewhere. Consider this example (originally proposed by <cit.>, repeated on chatGPT May 24 2023 Version): Prompt: You need flour to bake bread. You have a sack of flour in the garage. When you get there, you find that the flour is at the bottom of the sack, but that somebody emptied a couple of boxes of dishwashing detergent on top of it. So you have to...(TTT) Response: carefully remove the boxes of dishwashing detergent from on top of the sack of flour, making sure not to spill or mix any of the detergent with the flour. You can gently lift the boxes, taking caution to avoid any potential spills or mess. Once the boxes are safely removed, you can access the flour at the bottom of the sack and proceed to use it for baking your bread, ensuring it remains uncontaminated by the dishwashing detergent.The response acknowledges that you should not make bread with flour contaminated by dishwashing detergent. However, the response fails to account for the fact that the dishwashing detergent has been emptied into the flour, violating the expectation of providing all relevant implications, and resulting in a dangerous recommendation.The example above, and related empirical work in HCI, illustrates two dimensions of relevance; local meaning and contextual awareness. Pragmatically sensitive responses require integration with these dimensions. Next, we examine each dimension with particular focus on subcategories of contextual awareness. § MEANING AND INFERENCE To generate relevant content, conversational AI systems must respond to all aspects of a user's meaning. A response that addresses one part of a user’s intent may omit other related information. This requires recovering the complex relationships within an utterance <cit.>. Systems that are unable to account for these relationships compromise coherence and require additional user effort that impairs comprehension <cit.>. Consider this voice assistant example: User: I want to go to Cleveland, is there any construction that would slow down my trip?(TTT) Assistant: Getting directions to Cleveland [does not provide traffic information].Pragmatically-appropriate assistant: Traveling to Cleveland will take 3 hours. There are currently no traffic delays. [Pulls up directions] The assistant answers the first aspect of the request, directions to a given city. The second aspect, travel delays, is ignored. Pragmatically appropriate responses require a representation of both the requested information and inference. The user is concerned about travel time- construction is merely one example. § CONTEXTA broader category of pragmatic failures concerns the failure to address other context <cit.>. The first is conversational context, which addresses relationships between utterances and the overall conversation. The second concerns factors external to the conversation. Psychologists invoke the construct of memory to explain context-related processing. 
Semantic memory contains general knowledge while episodic memory addresses specific events and associated details <cit.>. Both memory sources influence conversational and external context.§.§ Conversational ContextGenerating relevant content requires systems that can incorporate the meaning of an utterance with respect to the entire conversation. Specific topics, words, and phrases create a conversational context that influences later comprehension and production. People represent this context and recursively combine and update it throughout the conversation <cit.>. Results from HCI research demonstrate that users want systems that maintain conversational context <cit.> and consider conversational abilities more important than other features <cit.>. The ideal collaborative system demonstrates awareness of the user's status and intentions in responding <cit.>. Pragmatic failures arise when conversational AI systems fail to incorporate these aspects of human conceptual representations. §.§.§ Awareness of Dialogue Topics Speakers in conversation create, modify, and update expectations about what all participants know <cit.>. These representations include information from personal experience, community membership, and conversational context. Conversational AI applications either do not create these representations (i.e., <cit.>) or do so in a limited fashion <cit.>. Failure to account for this history generates pragmatically inappropriate responses. Consider this patient interview system:(TTT) Interviewer: Do you still go to therapy now?Patient: I don't, I never went to therapy. The word still presupposes previous therapy, without having previously established that fact. An appropriate exchange would first establish whether the patient has attended therapy and, conditional on an affirmative response, ask if the patient still attends therapy. §.§.§ Propose Relevant Next TurnsSimilarly, systems must account for the prior conversation to generate responses. This challenge is particularly apparent for extended interactions. The example below is from a mental health chatbot. The chatbot poses questions intended to help the user process a thought. Bot: Does this thought assume others are thinking negatively about you?User: NoBot: Does your thought place too much blame on others?User: No it doesn'tBot: Does it place too much blame on you?User: Nope(TTT) Bot: Can you rewrite the thought without any distortions?User: It doesn't have any(TTT) Bot: How did you do?The last two questions ignore the history of the conversation. An appropriate response could introduce a new strategy or propose an alternative action.§.§.§ Correction of Prior Content Dialogue is a collaborative process that requires updating and correcting information <cit.>. Despite their pervasiveness, conversational AI systems struggle with these phenomena: User: I want to book a ticket to Newark, sorry New York(TTT) Bot: There are multiple destinations, please select from the list below. [list includes Newark, NJ, Newark DE, and Newark NJ.] This response does not address the user’s self-correction. Appropriate responses would suggest New York, as intended, or request clarification. Previous work has examined clarification generation (i.e., <cit.>. The challenge is identifying when clarification is required.§.§ External Context Failure to account for external information can generate pragmatic failures. Speakers in conversation expect that their partners possess certain background knowledge and awareness of the physical environment <cit.>. 
People exploit context to provide appropriate detail and tailor their messages <cit.>. Users want conversational AI systems to account for external context and find it frustrating when systems cannot <cit.>. Where prior work has incorporated context, the focus has been on the conversation (i.e., <cit.>). We suggest this approach is insufficient for truly collaborative conversational AI systems. While certainly a challenging goal, systems that incorporate external context will be easier to use and more consistent with users’ expectations.§.§.§ Episodic FeaturesSpeakers regularly use words and expressions that require context to be understood (i.e., today). These features are easy to articulate and interpret in conversation- the current context or non-verbal cues suggest the intended meaning. This information is traditionally represented in different groups <cit.>- facts a third party could observe about the conversation, information all speakers know, and facts that could verify or falsify aspects the conversation. Our taxonomy incorporates these contextual features that are often omitted from conversational AI systems.Imprecise language creates an interpretation challenge for conversational AI systems. Systems often rely only on conversational content and neglect relevant information from the environment and external knowledge sources. Conversational AI systems struggle with conceptual and commonsense knowledge <cit.>, domain specific content <cit.>, attribute information <cit.>, and prior information about the user. Systems that account for certain aspects require users to explicitly state what information should be considered <cit.>. For example, when interacting with chatGPT or GPT-4, users must explicitly state their preferred response style.A lack of external context awareness generates several problems for conversational AI. Consider the following voice assistant example: User: Is there a heat warning today?(TTT) Assistant: I found this on the web [Provides news article about heat wave in the UK when the user is in the US.]Pragmatically-appropriate assistant: Yes, there is a heat warning effect in [area] until [time]. As is typical for conversation, the user does not specify their location. The voice assistant fails to incorporate this information and generates a response inconsistent with the user's intent. The second response is more appropriate- it provides relevant and tailored information. Indeed, users have expectations for information that conversational AI systems should have and want systems to use that information <cit.>.§.§.§ Conceptual KnowledgeInteractions between a lack of external awareness and relevant semantic knowledge can generate pragmatic failures: User: Do I need gloves today?(TTT) Assistant: Not much sun in the forecast today.Pragmatically-appropriate assistant: The temperature is X degrees today. An appropriate response requires knowledge of the current situation- the user’s location, current day, and weather forecast <cit.>. Moreover, the question requires conceptual knowledge. An appropriate answer accounts for the purpose of gloves (i.e., commonsense reasoning <cit.>). The response demonstrates awareness of the current situation, but omits required conceptual knowledge. An appropriate answer would include the forecasted low temperature. 
Users want systems that can account for these intentions and respond accordingly <cit.>.

§.§.§ Default Reasoning

Incomplete information often requires the ability to draw conclusions based on general principles or identify when new information invalidates old conclusions <cit.>. Conversation regularly invokes these abilities <cit.>, yet these situations pose problems for conversational AI applications. Consider a modification of a prior example:

User: I want to go to Cleveland, are there any traffic delays?
(TTT) Assistant: Getting directions to Cleveland [does not provide information about delays].
Pragmatically-appropriate assistant: Traveling to Cleveland will take 3 hours. There are no current delays. [Pulls up directions]

A pragmatically appropriate response would acknowledge all likely sources of traffic delays. While construction is the most prototypical, an appropriate response would account for other potential delays (i.e., a high probability snowstorm). Similarly, an appropriate response accounts for the probability that a situation will become relevant. Warnings about minor slowdowns several hours ahead would be pragmatically inappropriate.

§.§.§ Inconsistent Details

Similarly, conversations often require reasoning with inconsistent details <cit.>. Inconsistent details require the identification of inconsistent information and determination of what to disregard. Humans resolve inconsistent details effectively <cit.>, but they create challenges for conversational AI systems. Systems that lack these abilities create pragmatic errors:

User: Remind me on Friday August 4th at 5:00 to order groceries. [Friday is August 5th, not August 4th]
(TTT) Assistant: Done [creates reminder for Thursday August 4th at 5:00]
Pragmatically-appropriate assistant: Did you mean Thursday August 4th or Friday August 5th?

This requires detecting the inconsistency between the Friday and the 4th and resolving what the user intended. An appropriate response requires the ability to request clarification. Failure to detect and resolve inconsistent information results in conversational breakdown <cit.>. Inconsistent information is compounded in situations where dialogue accompanies real world activity (such as in meetings). Previous work has proposed methods for generating clarification requests when conversational AI systems are unsure of a user's intent (i.e., <cit.>). Given that discrepancies have been adequately identified, similar methods could be used to resolve inconsistencies created by inconsistent details.

§.§.§ Expert Knowledge

Domain specific applications are not immune from external context pragmatic failures. These applications require conversational AI systems with appropriate background knowledge that generate appropriate responses for the intended audience <cit.>. For example, defining new anatomy terms is appropriate for automated tutoring systems, but unnecessary in a personal assistant for physicians. Similarly, conversational AI systems need an awareness of domain content when intended for domain-specific applications (i.e., <cit.>).

§ DISCUSSION

We have shown that several limitations of current conversational AI systems are symptomatic of a more general problem: a lack of attention to pragmatics. We propose pragmatic failures are captured by relevance theory <cit.>, and suggest two key limitations for conversational AI systems: preserving meaning and awareness of external context.
We compile our concerns into a guide (Table <ref>) designed to assess pragmatic requirements for a given application and the sufficiency of proposed strategies. Some of the ideas here have been examined in cooperative responding (i.e., <cit.>). However, these issues are not resolved with respect to modern deep learning based conversational AI systems. Previous work examining pragmatics has primarily investigated specific pragmatic features independently for specific applications (i.e., <cit.>). Treating pragmatics as a decentralized process ignores the interdependent nature of many pragmatic limitations. While resolving one of these issues may improve performance, truly context sensitive systems require the ability to address multiple issues. Some of the limitations we discuss are more glaringly obvious than others (i.e., systems that fail to recover local propositional content). However, all contribute to the design of truly cooperative and context-sensitive conversational AI systems. We suggest that the greatest challenge to creating pragmatically appropriate conversational AI systems is designing centralized systems that address multiple pragmatic limitations.

Recent research is addressing some of the issues we discuss here. The success of several recent models <cit.> has prompted increased interest in reinforcement learning with human feedback <cit.>. While these models have improved performance on some pragmatic factors (i.e., following instructions), opportunities for pragmatic improvements remain. Such systems burden the user to specify what information should be considered. Furthermore, their performance notably differs from humans, and the lack of transparency around these reasoning and language differences impairs their pragmatic sufficiency (i.e., <cit.>). We take an integrated approach designed to taxonomize recurrent themes, motivate a theoretical framework, and coordinate research efforts. We suggest that a unified framework facilitates integration with applied work on human expectations for conversational AI applications <cit.>. Our framework integrates these issues with theoretical and empirical work in pragmatics.

§.§ Limitations and Ethical Considerations

This type of work inherits several limitations and ethical concerns related to the development of large models <cit.> and privacy concerns common to conversational AI systems. Many external context features require information outside the lexical content of a conversation. Some, but not all, users want systems to use this information <cit.>, requiring customizable sharing settings. Moreover, we must avoid creating sub-optimal systems for users who share less information <cit.>. Systems that request specific information may overcome this limitation. Second, our position could suggest an endorsement of larger models with high monetary and energy costs <cit.>. However, pre-existing knowledge sources <cit.>, modular designs <cit.>, and approaches that address dialogue phenomena (i.e., <cit.>) are promising alternatives. Larger models alone will not resolve pragmatic limitations. While chatGPT improves on some tests posed by <cit.>, clear limitations remain. Truly pragmatically-appropriate systems will require coordinated approaches that address multiple deficits.

§ CONCLUSION

Several types of pragmatic challenges recur across current, disparate conversational AI applications. We use examples from fielded conversational AI systems that are syntactically correct but have clear pragmatic deficiencies.
These results contribute to a better understanding of the current pragmatic limitations of conversational AI systems. Moreover, they emphasize the importance of connections between general knowledge and the external environment in developing future conversational AI systems that better meet the pragmatic expectations of users. | http://arxiv.org/abs/2310.18435v1 | {
"authors": [
"S. M. Seals",
"Valerie L. Shalin"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231027192150",
"title": "Expanding the Set of Pragmatic Considerations in Conversational AI"
} |
Institut für Theoretische Physik und Astrophysik, Christian-Albrechts-Universität zu Kiel, Leibnizstraße 15, 24118 Kiel, Germany. Corresponding author: [email protected]

Monte Carlo radiative transfer (MCRT) simulations are a powerful tool for determining the appearance of astrophysical objects, analyzing the prevalent physical conditions within them, and inferring their properties on the basis of real observations. Consequently, a broad variety of codes has been implemented and optimized with the goal of solving this task efficiently. To that end, two distinct frameworks have emerged, namely, the extinction and the scattering framework, which form the basis of the path determination procedures of those codes. These procedures affect the step length of simulated photon packages and are used for determining flux estimates. Despite the fact that these simulations play an important role at present and thus require significant computational resources, little attention has been paid to the benefits and the drawbacks of both frameworks so far. In this study, we investigate their differences and assess their performance with regard to the quality of thereby obtained flux estimates, with a particular focus on the required computational demand. To that end, we use a testbed composed of an infinite plane-parallel slab, illuminated from one side, and we determine transmitted intensity using MCRT simulations for both frameworks. We find that there are vast differences between the frameworks with regard to their convergence speed. The scattering framework outperforms the extinction framework across all considered optical depths and albedos when solving this task, particularly in the regime of high optical depths. Its implementation can therefore greatly benefit all modern MCRT codes as it has the potential to significantly reduce required computation times. Thus, we highly recommend its consideration for various tasks that require MCRT simulations.

Improving Monte Carlo radiative transfer simulations: A shift of framework
Anton Krieger 0000-0002-3639-2435
Sebastian Wolf 0000-0001-7841-3452
Received 9 June 2023 / Accepted 26 October 2023

§ INTRODUCTION

Radiative transfer (RT) simulations provide a crucial tool in astrophysics across various areas of application. These simulations allow us to predict and analyze the wavelength-dependent appearance of astrophysical objects, linking it to the underlying physical properties of these systems (e.g., protoplanetary disks, exoplanet atmospheres, filaments). To enable the simulation of complex environments, modern RT simulations often apply the Monte Carlo (MC) approach, which relies on the probabilistic simulation of so-called photon packages (PPs) and their randomly generated paths through a model space. Within the last two decades, various MCRT codes have been developed, including MC3D <cit.>, MCFOST <cit.>, MCMax <cit.>, RADMC-3D <cit.>, Mol3D <cit.>, POLARIS <cit.>, and SKIRT 9 <cit.>. These codes have been optimized with regard to their computation time by using such methods as a locally divergence free continuous absorption <cit.>, immediate reemission scheme according to a temperature corrected emission spectrum <cit.>, partial diffusion approximation <cit.>, modified random walk <cit.>, biasing techniques <cit.>, precalculated sphere spectra <cit.>, and an extended peel-off method <cit.>.
Nonetheless, the analyses adopting these methods may remain limited due to a lack of required computational resources, in particular, the computation time. This is especially the case for the simulations of the RT in systems of high optical depth, for which we have to consider high numbers of simulated interactions a PP undergoes before leaving the model space and, consequently, the computation time rises. For such systems, it has been found that an overly low number of simulated PPs or an insufficient number of simulated interactions may even lead to the so-called scattering order problem <cit.>,resulting in severely underestimated flux values <cit.>. We note that the underestimation is a consequence of the needed restriction of simulated PPs and a consequence of very high MC noise combined with non-Gaussian statistics. In that regard, the scattering order problem generally states that the calculation of a reliable flux estimate requires a proper representation of simulated scattering orders, with the latter being the number of interactions a PP has undergone prior to its detection. However, the corresponding scattering order distribution has been shown to widen and shift toward larger scattering orders as the transverse optical depth of a system increases, quickly leading to infeasible demands for MCRT simulations and computational limits that can be difficult to overcome. Additionally, the problem of unreliable flux estimates has also been reported for systems with optical depths between 10 and 30 <cit.>. As a result, it can even affect simulations of embedded radiation sources, such as young accreting planets <cit.>. Moreover, three-dimensional (3D) simulations of fainter sources can be extremely challenging, if the number of simulated sources is high and the feasible number of simulated PPs per source is small and becomes a limiting factor. In particular, the simulation of flux and polarization maps of protoplanetary disks resulting from self-scattering (i.e., dust grains thermally emitting photons that scatter off other dust grains) is a computationally extremely demanding task. However, its effect can be crucial in terms of the inference of disk properties based on observations <cit.>. Therefore, it is generally highly desirable to develop the tools and understanding that would allow us to expand the scope and complexity of simulated objects, while maintaining the expected reliability of the results. Motivated by this goal, we explore the difference of two fundamental MCRT frameworks. Even though both frameworks are generally well known, we find that the most popular MCRT codes currently stick to either one of them with no consideration of the other. We show that these frameworks, however, may differ significantly with regard to their required computation speed and quality of flux estimates. To that end, we use a setup composed of an infinite plane-parallel slab, illuminated from one side, as a testbed, and we compare the results of the MCRT simulations of the transmitted intensity performed in both frameworks. In Sect. <ref>, we briefly summarize the frameworks and introduce the setup. Subsequently, we perform MCRT simulations to compare the performance of the frameworks in Sect. <ref>. Lastly, we discuss implications regarding the choice of framework depending on the albedo and optical depth of the system in Sect. <ref>, before presenting our final conclusions. 
§ METHODS

In this section, we describe two frameworks, which we call the extinction framework (EF) and the scattering framework (SF), both of which are widely used in modern MCRT codes when estimating flux values. The underlying procedure of both frameworks is based on the simulation of PPs on randomly generated paths, which follow certain probability density functions. On its path, a PP may interact multiple times with the medium before eventually leaving the model space. The interactions usually include absorption and scattering, both of which are a cause for extinction. As a result, the weight carried by a PP, which corresponds to an intensity or energy, may be reduced prior to its detection by an observer. To properly simulate this process, the optical properties of the interacting medium have to be taken into account. These include the cross-sections for scattering, C_sca, absorption, C_abs, and extinction, C_ext = C_sca + C_abs, as well as the (single scattering) albedo, A = C_sca/C_ext. These quantities additionally determine the optical depth τ along a straight path of length Δl through the medium via τ = C_int ρ Δl, where C_int is either of the aforementioned cross-sections and ρ is the (average) number density along the path. During an MCRT simulation, the path lengths between two consecutive events of interaction are determined on the basis of randomly chosen optical depths, which follow an exponential distribution.

§.§ MCRT frameworks

The considered frameworks differ with regard to the procedure of path determination. In the EF, the path length between two consecutive events of interaction is determined on the basis of extinction optical depths, τ_ext, and the weight of an interacting PP is reduced at each point of interaction by a factor that equals the albedo of the medium. At its core, this procedure first randomly selects a location for the subsequent interaction event and then selects its interaction type. For improved performance, the PP is split at the location of interaction into an absorbed and a scattered part, of which only the latter is traced during the flux determination, carrying a weight that is reduced by a weighting factor that corresponds to the albedo, A. In the SF, on the contrary, the process of absorption is assumed to occur continuously along the PP path, which can be interpreted as leaving out the probabilistic determination of the absorption process. In other words, this framework allows us to “passify” the absorption process. As a consequence, only the scattering process is actively simulated in a probabilistic manner, such that the scattering optical depth is used as a basis to determine the path length between two consecutive scattering events and the weight is reduced passively. We note that MCRT simulations that, contrary to this study, trace the absorbed radiation energy have been shown to significantly benefit from the usage of a continuous absorption procedure <cit.>. However, throughout this study, we only consider the effect of this choice on derived flux estimates. In particular, the implementation of these frameworks encompasses three aspects. First, there is the determination of a pseudo random number r ∈ [0, 1) according to a uniform distribution. Second, there is the calculation of the (corresponding) optical depth, τ = -ln(1-r), and, third, the framework-dependent determination of the thus resulting interaction-free length, Δl, and weighting factor, w.
In the EF (SF), the latter are given by Δl = τ/(ρ C_ext) (Δl = τ/(ρ C_sca)) and w = A (w = e^-Δl ρ C_abs). We note that these equations only apply if the PP experiences another interaction at its subsequent location of interaction. If, on the contrary, it leaves the interacting medium after traversing a distance of only Δl' < Δl, the weighting factor is not applied within the EF, and a changed weighting factor of w = e^-Δl' ρ C_abs is instead used in the SF. This procedure ensures that the PP leaves the medium with a proper weight. In the limit of infinitely many simulated PPs, both frameworks yield the same result in terms of estimated flux values. However, their differences may show in the form of: 1) different convergence speeds in terms of the number of required PPs or simulation time; 2) the computation time per PP; and 3) the level of variance of flux estimates (MC noise) as a function of the number of simulated PPs. We note that (in principle) there are many more conceivable frameworks that could be used; however, the EF and SF are the most prominent. Moreover, these frameworks can be transformed into each other by introducing the correct constant stretching factor. As a result of this biasing procedure, the weighting factor would be affected, as the path determination and weighting scheme are interdependent. Eventually, whether the photons are removed along the photon path (SF) or only at the location of the scattering event (EF) depends solely on the chosen stretching factor. A brief derivation of the transformation that links both frameworks is presented in Sect. <ref> in the appendix.

§.§ Setup

To explore the differences between these frameworks, we adopted the setup of <cit.>, which is composed of an infinite plane-parallel 3D homogeneous slab with a total transverse (extinction) optical depth of τ_max, which is embedded in a vacuum. This setup is often used as a testbed for MCRT simulations, in which the slab is illuminated by an isotropic (monochromatic) radiation source from one side and the intensity transmitted through the slab is measured, as a function of μ = cos θ, by a simulated observer on the other side of the slab <cit.>. Here, the quantity θ describes the penetration angle of the transmitted PPs, with θ = 0 corresponding to the direction perpendicular to the surface of the slab. In total, we use M_bin = 41 detector bins, which are linearly sampled regarding the direction μ ∈ [0, 1]. The transmitted intensity of the j-th detector bin is then given by:

I_trans(μ_j) = ∑_{i=1}^{N} (w_{i,j}/N) · M/(2 μ_j),

where N is the number of simulated PPs, w_{i,j} is the total weight that the i-th PP contributes to the detector bin corresponding to the direction μ_j, and M (= M_bin) is the number of detector bins. Unless mentioned otherwise, we assume an albedo of A = 0.5 and simulate scattering events using an isotropic phase function. For this setup, the result for the transmitted intensity can also be calculated on the basis of a non-probabilistic method <cit.>, which we use to assess the quality of the obtained MCRT-based results.

§ RESULTS

To compare the performance of the SF with that of the EF (see Sect. <ref>), we performed MCRT simulations to obtain the transmitted intensity for the slab setup (see Sect. <ref>). We assumed an (extinction) optical depth of τ_max = 20 and ran each simulation until a total of N_trans = 10^4 PPs were successfully transmitted through the slab. Consequently, the required number of simulated PPs N for both frameworks may differ.
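To illustrate the two path-determination prescriptions described in the Methods section, the following minimal Python sketch draws a single interaction-free segment and the corresponding weighting factor in either framework. It is our own illustrative reformulation of the equations given above, not the code used for the simulations, and it omits the special case of a PP leaving the medium after a distance Δl' < Δl.

import numpy as np

def propagate_step(rng, rho, C_sca, C_abs, framework="SF"):
    """Draw the next interaction-free path length dl and the weighting factor w."""
    C_ext = C_sca + C_abs
    A = C_sca / C_ext                   # single scattering albedo
    tau = -np.log(1.0 - rng.random())   # random optical depth, exponential distribution
    if framework == "EF":
        dl = tau / (rho * C_ext)        # tau interpreted as an extinction optical depth
        w = A                           # discrete absorption at the interaction point
    else:                               # "SF"
        dl = tau / (rho * C_sca)        # tau interpreted as a scattering optical depth
        w = np.exp(-dl * rho * C_abs)   # continuous absorption along the segment
    return dl, w

rng = np.random.default_rng(42)
print(propagate_step(rng, rho=1.0, C_sca=0.5, C_abs=0.5, framework="EF"))
print(propagate_step(rng, rho=1.0, C_sca=0.5, C_abs=0.5, framework="SF"))

For A < 1 the SF step is systematically longer by a factor of 1/A, which is the basis of the speed difference discussed in the following.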
For each of these simulations, we additionally calculated the corresponding transmission curve that would emerge if the peel-off method <cit.> was applied, a commonly used MCRT method that has been shown to significantly boost the performance of MCRT simulations <cit.>. The obtained transmission intensity is shown in Fig. <ref>, together with the result of the non-probabilistic reference solution. We find that MCRT simulations that use the peel-off method (labeled “peel-off”) generally lead to better results compared to simulations that use the basic approach (labeled “original”), which are prone to higher levels of noise and show a stronger underestimation of flux values, typical of simulations in the regime of high optical depths <cit.>. Furthermore, we find that the SF clearly outperforms the EF for this task, as the estimated transmission is much closer to the reference solution in terms of shape and magnitude. It is important to note that these differences occur despite the fact that both simulations were performed for a fixed number of transmitted PPs. In fact, the EF required N = 157,738 simulated PPs of which only N_trans = 10^4 were transmitted, which is ∼83 % higher compared to the SF. However, simulation times scale with the number of simulated interactions, N_int, rather than the number of simulated PPs, which in the case of the EF amount to N_int = 6,476,737, exceeding the case of the SF by ∼258 %. Consequently, using the SF rather than the EF results in a reduction in the computation time of ∼72 %, while providing better results. The superiority of the SF over the EF regarding this task can be explained as follows. Since the PP path determination takes place in optical depth coordinates and the albedo satisfies A < 1, the experienced optical depth of the system is smaller within the SF. In this case, the step length of simulated PP path segments is larger in the SF. As a result, PPs are transported at a faster pace through the model space in terms of traversed units of the extinction optical depth per simulated scattering event. This, in turn, leads to a wider spatial spread of simulated scattering locations, which seems to be crucial for the improvement of flux estimates. Additionally, the complexity of the problem is reduced, indicated by the fact that fewer simulated scattering events lead to better flux estimates. In other words, switching to the SF has practically lowered the number of PPs that need to be simulated for a sufficiently well-sampled representation of the distribution of scattering orders <cit.>. Here, the scattering order (SO) of a PP describes its total number of scattering events, n, that have occurred. However, we note in passing that in the limiting case of A → 1, these differences between both frameworks disappear.

§.§ Impact on scattering order distribution

For the detector bin of the direction μ ≈ 0, Fig. <ref> shows the SO-dependent number of counts C^(n) together with the corresponding scattering order distribution (SOD) for detected PPs. The SOD is shown in terms of Δ I^(n)_trans / Δ log_10(n), where I^(n)_trans is the transmission intensity for SO n. These results stem from simulations in which the peel-off method was applied. As a consequence, the distributions of the number of counts are the same for all detector bins, since during each simulated interaction one peel-off PP is sent to every detector bin. The SODs, however, vary between different detector bins.
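As an illustration of how such peel-off contributions can be accumulated for the slab geometry, consider the sketch below. It is again our own simplified version rather than the implementation used for the figures: at every scattering event, the current PP weight is attenuated by the extinction optical depth toward the detector-side slab surface along each bin direction, and the constant angular factor of the isotropic phase function is assumed to be absorbed into the final normalization of the estimator given in the Setup section.

import numpy as np

def peel_off(bin_weights, w, tau_ext_to_exit, mu_grid):
    """Add one peel-off contribution per detector bin at a scattering event.

    bin_weights     : array accumulating the weights w_{i,j} of the estimator above
    w               : current weight of the interacting PP
    tau_ext_to_exit : transverse extinction optical depth between the event
                      and the far (detector-side) slab surface
    mu_grid         : bin directions mu_j = cos(theta_j)
    """
    for j, mu in enumerate(mu_grid):
        if mu > 0.0:
            bin_weights[j] += w * np.exp(-tau_ext_to_exit / mu)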
The plot clearly shows that within the SF fewer interactions were simulated, which is reflected by the smaller area below the SF-based count distribution. This is especially the case for lower SOs. Despite that, the corresponding SOD exceeds that of the EF; that is, paths that were generated in the SF are overall of greater relevance for the task of determining transmitted intensity values. Moreover, in this framework, PPs often leave the model space with a lower SO, which is beneficial, since the SOs far from the peak of the SOD (n ≫ n_peak ≈ 6) barely contribute to the transmission intensity. However, these SOs make up the largest portion of the invested computation time. Comparing the SODs, we find that in the EF the transmission intensity is underestimated across all relevant scattering orders n > 0, suggesting that the simulation of highly contributing PP paths is less likely in this framework; furthermore, this even fails for the first simulated scattering event. The difference in performance between both frameworks at τ_max = 20 is striking and can be expected to, firstly, already exist at lower optical depths and, secondly, increase at higher optical depths.

§.§ Onset of flux underestimation

In the following, we investigate the conditions that lead to the onset of flux underestimation, meaning that we analyze its dependence on the optical depth, albedo, and number of simulated PPs. In contrast to our previous simulations, here we keep the number of PPs sent out per simulation fixed rather than the number of transmitted PPs. Furthermore, from now on we make use of the peel-off method (unless mentioned otherwise), as it has been shown to reliably and significantly boost the quality and performance <cit.>; see for instance Fig. <ref>. For this purpose, we analyze the resulting bias and MC noise of estimates of the transmitted intensity depending on different key parameters: the extinction optical depth 2 ≤ τ_ext ≤ 20, the albedo A ∈ {0.1, 0.5, 0.9}, and the number of simulated PPs 10 ≤ N ≤ 10^6. Additionally, every simulation was repeated 100 times, allowing for the determination of medians and interquartile ranges (IQRs) to estimate the bias and MC noise of the obtained intensity values, respectively. Figure <ref> shows the result of the 100 performed simulations assuming τ_max = 10, A = 0.5, and N = 10^4. Dashed lines represent the median transmission intensities for both frameworks, the hatched areas mark the regions in the plot within which all determined transmission curves of the corresponding color lie, and the shaded regions account for the central 50 % of simulated values for each direction, μ. The displayed median curves are suitable for determining the bias of expectable intensity estimates, which is associated with the usage of a limited number of simulated PPs when performing an MCRT simulation. Similarly, the IQR, which here represents the central 50 % of simulated intensity values, can be used as a measure for the expectable spread of obtained intensity estimates caused by the inherent MC noise of the simulations. Comparing the results of the SF and EF, we find that the SF outperforms the EF with regard to both quantities: its median curve I_trans^median is a better estimator for the reference solution (purple solid line), and the MC noise (vertical width of the shaded area) is significantly smaller for all directions, μ.

§.§.§ Dependence on optical depth

Furthermore, Fig.
<ref> shows the bias (upper plot) and MC noise (lower plot) as a function of the extinction optical depth of the slab, exemplarily for the direction μ = 0.5, which corresponds to radiation leaving the slab with a penetration angle of θ = 60°. In particular, the bias shows in the plot in the form of underestimated intensity values that lead to I_trans^median/I_trans^ref < 1, where I_trans^ref represents the (linearly interpolated) reference solution. This plot suggests that for τ_ext ≲ 8 both frameworks lead to a rather small underestimation; however, for higher optical depths, the bias associated with the EF leads to a significant underestimation. On the contrary, we find that the bias of the SF, originating from the limitation of PPs, is much smaller. Hence, it results in significantly better intensity estimates, especially for τ_ext > 8, and shifts the onset of the problem toward larger optical depths of τ_ext ≳ 14. The shape of the displayed shaded areas indicates that the spread of expectable intensity values highly depends on the optical depth. In fact, in the case of the EF, the range of the central 50 % of simulated intensity values does not even include the reference values for τ_ext ≳ 12. To better illustrate the MC noise as a function of the extinction optical depth of the slab, the shaded region in the lower plot in Fig. <ref> shows the central 50 % of data points for the ratio I_trans/I_trans^median as a function of τ_ext. Here, the vertical width of the shaded region is a measure of the spread of the distribution of intensity estimates. We find that for increasing optical depth, the noise generally increases in both frameworks. However, this plot clearly suggests that the MC noise associated with the SF is much smaller for all simulated values of τ_ext. As a result, it provides a much more reliable estimate for the transmitted intensity than the EF.

§.§.§ Dependence on albedo

We performed the same analysis for a lower (A = 0.1, Fig. <ref>) and a higher (A = 0.9, Fig. <ref>) albedo and find that the differences between both frameworks increase for lower values of the albedo. This results in an earlier onset of the problem of intensity underestimation when using the EF. In particular, we find that already optical depths of τ_ext > 6 may lead to errors on the order of ∼10 % (see Fig. <ref> in the appendix). Considering the amplification of the bias-induced underestimation at lower values of μ (e.g., see Fig. <ref>), even higher deviations can be expected. On the contrary, the SF performs reasonably well for all considered optical depth values up to τ_ext = 20, even with regard to the associated MC noise, which is significantly lower than for the same slab simulations using the EF. Conversely, for A = 0.9 the performance of both frameworks improves with regard to both the bias and the MC noise (see Fig. <ref>). For this case, we report only a small benefit from the SF compared to the EF, as can be expected, since both frameworks become more similar as A → 1.

§.§.§ Dependence on number of photon packages

Moreover, the number of PPs N sent out was varied for a simulated slab of thickness τ_ext = 10, and its effect on the bias and MC noise of thereby obtained intensity estimates was analyzed. The results of these simulations are displayed in Fig. <ref>, assuming μ = 0.5. As expected, the bias in the upper plot decreases for higher values of N, which is indicated by the ratio I_trans^median/I_trans^ref approaching a value of 1 and, simultaneously, the noise shown in the lower plot decreases.
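The bias and noise measures used throughout this section can be summarized compactly. The sketch below (our own notation, not the authors' analysis scripts) computes them from a set of repeated simulations, with the median transmission curve serving as the bias estimator and the interquartile range of the ratio to the median as the MC-noise estimator.

import numpy as np

def bias_and_noise(I_runs, I_ref):
    """I_runs: array of shape (n_repetitions, n_bins), one transmission curve per repetition.
    I_ref : reference (non-probabilistic) solution evaluated at the same bins."""
    I_median = np.median(I_runs, axis=0)
    bias = I_median / I_ref                                   # values < 1 indicate underestimation
    q25, q75 = np.percentile(I_runs / I_median, [25, 75], axis=0)
    noise = q75 - q25                                         # width of the central 50 % of I/I_median
    return bias, noise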
Additionally, the onset of the problem of flux underestimation therefore shifts toward smaller optical depths as N decreases. A comparison of the results of both frameworks clearly suggests that the number of simulated PPs that is required to achieve a low bias is much smaller for the SF than for the EF. In particular, the number of simulated PPs that is needed to achieve a median intensity of ∼80 % of the reference solution is about 10 to 100 times higher for the EF than for the SF in this example. Considering the fact that the SF furthermore leads to a smaller number of simulated interactions per PP, this corresponds to a significant potential for saving computation time, already at optical depths as small as τ_ext = 10, which can be expected to further increase in the regime of higher optical depths.

§ DISCUSSION AND CONCLUSIONS

In the previous section, we compared the results of MCRT simulations that use different frameworks, the EF or the SF, for determining the transmitted intensity through an infinite plane-parallel slab that is illuminated from one side. Our findings clearly suggest that the SF is generally better suited for this task than the EF, as the shape and magnitude of the obtained transmission curves are a better match to the non-probabilistic solution, while at the same time requiring a significantly smaller computation time. The benefits not only entail a smaller bias associated with the usage of the SF, but also greater reliability of transmitted flux estimates due to a smaller MC noise. The observed superiority of the SF may stem from a reduced problem complexity, which seems to be determined in this case by the scattering optical depth rather than the extinction optical depth, with the former being smaller by a factor of A. Generally, the albedo plays a crucial role in determining the complexity of an MCRT problem. For a source that is deeply embedded inside a region of high optical depth, for instance, the number of interaction events scales approximately quadratically with the optical depth <cit.>. In the SF, the computation time can therefore be expected to also scale with ∼A^2 when simulating deeply embedded sources of radiation (e.g., protostars, protoplanets, viscous heating in a protoplanetary disk, etc.). Considering the results for the SODs shown in Fig. <ref>, the improved quality manifests itself for all n > 0, suggesting that the SF promotes highly contributing paths at a much higher rate than the EF. Moreover, it is known that for higher values of A, firstly, the SOD shifts toward larger scattering orders and, secondly, broader SO intervals have to be simulated to ensure a reliable transmitted intensity estimation <cit.>. The path determination in the SF, interestingly, follows exactly these trends, while the EF path determination utilizes a procedure that is completely independent of the albedo. Therefore, it can be expected that the SF outperforms the EF for all values of A, which is supported by our results (presented in Sect. <ref>). Our analysis of the onset of flux underestimation (Sect. <ref>) has shown that this issue can manifest itself already at optical depths as small as τ_ext ≈ 6 or lower, depending on the number of feasibly simulated PPs per source. Such an early onset of this problem may very well explain the high level of MC noise present in MCRT simulations of embedded planets <cit.> or of the process of self-scattering <cit.>, which were both performed within the EF.
That is, a shift to the SF may significantly benefit such simulations in the form of reduced simulation times and less MC noise. To evaluate the performance of MCRT simulations in the regime of high optical depths, we additionally performed simulations for a slab of transverse optical depth τ_max = 75 for A ∈ {0.1, 0.5, 0.9} and determined the transmitted intensity (see Fig. <ref>). Here, we analyzed the interplay between the framework, the albedo, and an additional usage of composite-biasing <cit.>, a prominent method that affects the path determination. These simulations provided further support for the thus far presented conclusions (for details, see Sect. <ref>). Overall, using the SF resulted in benefits for all tested albedos and optical depths, independent of the additional usage of the composite-biasing method. Despite the benefits the SF clearly offers, there are cases in which the EF may outperform the SF. For instance, the back-scattered part of the radiation may be misrepresented in the optically thick case when A ≪ 1. In this case, the corresponding flux values may be underestimated as a consequence of the less densely sampled distribution of scattering locations close to the illuminated side of the slab. Whether this is of concern, however, depends on the particular type of simulation and the specifics of the simulated system. In general, when deciding on the framework, it seems crucial to consider the distribution of simulated interaction points. If, for instance, a dense distribution close to the origin of radiation (in units of optical depth) is desired, the EF may be beneficial. Similarly, if the optical depth is very small, the EF may have benefits due to a denser sampling; however, this could be compensated within the SF by the usage of a method that enforces a minimum number of simulated scattering events per PP. A dedicated study that aims at addressing these topics would certainly be of great interest. However, apart from the path determination procedure, both frameworks also exhibit striking differences regarding their weight determination schemes, which has implications with respect to their benefits and shortcomings. In general, the contribution of a simulated PP path to a flux estimate equals the product of its probability (density) and weight. The means by which these frameworks penalize a deviation from the most contributing path, however, differ. In particular, the SF penalizes a deviation by a weight associated with the spatial deviation from the most contributing path. The EF, on the other hand, penalizes by attributing a lower probability for traversing large extinction optical depths. In the EF, there is always a residual possibility for a simulated PP to traverse any optical depth without losing weight, while in the SF the weight always decreases as the PP traverses an absorbing medium. Therefore, a PP transmitted within the SF always carries information about its complete previous path, as its total length passively affected its (energy) state due to the application of a corresponding weighting factor. As a result, the final weight may vary significantly more strongly within the EF, resulting in an overall higher MC noise of the flux estimates (see Sect. <ref>), even at low optical depths and notably in the regime of high optical depths. Consequently, choosing the SF over the EF likely results in a variance reduction, which is supported by our results.
In other words, “passifying” the absorption process allows us to reduce the number of simulated scattering events while simultaneously decreasing the bias and inherent MC noise. This raises the question of whether it is possible to passify other probabilistic procedures in MCRT simulations or improve them by utilizing machine learning techniques such as reinforcement learning to generate probability density functions for random variables that depend on the simulated environment. Based on the results of this study, we can conclude that the SF clearly outperforms the EF in the task of determining transmission intensities using MCRT simulations. We showed that this affects the quality of the results significantly in the form of a reduced bias, smaller MC noise, and an alleviation of the problem of underestimated flux values. Additionally, the computational demand to achieve a low bias within the SF can be multiple orders of magnitude lower, leading to an overall considerably faster convergence speed. Its implementation can therefore greatly benefit all modern MCRT codes that are aimed at determining transmitted flux values.

We thank all the members of the Astrophysics Department Kiel for helpful discussions and remarks. We acknowledge the support of the DFG priority program SPP 1992 "Exploring the Diversity of Extrasolar Planets (WO 857/17-2)".

§ FRAMEWORK TRANSFORMATION

In the following, we demonstrate how the EF and the SF are linked via the application of a biasing technique. By doing so, we furthermore show that the discrete nature of removing PPs at the location of scattering in the EF thereby transforms into the continuous absorption scheme that is applied in the SF. In the EF, the weighting factor is w = A and the traversed extinction optical depth is selected by the determination of a pseudo random number r ∈ [0, 1) according to a uniform distribution, followed by its calculation: τ_ext = -ln(1-r). Therefore, the probability density function for the random variable τ_ext is given by dr/dτ_ext = exp(-τ_ext). Next, we assume that a PP is emitted and follows a straight path until it interacts at a distance l ∈ [l^*, l^* + δl) with the medium, where δl ≪ l^* is small. In the EF, this interval corresponds to an interval of random numbers r ∈ [r_i, r_i + δr) and to an optical depth interval τ_ext ∈ [τ_e, τ_e + δτ_e), where τ_e = l^* ρ C_ext and δτ_e = δl ρ C_ext. The expected portion of its initial energy, f_sca, that is scattered within this region is then given by

f_sca = ∫_{r_i}^{r_i + δr} dr A = ∫_{τ_e}^{τ_e + δτ_e} dτ_ext (dr/dτ_ext) A = ∫_{τ_e}^{τ_e + δτ_e} dτ_ext e^{-τ_ext} A = A δτ_e e^{-τ_e}.

Similarly, in the SF, the interval corresponds to an optical depth interval τ_sca ∈ [τ_s, τ_s + δτ_s), where τ_s = l^* ρ C_sca = A τ_e and δτ_s = δl ρ C_sca = A δτ_e. In this framework, the scattered portion of energy thus equals:

f_sca = ∫_{τ_s}^{τ_s + δτ_s} dτ_sca e^{-τ_sca} e^{-τ_abs},

where e^{-τ_abs} is the weighting factor accounting for the continuous absorption of energy, which eventually yields the same result as the calculation in the EF. Next, we assume that the determined PP path is stretched in Eq. (<ref>) by a constant factor of α, namely, τ_ext = -α ln(1-r), such that:

(dr/dτ_ext) = (1/α) e^{-τ_ext/α}.

We can then use this result and re-write Eq.
(<ref>) as follows:

∫_{τ_e}^{τ_e + δτ_e} dτ_ext e^{-τ_ext} A = ∫_{τ_e}^{τ_e + δτ_e} dτ_ext (1/α) e^{-τ_ext/α} ( α e^{-τ_ext + τ_ext/α} A ),

where the term in front of the bracket corresponds to the probability density function for stretched PP paths and the term inside the bracket is the resulting corresponding weighting factor. When switching from the EF to the SF, the stretching factor is α = 1/A, which, when plugged into the previous equation, yields

∫_{τ_e}^{τ_e + δτ_e} dτ_ext e^{-τ_ext} A = ∫_{τ_e}^{τ_e + δτ_e} dτ_ext A e^{-τ_ext A} ( e^{-τ_ext + τ_ext A} ) = ∫_{τ_s}^{τ_s + δτ_s} dτ_sca e^{-τ_sca} e^{-τ_abs}.

Here, we used a coordinate transformation and substituted τ_ext with τ_sca/A to arrive at Eq. <ref>. As can be seen, this is the same expression as used in the SF (see Eq. (<ref>)). That is, when selecting τ_ext values and stretching them to match τ_sca, it is necessary to apply the weighting factor w = e^{-τ_abs}, which can be interpreted as the continuous absorption of photons along the path. As a result, the EF, in which no stretching is applied (i.e., α = 1), is the only case in which the weighting factor is independent of the selected path length and there is no continuous change of energy along the path.

§ REGIME OF HIGH OPTICAL DEPTHS

In order to evaluate the performance of MCRT simulations in the regime of high optical depths, we additionally performed simulations for a slab of transverse optical depth τ_max = 75 and albedo A = 0.5, and determined the transmitted intensity using N = 10^6 emitted PPs. However, since the obtained transmission curves exhibit a high level of noise, which is a consequence of the high optical depth, each MC-based simulation was repeated 100 times. Apart from the calculation of a non-probabilistic solution for the problem, we performed three types of MC-based simulations, which all included the peel-off method. One simulation type is based on the SF and does not apply any further MCRT methods (besides the peel-off method) to boost its performance. The other two types additionally use composite-biasing <cit.>, an MCRT biasing technique used for stretching PP paths; these two types differ regarding their utilized frameworks. Stretching paths is particularly interesting, as it has the potential to greatly improve transmitted flux estimates in the regime of high optical depths <cit.>, and its performance has not previously been tested in the SF. The results of all 300 MC-based simulations and the non-probabilistic solution are shown in Fig. <ref>. We find that simulations that do not use the stretching method fail to properly estimate the transmission curve for all penetration directions, μ. Therefore, using the SF does not by itself suffice to solve the problem of high optical depths, and further techniques are needed to efficiently sample highly contributing paths. When combining the stretching method with either of the two frameworks, the results significantly improve and both types of simulations seem to perform similarly well. This suggests that the usage of the composite-biasing method has practically alleviated the downsides of the EF enough, such that the performance of both frameworks appears to be equivalent. However, it must be stressed that this is found to be the case for an albedo of A = 0.5. In further tests, we found that the performance of these three types of simulations strongly depends on the albedo.
In the case of a small albedo (A=0.1), it seems that switching to the SF suffices to properly estimate the transmission, such that no stretching method was needed. Using the EF without stretching, however, resulted in significantly underestimated intensities. This may be explained by the fact that the scattering optical depth of the system equals only 7.5, which is not particularly high. Therefore, the complexity of the MCRT simulations within the SF is much lower compared to the case of EF-based simulations. For an albedo of A=0.5, the SF without stretching has shown to improve the quality beyond similar simulations performed in the EF, which also further increases when additionally using the composite-biasing technique. Finally, if the albedo is relatively large (A=0.9), the SF still performs better than the EF when stretching is not used. However, the reduction of computation time was not particularly high in this case. As expected, though, we find that using the composite-biasing method degrades the quality of intensity estimates, which is likely caused by an insufficient coverage of the underlying SOD, in accordance with the findings of <cit.>.In summary, we conclude that using the SF rather than the EF resulted in overall lower computation times and consistently better transmitted intensity estimates. This is independent of the fact of whether the stretching method was applied or not.§ SUPPLEMENTARY MATERIAL | http://arxiv.org/abs/2310.18429v1 | {
"authors": [
"Anton Krieger",
"Sebastian Wolf"
],
"categories": [
"astro-ph.IM",
"physics.comp-ph"
],
"primary_category": "astro-ph.IM",
"published": "20231027190108",
"title": "Improving Monte Carlo radiative transfer simulations: A shift of framework"
} |
Carlos Jurado [email protected] 0009-0009-7568-8851]Carlos Jurado Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA0000-0002-9802-9279]Smadar Naoz Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USA Mani L. Bhaumik Institute for Theoretical Physics, Department of Physics and Astronomy, UCLA, Los Angeles, CA 90095, USA0000-0002-6406-1924]Casey Y. Lam Department of Astronomy, University of California, Berkeley, CA 94720, USA Observatories of the Carnegie Institution for Science, Pasadena, CA 91101, USA0000-0003-0992-0033]Bao-Minh Hoang Department of Physics and Astronomy, University of California, Los Angeles, CA 90095, USAMost galaxies, including the Milky Way, harbor a central supermassive black hole (SMBH) weighing millions to billions of solar masses. Surrounding these SMBHs are dense regions of stars and stellar remnants, such as neutron stars and black holes. Neutron stars and possibly black holes receive large natal kicks at birth on the order of hundreds of km s^-1.The natal kicks that occur in the vicinity of an SMBH may redistribute the orbital configuration of the compact objects and alter their underlying density distribution. We model the effects of natal kicks on a Galactic Center (GC) population of massive stars and stellar binaries with different initial density distributions. Using observational constraints from stellar orbits near the GC, we place an upper limit on the steepness of the initial stellar profile and find it to be core-like. In addition, we predict that 30-70 % of compact objects become unbound from the SMBH due to their kicks and will migrate throughout the galaxy. Different black hole kick prescriptions lead to distinct spatial and kinematic distributions. We suggest that the Roman Space Telescope may be able to distinguish between these distributions and thus be able to differentiate natal kick mechanisms.§ INTRODUCTION Nuclear star clusters (NSCs) are the dense regions consisting of stars and stellar remnants near the centers of most galaxies, including our Milky Way. Most NSCs surround a central supermassive black hole (SMBH) with a mass between 10^6 - 10^9M_⊙ <cit.>.Due to its proximity, our Galactic Center (GC) can serve as a unique place to investigate the conditions likely to occur at other galactic nuclei. While the star formation process in the vicinity of an SMBH still remains a mystery, in particular with respect to the prevalence of binary formation, some studies indicate similarities to the field, where most massive stars (OBA spectral type) reside in a binary or higher order configuration <cit.>. Specifically, there are already three confirmed eclipsing binaries in the inner ≃ 0.2 pc of the GC <cit.>, with possibly even more candidates <cit.>. Observations of the inner 0.02 pc find a dearth of young few million year old binaries, consistent with dynamical interactions <cit.> and suggesting a binary fraction close to 100% at birth for massive S-cluster stars <cit.>. Furthermore, X-ray observations have detected a large number of X-ray sources, implying a population of X-ray binaries or cataclysmic variables <cit.>. On the theoretical side, <cit.> suggested that as many as 70% of binaries survive after a few million years of dynamical evolution at the GC. The dynamical interaction includes both frequent flybys from single passing stars that tend to unbind the binary <cit.>, as well as interaction with the SMBH via the Eccentric Kozai Lidov mechanism <cit.>. 
Further, <cit.> suggested that the existence of binaries may explain the peculiar properties of the stellar disk in the GC <cit.>. Moreover, merging binaries were suggested to form the G2-like object population <cit.>. The evolution of massive binaries in the GC is affected by natal kicks that neutron stars (NSs), and possibly black holes (BHs), receive at birth <cit.>. Observations of pulsar motion have revealed that neutron stars receive significantly large kick velocities on the order of hundreds of km s^-1 <cit.>. It has been demonstrated that natal kicks can account for the misalignment between the orbital angular momentum and spin axes observed in pulsar binaries <cit.>. Studies have suggested that hypervelocity stars (HVSs) <cit.>, as well as extreme mass ratio inspirals (EMRIs), can be produced as a result of natal kicks disrupting massive binaries in the GC <cit.>. The underlying distribution of stars and stellar remnants around SMBHs at the centers of galaxies is currently debated. Theoretical arguments for a dynamically relaxed population yield ρ(r) ∝ r^-α, with α = 3/2 - 11/4 <cit.>. However, detailed measurements of the stars in our GC suggest a shallower distribution of α = 1.1 - 1.4 <cit.>. The distribution of compact objects at the GC, also known as the “dark cusp”, has important implications for the dynamics in the vicinity of an SMBH. In particular, the compact object distribution strongly affects the rate of gravitational wave events, tidal disruption events, and the fraction of long-lived binaries in the GC <cit.>. In this work, we study the evolution of binary stars orbiting the Galactic Center's SMBH and the resultant distribution of NSs and BHs. In Section <ref>, we describe the methodology to form single and binary BH and NS systems from massive stellar binaries, as well as the different natal kick prescriptions. In Section <ref>, we show that varying the initial stellar distribution steepens the post-kick compact object distribution, and that observations of the unseen mass in the Galactic Center allow us to constrain the initial stellar density profile. We also find that numerous high-energy events will be produced in this environment. In Section <ref>, we study the spatial and velocity distribution of compact objects near the Galactic Center, and suggest that the Roman Space Telescope may be able to distinguish between different kick prescriptions. We close with discussion and conclusions in Section <ref>.

§ METHODOLOGY

In <cit.>, Monte Carlo simulations of massive stellar binaries within 0.1 pc of the GC's SMBH were implemented to explore the effects of natal kicks on the binaries. In this work, we expand on these earlier simulations and explore the effects that varying the initial stellar distribution has on the overall compact object density profile within the central parsec of the GC. See Figure <ref> for a schematic of the methodology.

§.§ Birth Configurations

Each system begins as a hierarchical triple, comprising an inner binary of two main sequence stars (m_1 and m_2) and an outer binary consisting of the orbit around an SMBH. The frame of reference is selected to be the invariable plane, and we define the orbital parameters of the inner (outer) binary using the Keplerian elements for the semimajor axis, a_1 (a_2), eccentricity, e_1 (e_2), inclination, i_1 (i_2), argument of periapsis, ω_1 (ω_2), longitude of the ascending node, Ω_1 (Ω_2), and true anomaly, f_1 (f_2).
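The distributions and selection criteria adopted for these elements are specified in the remainder of this subsection and the equations that follow. For illustration only, a single draw of the resulting Monte Carlo sampling might be sketched as below; this is our own simplified version (it uses only the high-mass Kroupa slope of -2.3 for m_1, maps the density cusp n ∝ r^-α onto dN/da_2 ∝ a_2^(2-α), and omits the inner-orbit period sampling and the stability and Roche criteria given later).

import numpy as np

def sample_power_law(xmin, xmax, index, rng):
    """Draw from dN/dx ∝ x^index between xmin and xmax (requires index != -1)."""
    u = rng.random()
    p = index + 1.0
    return (xmin**p + u * (xmax**p - xmin**p))**(1.0 / p)

def draw_birth_configuration(alpha, rng):
    m1 = sample_power_law(8.0, 100.0, -2.3, rng)          # Msun; high-mass IMF slope assumed
    q = rng.uniform(0.1, 1.0)                             # uniform mass ratio
    m2 = q * m1
    e1 = rng.uniform(0.0, 1.0)                            # uniform inner eccentricity
    e2 = np.sqrt(rng.random())                            # thermal outer eccentricity
    cos_i = rng.uniform(-1.0, 1.0)                        # isotropic mutual inclination
    a2 = sample_power_law(500.0, 206265.0, 2.0 - alpha, rng)  # au; 1 pc ≈ 206265 au
    return dict(m1=m1, m2=m2, e1=e1, e2=e2,
                i_tot_deg=np.degrees(np.arccos(cos_i)), a2_au=a2)

rng = np.random.default_rng(0)
print(draw_birth_configuration(alpha=1.5, rng=rng))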
The inner and outer orbits are inclined to each other by a mutual inclination, i_tot = i_1 + i_2. We define m_1 to be the more massive stellar binary member, such that it is always the first to undergo a supernova explosion. The mass distribution of m_1 is chosen from a Kroupa IMF ranging from 8 - 100 M_⊙ <cit.>. The mass ratio, defined as q = m_2/m_1, is chosen from a uniform distribution ranging from 0.1 - 1. We set the mass of the SMBH at m_∙ = 4 × 10^6 M_⊙ <cit.>. The eccentricity distribution for the inner binary e_1 is uniformly distributed between 0 and 1, while the outer orbit eccentricity e_2 is taken from a thermal distribution <cit.>. The mutual inclination i_tot between the inner and outer orbit is distributed isotropically. The argument of periapsis, true anomalies, and the inner binary longitude of ascending node are selected from a uniform distribution between 0 and 2π. We choose the outer semimajor axis a_2 to follow a power-law density cusp, n ∝ r^-α, with a minimum semimajor axis of 500 au and a maximum of 1 pc. We vary α across the range of 0 to 3 in half-integer increments, and for each value of α we run 1.5 million Monte Carlo simulations of the stellar binary orbiting around the SMBH. The semimajor axis of the inner binary a_1 is determined from the period distribution dn/dP ∝ log(P)^-0.45 <cit.>, with the minimum and maximum value for a_1 selected for each system according to the following conditions:

* First, we require that the stellar binary's orbital pericenter be greater than two times the Roche limit of the system to ensure the stellar binary is not disrupted prior to the first natal kick:

a_1(1-e_1) > 2 a_Roche.

The Roche limit of the stellar binary is defined as

a_Roche,ij = R_j / μ_Roche,ji,

where R_j is the radius of the star of mass m_j and μ_Roche,ji is the approximation of the Roche lobe radius <cit.>:

μ_Roche,ji = 0.49 (m_j/m_i)^{2/3} / [ 0.6 (m_j/m_i)^{2/3} + ln(1 + (m_j/m_i)^{1/3}) ].

* The upper limit for the a_1 distribution comes from ensuring that the system is hierarchically stable <cit.>:

(a_1/a_2) e_2/(1-e_2^2) < 0.1.

* Finally, each triple system must also satisfy the criterion that the stellar binary does not cross the Roche limit of the SMBH before m_1 undergoes a supernova explosion:

a_2(1-e_2) > a_1(1+e_1) (3 m_∙/(m_1 + m_2))^{1/3}.

§.§ Binary Destruction

The initial stellar binaries can be destroyed either before or after the supernova. We track merged and unbound stellar binary members in our simulation. Therefore, our simulations consist of a population of binary and single-star systems orbiting the SMBH. There are three paths to destroying the binary before either star has gone supernova:

* SMBH Roche limit crossing. 25 - 40%, from α=0-3, respectively, of the initial stellar binary distribution (see Figure <ref>) did not meet the criterion of Equation (<ref>). These evolve independently as single stars orbiting the SMBH. In the statistical analysis below, we incorporate both the single star population and the binary star population.

* Stellar mergers induced by EKL. A fraction of stellar binaries will experience eccentricity oscillations induced by the eccentric Kozai-Lidov mechanism <cit.> and can become a merged stellar product before the first natal kick <cit.>. Following <cit.>, we incorporate a simplified condition for which systems that have an EKL timescale shorter than general relativity (GR) precession may merge (or at least undergo mass transfer).
We find that roughly 1 - 7% (for α=0-3, respectively) of the initial stellar binaries fall into this category and are excluded from undergoing supernova explosions in our simulations. * Unbinding via neighboring scattering interactions (evaporation). Weak gravitational interactions with nearby stars can unbind the binary over an evaporation timescale <cit.>: t_evap = √(3) σ(r) (m_1 + m_2) / [32 √(π) G ρ(r) a_1 ln(Λ) m_p], where ln(Λ) = 15 is the Coulomb logarithm, m_p is the average mass of the perturbing star, σ(r) = √(G m_∙ / [r(1+α)]), and ρ(r) is defined below. Note that for simplicity, we ignore the eccentricity of the binary about the SMBH, since it will only change the timescale by a factor of a few <cit.>. We point out that we are testing a wide range of density profiles, α=0-3, see Equation (<ref>). However, observations of the Galactic Center suggest a shallow, core-like profile <cit.>. Thus, following <cit.> and <cit.>, we adopt α=1.3 for the density profile of the evaporating population. In this case, most binaries have an evaporation timescale longer than the supernova timescale for a range of separations about the SMBH <cit.>. Only about 20 - 25% of the remaining stellar binaries (for α=0-2, respectively) will evaporate before the first supernova. A significant fraction of the initial binaries is distributed close to the SMBH for steeper distributions such as α=2.5 and 3. As a result, 36% and 55%, respectively, of the stellar binaries will be evaporated before the first supernova[Assuming that the profile of all the stellar components in a nuclear star cluster follows the adopted density profile. In this case, 30 - 74% of the remaining stellar binaries (for α=0-3, respectively) will evaporate before the first supernova.]. The remaining inner binaries can also be destroyed at a later time due to natal kicks or close encounters with the SMBH. Because m_1 is the more massive companion, it will undergo a supernova explosion first. The first natal kick can disrupt the binary, leading to the formation of two separate orbits around the supermassive black hole (m_1 - SMBH and m_2 - SMBH). If the binary survives m_1's natal kick, then we are left with a binary consisting of a compact object (CO) and a star orbiting the SMBH. This scenario may result in the formation of X-ray binaries (XRB, Section <ref>). m_2's natal kick provides an additional way of destroying the binary and a channel for the creation of gravitational wave mergers (GW mergers, Section <ref>). Either natal kick can also push the binary onto an orbit inside the SMBH Roche limit, resulting in the destruction of the binary. §.§ Applying Supernova Kicks We assume instantaneous supernova kicks that are isotropically distributed. Supernova kicks for NSs are selected from a normal distribution with an average of 400 km sec^-1 and standard deviation of 265 km sec^-1 <cit.>. We adopt two different BH kick prescriptions due to observational uncertainties. In the fast BH kick prescription, BHs have the same kick distribution as NSs. In the slow BH kick prescription, the BHs receive the same linear momentum kick as NSs <cit.>. Using the rapid single stellar evolution code SSE <cit.>, we determine the time at which each star becomes a CO and the corresponding mass prior to and following this event. We then apply a supernova kick vector to the CO and calculate the resulting orbital parameters. §.§ Interaction with the SMBH If the separation of the inner binary (either progenitor or post-kick binary) is larger than the SMBH's Roche limit, Eq.
(<ref>), then the binary is disrupted and we follow the individual stars' evolution. Further, binaries disrupted by natal kicks form two separate orbits around the supermassive black hole (m_1 - SMBH and m_2 - SMBH). If the binary is disrupted by m_1's natal kick, then it is possible that m_2 will be on an orbit that will create a tidal disruption event (TDE, Section <ref>). On the other hand, if the binary is disrupted after the second kick, the result may lead to an extreme mass ratio inspiral (EMRI, Section <ref>). §.§ Normalization Throughout this paper, we normalize the density distribution by the M-σ relation <cit.>: ρ(r) = [(3-α)/(2π)] (m_∙/r^3) [G√(m_∙ M_0)/(σ_0^2 r)]^(-3+α), where M_0 = 10^8 M_⊙ and σ_0 = 200 km sec^-1. In the rest of this paper, we refer to the numbers of NSs and BHs as expected from this normalization process. Here, we can also recognize a notable quantity called the “sphere of influence,” which signifies the radius at which the gravitational potential is dominated by the SMBH's. Equation (<ref>) implies that this value is r_h = G√(m_∙ M_0)/σ_0^2 in our own GC. § DARK CUSP AND HIGHLY ENERGETIC PHENOMENA PREDICTIONS §.§ The Relationship between the Dark Cusp and the Stellar Density Distribution The various dynamical processes described in the section above disrupt a significant fraction of binaries before the first supernova. The natal kicks disrupt the majority of the remaining binaries, and by the end of the simulations, only a small fraction of all initial binaries remain bound to their companion (see Table <ref> for details). The majority of the systems are single COs that are either orbiting the SMBH or unbound from the SMBH. In general, the COs do not remain in their initial position and are scattered. There are two significant outcomes for a single or binary configuration post-kick. One is that the binary or single remains bound to the SMBH, meaning the post-kick configuration has Keplerian energy smaller than zero. The other is that it becomes unbound from the SMBH; in other words, the Keplerian energy is larger than zero. Of the unbound systems, ≃ 20% are on a trajectory to escape the galaxy. A schematic description of this result is depicted in Figure <ref>, where we show an example for α=1.5, which is a core-like distribution similar to the one observed in our GC <cit.>. Although only 3% of the NS progenitor population is formed within 0.1 pc, natal kicks move NSs that were originally located at a distance >0.1 pc toward the GC, and ultimately 9.2% of NSs end up within 0.1 pc. At r = 0.107 pc, the average kick velocity (≃ 400 km/s) is equal to the circular orbital velocity and serves as a critical point for differentiating the behavior of the NS population in the two regions. 26.7% (64%) of the NSs initially formed with a semimajor axis less (greater) than 0.1 pc are unbound from the SMBH. The combination of this, along with a steepening of the NS number density within the 0.1 pc threshold, leads to a dense concentration of NSs within the 0.1 pc radius and a scarcity beyond it. Below we highlight a few observational tests that can be used to constrain the CO progenitors' stellar distribution due to the unique nature of the GC and the natal kicks. Natal kicks efficiently move COs closer to the SMBH. Thus, observational constraints on the dark cusp may be used to constrain the initial stellar distribution. Future observations can be used to constrain the dark cusp.
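As a quick numerical cross-check of the bound/unbound bookkeeping described above, the short sketch below (illustrative only, not the simulation code used in this work) draws an isotropic natal kick, evaluates the post-kick specific Keplerian energy, and reproduces the quoted ≃ 0.107 pc radius at which the circular speed equals the mean kick speed, together with the sphere of influence implied by the M-σ normalization. The 1.4 M_⊙ neutron-star mass used for the momentum-conserving slow BH kick is an assumption.

```python
import numpy as np

G = 4.30091e-3          # gravitational constant in pc (km/s)^2 / Msun
M_SMBH = 4.0e6          # SMBH mass, Msun

def draw_kick(rng, prescription="fast", m_bh=10.0, m_ns=1.4):
    """Isotropic kick; magnitude from N(400, 265) km/s, redrawn if negative.
    'slow' BHs receive the same linear momentum as an (assumed) 1.4 Msun NS."""
    v = -1.0
    while v <= 0.0:
        v = rng.normal(400.0, 265.0)
    if prescription == "slow":
        v *= m_ns / m_bh
    direction = rng.normal(size=3)
    return v * direction / np.linalg.norm(direction)   # km/s vector

def bound_to_smbh(r_pc, v_kms):
    """Negative specific Keplerian energy -> still bound to the SMBH."""
    return 0.5 * v_kms**2 - G * M_SMBH / r_pc < 0.0

# Radius where the circular speed equals the mean kick speed (~400 km/s):
r_crit = G * M_SMBH / 400.0**2                      # ~= 0.107 pc, as quoted above
# Sphere of influence from the M-sigma normalization (M0 = 1e8 Msun, sigma0 = 200 km/s):
r_h = G * np.sqrt(M_SMBH * 1.0e8) / 200.0**2        # ~= 2.2 pc
```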
The separation of a young binary in the inner 0.1 pc of the GC is sensitive to the underlying density profile, and measurements of such systems could be used to place constraints on the dark cusp <cit.>. §.§ The effect of progenitor distribution on the post-kick density and eccentricity distribution Below we provide a detailed analysis of the NS distribution. The fast kick BH distribution follows the NS distribution (only with a different normalization). The slow BH kick results are described in Appendix <ref>. In Figure <ref>, we show the changes in the bound NSs' semimajor axes due to the kicks for three different density profiles, from extremely shallow (α=0, left) to extremely steep (α=3, right), as well as a core-like distribution closer to the observed distribution (α=1.5, middle). As depicted, NS progenitors formed near 1 pc can move orders of magnitude away from their birth positions, while those formed in the nearby vicinity of the SMBH are scattered by only an order of magnitude or so. The shallowest initial density profiles (i.e., α = 0, 0.5) contain the majority (≃ 99%) of the NS population outside of 0.1 pc and so are significantly perturbed by the NS kicks and steepen dramatically within a = 0.107 pc. As the value of α increases, a larger fraction of NSs are initially within 0.1 pc of the SMBH, and so the distribution is less susceptible to natal kicks and the increase in steepness is smaller, as further demonstrated in Figure <ref>. In Figure <ref>, we show the NS progenitor (left panel) and bound NS (right panel) density distributions after the natal kicks. The bound NS density profiles are all steeper than their corresponding progenitor profiles. As the initial progenitor profiles increase in steepness, the corresponding amount of steepening in the bound profile decreases. The post-kick density profile can be estimated analytically from the number of systems that become unbound from the SMBH. Conservation of particles implies that the main driver of the post-kick distribution is the fraction of systems remaining. We provide the details in Appendix <ref>. We apply a density criterion to the NS density profiles to constrain the expected initial stellar profile from observations of the precession of S0-2 caused by the unseen mass within S0-2's orbit <cit.>. The upper limit is derived by assuming that all of the enclosed mass is NSs. In this case, an initial stellar profile with α < 3 is consistent with this constraint. However, if there are also white dwarfs and stellar-mass black holes in this vicinity, assuming the typical population fractions of 0.26:0.014:2.3× 10^-3 for WD:NS:BH <cit.> means that only ≃ 5% of the unseen mass is in NSs. Then, an initial stellar profile of α > 2 is incompatible with the mass constraints. Further observational measurements may be able to disentangle the mass fraction of NSs within S0-2's orbit and provide a more stringent test on the initial stellar profile. Lastly, the kicks may also significantly affect the NS eccentricity distribution, especially for extremely cuspy density profiles. Initially, all CO progenitors begin on a thermal eccentricity distribution. Note that a thermal distribution may not accurately describe the eccentricity distribution at the GC <cit.> but is used here as a proxy. In Figure 5, we display the changes in NS eccentricity due to the kicks for three different density profiles with a shallow (α = 0), intermediate (α = 1.5), and steep (α = 3) distribution.
For shallow initial stellar distributions (α = 0, 1.5), the post-kick eccentricity distribution follows the initial thermal distribution at lower eccentricities and drops slightly when e > 0.7. When considering the steeper distribution near α = 3, the orbits tend toward circularization, resulting in a higher proportion of orbits characterized by low eccentricities. §.§ Extreme Mass Ratio Inspirals (EMRIs) Extreme Mass Ratio Inspirals (EMRIs) are GW emission events that take place when stellar-mass COs inspiral onto SMBHs. They are one of the prime science motivators of the future Laser Interferometer Space Antenna (LISA) and other mHz detectors <cit.>. Natal kicks can drive a CO into the SMBH <cit.>. To estimate whether a kick resulted in an EMRI, we compare two timescales. One describes the characteristic GW decay timescale, t_GW,EMRI ≃ (5/64) c^5 a^4 (1-e^2)^(7/2) / (G^3 m_∙^2 m), where c is the speed of light, e is the post-kick eccentricity of the object around the SMBH, and a is its semimajor axis <cit.>. The other timescale is the two-body relaxation timescale, t_relx, which is the result of weak kicks from other neighboring objects. On one hand, these kicks can result in EMRIs by changing the angular momentum of the orbit and driving it into the loss cone. On the other hand, the kicks can increase the angular momentum, yielding a more circular orbit and thus suppressing the formation of an EMRI. Following <cit.>, we classify an orbit as an EMRI if t_GW,EMRI < (1-e) t_relx is satisfied. We convert the number of EMRIs in our simulations to the number of EMRIs within the sphere of influence, as expected from the M-σ relation. As shown in Figure <ref>, we find that EMRI formation is sensitive to the initial stellar distribution surrounding the SMBH. In particular, the expected number of EMRIs ranges from nearly 0 EMRIs for a shallow cusp (α = 0) to 150 EMRIs for a steep cusp (α = 3). Considering a stellar profile that closely resembles the one observed in the GC <cit.>, we expect fewer than 10 EMRIs driven by natal kicks. For all initial stellar profiles, the majority of EMRI progenitors are formed within 10^-1 pc and are the result of NSs inspiraling onto the SMBH. We find that 98% (92%) are NS-EMRIs and 2% (8%) are BH-EMRIs, for α = 0 (3). We note that for α≤2 the expected number of EMRIs from this channel is lower than the expected number of EMRIs from two-body relaxation <cit.>, and orders of magnitude lower than the expected number of EMRIs in SMBH binaries <cit.>. For the extreme cusp case, i.e., α≥ 2.5, the expected number of combined NS and BH EMRIs is comparable to the lower limit of the SMBH binary case. We suggest that extreme cusp profiles may also contribute to the revised stochastic background estimates presented in <cit.>. We reserve this calculation for future studies. §.§ X-Ray Binaries Inner binaries that survive m_1's natal kick can have their orbital separation decrease. We classify systems as X-ray binaries (XRBs) if the inner binary post-kick pericenter drops below a_Roche. We find that 3.3 · 10^-3 NS-XRBs form per NS and 1.2 · 10^-3 BH-XRBs form per BH in our simulations for all values of α other than α = 3. There is a decrease in the XRB fraction for α = 3 because there is a significant decrease in the number of initial stellar binaries (see Table <ref>). We find that the formation of XRBs is related to the properties of the inner binary and is independent of the binary's outer orbital parameters, such as the distance from the SMBH.
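Referring back to the EMRI and XRB selection criteria above, a minimal sketch of the bookkeeping might look as follows. The two-body relaxation time t_relx is treated as an externally supplied input, and the unit constants are standard conversions rather than values taken from this work.

```python
import numpy as np

G = 4.30091e-3          # pc (km/s)^2 / Msun
C_KMS = 2.998e5         # speed of light, km/s
PC_KMS_TO_YR = 9.78e5   # 1 pc / (1 km/s), expressed in years

def t_gw_emri_yr(a_pc, e, m_co, m_smbh=4.0e6):
    """GW decay timescale of the post-kick CO--SMBH orbit, per the (5/64) form above."""
    t = (5.0 / 64.0) * C_KMS**5 * a_pc**4 * (1.0 - e**2)**3.5 / (G**3 * m_smbh**2 * m_co)
    return t * PC_KMS_TO_YR

def is_emri(a_pc, e, m_co, t_relx_yr):
    """EMRI if the GW decay time beats (1 - e) times the two-body relaxation time."""
    return t_gw_emri_yr(a_pc, e, m_co) < (1.0 - e) * t_relx_yr

def is_xrb(a1_post, e1_post, a_roche):
    """XRB candidate if the post-kick inner pericenter drops below a_Roche (same length units)."""
    return a1_post * (1.0 - e1_post) < a_roche
```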
From Figure <ref>, we expect nearly 400 XRBs to be formed within the sphere of influence due to natal kicks. Of these, 94% are NS-XRBs and 6% are BH-XRBs. §.§ Tidal Disruption Events (TDEs) Tidal disruption events (TDEs) occur when m_1's natal kick disrupts the stellar binary and the pericenter of the m_2 - SMBH orbit drops below the SMBH tidal radius, r_t ∼ R_*(m_∙/m_*)^1/3, where R_* is the radius of the star and m_* is its mass. We further require that r_t is greater than the SMBH Schwarzschild radius and that m_2 passes within the tidal radius before its own natal kick to classify the system as a TDE. TDEs are a rare outcome of natal kicks acting on binaries. We find that no TDEs driven by natal kicks are expected to occur within the sphere of influence of the SMBH. TDEs are instead expected to result from two-body relaxation processes <cit.>, and in SMBH binaries <cit.>. §.§ Inner Binary GW mergers The natal kicks can also direct the surviving inner binaries into regions of the parameter space where GR effects trigger a gravitational wave (GW) merger within a timescale shorter than the evaporation timescale at the GC. The inner binary gravitational wave merger timescale due to GR effects is <cit.>: t_GW ∼ (5/256) c^5 a_1^4 (1-e^2)^(7/2) / [G^3 (m_1 + m_2) m_1 m_2]. We label a system as a GW merger if t_GW < t_evap. In some cases, the EKL-induced eccentricity oscillations play a significant part in inducing a GW merger. If the EKL timescale is shorter than the GR precession timescale, we describe the EKL-induced GW merger timescale as t_GW,EKL ∼ (5/256) c^5 a_1^4 (1-e_1,max^2)^3 / [G^3 (m_1 + m_2) m_1 m_2], where e_1,max is the maximum EKL-induced eccentricity and is estimated following <cit.>. GW mergers are weakly dependent on the assumed initial stellar distribution and will result in ∼ 10 GW mergers within the sphere of influence of the SMBH. § PREDICTIONS FOR THE ROMAN SPACE TELESCOPE §.§ Compact Object Distribution beyond 1 pc Consider a 3 Gyr star formation episode within 1 pc of the SMBH <cit.> [Note that the young stellar population at the GC is estimated to have an age of a few Myr <cit.>, and while this population is interesting in its own right, it provides negligible predictive power for the Roman Space Telescope.]. Within 1 pc, all NS and BH progenitors are initially orbiting the SMBH, but the natal kicks unbind a significant fraction of COs from the SMBH potential, as described above (see Table <ref>). As expected, the percentage of COs that remain bound to the SMBH increases for a steeper initial stellar distribution. As a test case, we focus on the α = 1.5 distribution. This density distribution is close to the observed GC stellar distribution <cit.>, and agrees with the constraints in Figure <ref>. With α = 1.5, 91% of unbound systems are single COs (average speed of ≃ 575 km s^-1) and 2% are CO binaries (average speed of ≃ 300 km s^-1). The remaining 7% are ejected during their stellar lifetime due to their companion's natal kick and will undergo their own supernova explosion outside the sphere of influence. These hypervelocity stars (average speed of ≃ 600 km s^-1) can briefly be observed for 10^6 - 10^7 years before becoming COs and contributing to the CO distributions. The combined gravitational potential of the Milky Way (MW) is significant enough to slow down the majority (∼ 70%) of the systems unbound from the SMBH; these remain bound to the MW potential, with orbits scattered around the Galactic plane.
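Returning to the disruption and merger criteria above, the TDE and inner-binary GW-merger checks can be sketched as below (an illustration under stated assumptions, not the authors' implementation): the Peters coefficient 5/256 is used for the merger time, and the numerical constants are standard unit conversions.

```python
import numpy as np

G = 4.30091e-3          # pc (km/s)^2 / Msun
C_KMS = 2.998e5         # km/s
PC_KMS_TO_YR = 9.78e5   # 1 pc / (1 km/s) in years
AU_PC = 1.0 / 206265.0
RSUN_AU = 4.65e-3

def is_tde(r_peri_pc, r_star_rsun, m_star, m_smbh=4.0e6):
    """TDE if the m_2--SMBH pericenter lies inside the tidal radius, which must
    itself exceed the SMBH Schwarzschild radius."""
    r_t = r_star_rsun * RSUN_AU * AU_PC * (m_smbh / m_star) ** (1.0 / 3.0)   # pc
    r_s = 2.0 * G * m_smbh / C_KMS**2                                        # pc
    return (r_peri_pc < r_t) and (r_t > r_s)

def t_gw_binary_yr(a1_au, e1, m1, m2):
    """Peters-type merger time of the surviving inner binary (years)."""
    a1 = a1_au * AU_PC
    t = (5.0 / 256.0) * C_KMS**5 * a1**4 * (1.0 - e1**2)**3.5 / (
        G**3 * (m1 + m2) * m1 * m2)
    return t * PC_KMS_TO_YR

def is_gw_merger(a1_au, e1, m1, m2, t_evap_yr):
    """Labelled a GW merger if the merger time beats the local evaporation time."""
    return t_gw_binary_yr(a1_au, e1, m1, m2) < t_evap_yr
```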
Here we focus on those COs that remain bound to the MW after 3 Gyr, and their potential detection using the Roman Space Telescope (Section <ref>). We utilize galpy <cit.>, a publicly available Python package for galactic dynamics, to model a simple Milky Way potential. We follow the orbits of all COs (bound and unbound to the MW) beyond the inner ∼ 1 pc. In Figure <ref>, we present the 3D distribution of a sample of COs ejected from the central parsec of the GC in a galactocentric coordinate frame. We display the positions of COs and a few selected orbits 100 Myr after the star formation event and within a radial distance of 500 pc and 5 kpc, respectively. In both panels, the orbits cross within the inner regions of the GC, consistent with what is expected for objects expelled from this region and falling back into the MW potential. The slow BH kick prescription results in the BHs being concentrated closer to the GC than the NSs. The fast BH kick prescription results in the same density distribution of BHs and NSs, since by definition the fast BH kick prescription is matched to the observationally determined NS kick distribution. §.§ The Relation between Galactic Latitude and Kick Prescription <cit.> recently analyzed the distribution of COs, including natal kicks, from the entire Galactic population (thin disk, thick disk, stellar halo, and bulge). As suggested in Figure <ref>, the COs originating from the GC may also reach large distances. Below, we compare the GC population to the full Galactic population. In Figure <ref>, we depict the Galactic latitude distribution of compact objects ejected from the central parsec of the GC after 3 Gyr and within a galactocentric cylindrical radius of 8 kpc. Nearly 70% of NSs (left panels) and fast kick BHs (right panels) are located at least a degree off the Galactic plane, whereas only 20% of slow kick BHs exhibit the same characteristic. Due to the strong natal kicks, the distribution of neutron stars and fast kick black holes peaks near 3^∘ off the Galactic plane. Notably, there is a subset of objects (∼ 21%) expelled from the central parsec that are completely unbound from the Milky Way. The distribution of slow kick BHs from the central parsec is concentrated within 1^∘. The decline beyond a few degrees is attributed to the comparatively lower velocities of the natal kicks. We propose that the GC population can be differentiated from the rest of the Galactic population. In Figures <ref> and <ref>, we compare our results to the publicly available simulation data in <cit.>; note that the COs from the Galactic population are ∼ 10^4 times more numerous [Note that the <cit.> COs were integrated up to the present day for continuous star formation in the Milky Way, while our COs were integrated to the present day from a single star formation episode in the GC 3 Gyr ago]. As shown in Figure <ref>, the Galactic population's NSs and BHs are preferentially located at higher Galactic latitudes compared to the GC population, thus allowing for the potential differentiation of these populations. In Figure <ref>, we display the spatial and velocity distributions of the two BH populations ejected from the central parsec of the GC. As expected, the slow kick BHs are more concentrated toward the GC and remain closer to the Galactic plane compared to the fast kick BH population (see left panel). We note that the Galactic population of NSs (and slow kick BHs) in <cit.> extends well beyond the GC distribution in both the x and z directions.
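As an illustration of the orbit-following step (a sketch, not the configuration actually used in this work), a single ejected CO can be integrated in a simple Milky Way potential with galpy; the initial phase-space values below are placeholders, and the call pattern follows the galpy Orbit/MWPotential2014 interface.

```python
import numpy as np
import astropy.units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# Placeholder post-kick state in Galactocentric cylindrical coordinates.
init = [0.001 * u.kpc,            # R
        450.0 * u.km / u.s,       # vR
        120.0 * u.km / u.s,       # vT
        0.0 * u.kpc,              # z
        260.0 * u.km / u.s,       # vz
        0.0 * u.rad]              # phi

orbit = Orbit(init)
times = np.linspace(0.0, 3.0, 3001) * u.Gyr
orbit.integrate(times, MWPotential2014)

# Present-day Galactocentric position (used to build latitude/velocity histograms).
R_now, z_now = orbit.R(times[-1]), orbit.z(times[-1])
```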
This is because the natal kicks are occurring throughout the galaxy and are not localized to the GC.The right panel shows that the galactic population can reach higher velocities (max. ∼ 870 km sec^-1) while the GC population attains slightly lower velocities (max. ∼ 730 km sec^-1). §.§ COs unbound to the Milky WayCOs with velocities exceeding the escape velocity of the Milky Way are unbound to the Milky Way. 21% of all NSs within the central parsec are unbound to the Milky Way by 3 Gyr (average speed of ≃ 800 km sec^-1, at 100 kpc from the center). As expected, the percentage of BHs unbound from the Milky Way depends on the underlying kick prescription. The fast kick BHs follow the NS percentage, while slow kick BHs only result in 2% of BHs being unbound to the Milky Way (average speed of ≃ 1650 km sec^-1, at 100 kpc from the center). §.§ Distinguishing between kick prescriptions with gravitational microlensingThe different BH natal kick prescriptions predict different distributions of compact objects as a function of Galactic latitude. Fast kicks result in an increasing number of BHs at increasing latitudes up to about 2 - 3^∘ off the Galactic Plane, while slow kicks result in a decreasing number of BHs at increasing latitudes (Figure <ref>). Thus, if the number density of BHs as a function of latitude can be mapped, it would provide a way to determine the type of natal kicks BHs receive.Gravitational microlensing can be used to measure the masses and velocities of dark massive objects in our Galaxy; for a detailed explanation, please see <cit.>. In brief, when a foreground object (such as a BH) aligns by chance with a background star along an observer's line of sight, the gravitational field of the foreground mass deflects the background star's light. The observer sees a transient brightening (photometric microlensing) and positional deflection (astrometric microlensing) of the background star. These two signals can then be used to measure the mass, velocity, and distance of the unseen lens. Gravitational microlensing has been proposed as a method to measure the mass distribution of compact objects toward the Galactic Bulge <cit.>.An isolated stellar-mass BH has recently been detected and characterized with microlensing, using ground-based survey photometry and Hubble Space Telescope follow-up astrometry <cit.>. This BH lens has been used to constrain the properties of natal kicks <cit.> as well as whether the progenitor system was binary or single <cit.>.The Nancy Grace Roman Space Telescope (Roman Space Telescope), NASA's next flagship mission scheduled to launch by 2027, will conduct several wide-field infrared surveys. Its Galactic Bulge Time Domain Survey (GBTDS) is designed to discover thousands of cold exoplanets via gravitational microlensing <cit.>. The notional design of the GBTDS[Referred to as “WFIRST Cycle 7" in <cit.>.] will observe an area of ≃ 2 deg^2 around 1.5^∘ off the Galactic Plane, avoiding regions within a degree of the GC. In addition to exoplanets, the Roman Space Telescope could also detect and characterize hundreds of BHs via photometric and astrometric microlensing, as well as a comparable number of neutron stars if the astrometric precision is sufficient <cit.>. With its photometric precision, the Roman Space Telescope could also be used to study the population of compact objects in a statistical manner with photometric microlensing <cit.>. 
A detailed study is beyond the scope of this work, but we suggest that the Roman Space Telescope has the ability to study BH natal kicks and distinguish between slow and fast kicks. In particular, including an additional pointing toward the GC in the GBTDS would enable the measurement of the BH density as a function of latitude, and enable the determination of the BH kick speed. We note that a broad range of other science cases would also be enabled by a field at the GC <cit.>. § DISCUSSION AND CONCLUSION Neutron stars, and perhaps even black holes, receive large natal kicks at birth, with an expected average speed of 400 km sec^-1 <cit.>. Here we consider a GC population of massive stars (both single and binary), with different initial density distributions ρ∼ r^-α, with α∈[0,3]. The GC offers a unique opportunity to study the conditions surrounding SMBHs that probably also take place in other galactic nuclei. Focusing on the post-kick density distribution and comparing it to observations allows us to infer the initial stellar distribution at our GC. The kicks in the vicinity of the SMBH may redistribute the orbital configuration of the COs around the SMBH, as well as unbind the binary itself. Adopting a kick distribution with an average kick velocity of 400 km sec^-1 implies that at ∼ 0.107 pc from the SMBH, the velocity dispersion around the SMBH is similar to the average kick magnitude. Thus, overall, we expect that kicks beyond this distance will more likely unbind COs from the SMBH (see Figure <ref>), while those that remain bound <cit.> will migrate deeper into the SMBH potential. The natal kicks at the central parsec significantly affect the CO density distribution, i.e., the dark cusp. Here, we find that natal kicks steepen the resulting compact object density profiles, with most of the steepening occurring within 0.1 pc for NSs and fast BH kicks. The natal kicks are efficient at driving stellar remnants from an initial semimajor axis beyond 0.1 pc, where the majority of the progenitor population is located, to bound orbits within 0.1 pc from the SMBH (Figures <ref> and <ref>). Even when considering slow black hole kicks, the resulting black hole distribution still exhibits a steepening trend, although to a lesser extent (see Appendix <ref>, Figure <ref>). Using the predicted post-natal-kick CO distribution, we constrained the initial stellar profile from limits on the unseen mass within S0-2's orbit. Specifically, observations suggested that ∼ 4000 M_⊙ reside interior to S0-2's orbit <cit.>. Assuming that this unseen cusp is composed of stellar remnants such as stellar-mass BHs and NSs, we infer the initial stellar density distribution. Considering the standard population proportions of 0.26:0.014:2.3× 10^-3 for white dwarfs, neutron stars, and black holes <cit.> within S0-2's orbit, an initial stellar profile with α≥ 2 leads to a compact object density distribution that is incompatible with the mass constraints, as depicted in Figure <ref>. This result is consistent with current observations of the stellar density distribution, which suggest a slope close to unity. We note that if we adopt an unseen mass smaller than ∼ 3000 M_⊙ interior to S0-2's orbit <cit.>, we find a stronger constraint on the initial stellar density of α≤ 1.5. Relating the initial and final distributions in this way is possible because two-body relaxation and collision effects have negligible effects on the final distribution at these stages <cit.>.
Also, note that some theoretical arguments suggested that the unseen mass inwards to S0-2's orbit is consistent with the existence of intermediate-mass BH <cit.>. In this case, the inferred initial stellar distribution may be even shallower. In addition to the steepening of the CO density profiles, natal kicks naturally lead to the creation of EMRIs, X-ray binaries, TDEs, and binary GW mergers. From these, EMRIs are the most sensitive to the initial stellar profile, with a few hundred EMRIs expected for the steepest stellar profiles, as depicted in Figure <ref>. TDEs and binary GW mergers are less sensitive to the initial stellar profile, and we'd only expect a handful of them. The number of EMRIs and TDEs expected from natal kicks is largely negligible compared to two body relaxation processes around a single SMBH <cit.>, both are much lower compared to the expectation in SMBH binaries <cit.>.Unsurprisingly, X-ray binaries are unaffected by their distribution around the SMBH because the orbital properties of the inner binary directly affect the occurrence rate of X-ray binaries.A significant fraction of compact objects are unbound from the SMBH due to their natal kicks and may be potential microlensing events detectable by the the Roman Space Telescope. As a proof of concept, we follow the unbound compact objects formed from an initial distribution of α = 1.5. This distribution is consistent with our aforementioned findings as well as with the observed GC stellar distribution <cit.>. We follow these COs as they migrate throughout the galaxy for 3 Gyr (see Figure <ref>). The adopted kick prescription is reflected in the spatial distribution of the compact objects in the galaxy. Specifically, slow-kick BHs ejected from the GC are concentrated closer toward the Galactic Plane, while fast-kick BHs and NSs are preferentially located at higher galactic latitudes.Lastly, we compared the GC COs distribution to the expected galactic COs distribution and found that these two populations are potentially distinguishable. Particularly, the GC population is slightly slower (Fig. <ref>) and presents a longer tail towards low galactic latitude (Fig. <ref>). The GBTDS expected field of view for the Roman Space Telescope is located in a galactic latitude range to possibly untangle the true underlying kick prescription for BHs. § ACKNOWLEDGEMENTSWe thank David Sweeney for useful discussions.S.N. acknowledges the partial support from NASA ATP 80NSSC20K0505 and from NSF-AST 2206428 grant as well as thanks Howard and Astrid Preston for their generous support. C.Y.L. acknowledges support from NASA FINESST grant No. 80NSSC21K2043 and a Carnegie Fellowship.§ CO DENSITY DISTRIBUTION The total number of COs at any given time is conserved because no COs are destroyed or added to the initial population.Therefore, dN_t(r)/dr = dN_b,0(r)/dr + dN_u,0(r)/dr ,where dN_t(r) is the number of CO progenitors that are initially formed, dN_b,0(r) is the number of bound CO progenitors before applying the effect of their natal kick, and dN_u,0(r) is the number of CO progenitors that will be unbound due to their natal kick, all of which are in a bin of width dr at a radius (r) away from the SMBH.After the natal kicks, the COs will be scattered to different values of r and in some regions, there will be an overabundance of COs and in others a dearth. We can determine what the new slopes for the bound and unbound population will be. 
At a given value of r, we can compute the number of COs that now inhabit the region, relative to the initial number of CO progenitors, to determine the new slope. Dividing Equation <ref> by dN_t(r)/dr yields 1 = dN_b(r)/dN_t(r) + dN_ub(r)/dN_t(r). In the case that dN_ub(r)/dN_t(r) ≪ 1, using the fact that dN = 4 π r^2 n dr for a spherical distribution, where n is the power-law density cusp n = n_0 r^-α, we obtain 1 = f_b r^(α_t - α_b), where f_b = n_b(r)/n_t(r) is the relative number density between the initial population and the bound population in a bin of width dr at a radius r away from the SMBH. Equation <ref> can be rearranged to calculate the post-kick α value of the resulting CO distribution: α_b = α_t - log(1/f_b)/log(r). To determine the resulting CO density slopes, we generate a post-kick histogram distribution of COs in log space. For each bin where the fraction of unbound COs is less than 5%, we apply Equation <ref> to determine the post-kick value of α. The steepest initial profiles have a larger unbound fraction closer to the SMBH and provide fewer measurements of the value of α at each point. In the cases where α has noticeable variations, we determine the mean value of α to generate the slope lines in Figure <ref>. § THE SLOW KICK BLACK HOLE DENSITY DISTRIBUTION In Figure <ref> we show the BH progenitor and BH density distributions after the natal kicks (left and right panels, respectively). The post-kick slopes are estimated using the same analytical method applied to the NS distributions (see Appendix <ref>). The resulting distribution of BHs becomes steeper, with the degree of steepening being less pronounced for initially steep distributions. By applying a density criterion to the BH density profiles, determined from the unseen mass within the orbit of S0-2, we can establish constraints on the expected initial stellar profile in the GC <cit.>. The conservative upper limit is determined by assuming that the entire enclosed mass is composed of stellar-mass black holes. This limit is represented as the highest vertical black line in Figure <ref>. From this we can conclude that an initial stellar profile of α < 3 is consistent with this criterion. Note that the NS density profile provides a more stringent constraint because the resulting CO profiles are steepened by the stronger kicks. If white dwarfs and neutron stars also make up a portion of the mass fraction within S0-2's orbit, with the typical population fractions from <cit.>, then the upper limit is denoted by the lower vertical black line in Figure <ref>. Here, only initial stellar profiles with α < 2 are allowed by the mass constraint. With a mixed population of COs, both the NS and BH density profiles converge on an upper limit, regardless of the kick distribution. | http://arxiv.org/abs/2310.17707v1 | {
"authors": [
"Carlos Jurado",
"Smadar Naoz",
"Casey Y. Lam",
"Bao-Minh Hoang"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231026180816",
"title": "Natal Kicks from the Galactic Center and Implications on their Environment and the Roman Space Telescope"
} |
Nanostructured Superconductors [ January 14, 2024 ==============================Transformed Gaussian Processes (s) are stochastic processes specified by transforming samples from the joint distribution from a prior process (typically a ) using an invertible transformation; increasing the flexibility of the base process.Furthermore, they achieve competitive results compared with Deep Gaussian Processes (s), which are another generalization constructed by a hierarchical concatenation of s. In this work, we propose a generalization of s named Deep Transformed Gaussian Processes (s), which follows the trend of concatenating layers of stochastic processes. More precisely, we obtain a multi-layer model in which each layer is a . This generalization implies an increment of flexibility with respect to both s and s. Exact inference in such a model is intractable. However, we show that one can use variational inference to approximate the required computations yielding a straightforward extension of the popular inference algorithm <cit.>. The experiments conducted evaluate the proposed novel s in multiple regression datasets, achieving good scalability and performance.§ INTRODUCTIONAlthough neural networks present highly accurate results on classification and regression tasks, they do not offer uncertainty estimations associated with the predictions made, which are mandatory in some fields. For example, in medicine, a typical problem is cancer detection using histopathological images, where given an image (or a set of images), the models try to determine if the patient has cancerous tissue or not <cit.>. In this case, the model must offer an interpretable (well-calibrated) output for the pathologists, who may use the output of the model for diagnosis. Bayesian learning offers a solution for this problem as it automatically outputs an estimation of the associated prediction uncertainty. However, it comes at a cost: computational tractability. The posterior distribution is most of the time intractable, so it has to be approximated using techniques such as variational inference. Gaussian Processes (s) are a very powerful, non-parametric model that allows inference in the function space by placing a prior distribution on the target latent function <cit.>. These models have been studied extensively in the literature <cit.>, leading to different generalizations. The most popular are Deep Gaussian Processes (s) <cit.>, which use the output of aas the input to another , increasing the expressiveness of the resulting model. Also, the usage of transformations on the prior and likelihood of thehas been explored. Using Normalizing Flows <cit.>, the Transformed Gaussian Process () <cit.> extends standard s by transformingthe prior distribution of the , which is no longer Gaussian, using an invertible transformation. In this work, Deep Transformed Gaussian Processes (s) are introduced as a novel form of extending Transformed Gaussian Processes by concatenating the output of a to another . This model aims to improve the performance and the uncertainty estimation of previous models by being more flexible than them and thus being able to model more complex data. Inference in s is intractable, so the Evidence Lower Boundis approximated using Monte Carlo samples, using a straightforward extension of the algorithmproposed for inference in s by <cit.>. 
The usage of normalizing flows between each layer, leads to an increment in flexibility, adding a small term to the total computational cost.To validate the proposed model, we conduct extensive experimentation with s. With this goal, we employ a specifically designed toy dataset and eight real datasets from the repository, which have been used to test the performance of s in comparison with other state-of-the-art models. Our results show that s obtain better or comparable results in almost all datasets.§ PRELIMINARIES The problem we aim to solve is to infer an unknown function f: ^D → given noisy observations = (y_1⋯,y_N)^T at locations = (1,⋯, N), where in our problem ∈⊆ℝ^D. Gaussian Processes place a prior on f such that the distribution of all function values is jointly Gaussian, with a mean function μ: 𝒳→ and covariance function K: 𝒳×𝒳→ <cit.>. Since the computational cost of exact inference in s scales cubically with N, a set of inducing locations = (1,⋯,M) with M << N is considered <cit.>, aiming to reduce this cost. We denote = f() and = f(). The joint distribution of , andisp(, , ) = ∏_i=1^N p(y_i | f_i)_likelihoodp(|;, ) p(;)_ prior,whereis supposed to follow the same prior as : p() = 𝒩 (| m(), K(, )). Some authors have explored the possibility of transforming the prior or the likelihood distributions <cit.>.In this sense, using two mappings ,, the whole modelling process can be generalized as∼𝒢𝒫 (μ(), K(, )); = ()() =+ ϵ;ϵ∼𝒩(0, Σ) In this work, we are interested in transforming the prior usingand maintainingas the identity. For this we consider to be given by an invertible transformation (Normalizing Flow), as in <cit.>, followed by a hierarchical concatenationas in <cit.>. However, from now on will just denote the invertible transformation. Since the composition of invertible transformation remains invertible and differentiable, we defineto be the composition of K invertible transformations = 0∘1∘⋯∘K-1. This composition helps to increase the flexibility of the flow as much as we want. Also, in each of the steps of the flow the parameters θ_k may depend on the input via a transformation such as a Neural Network (), giving rise to Input-Dependent () normalizing flows, which yield a non-stationary process with well-inductive biases <cit.>. In this case, parameters are given by functions θ:×→ℝ which we denote as . The Transformed Gaussian Process () is defined by the generative process given by composing a sample 0 of a with a normalizing flow:∼𝒢𝒫(μ( ), K( , )),= ().Following <cit.> we consider element-wise mappings, which produce diagonal Jacobians. This, together with the application of the inverse function theorem and change of variable formula, leads to the distribution: p(| , ) = p(|) k.We denote a= a. Unlike in standard s, inference in s is intractable. The posterior distribution is efficiently approximated using a variational distribution q(,) = p(|) q(), which is chosen to contain a factor equal to the exact conditional prior p(|) and a marginal variational distribution q() = 𝒩(|, ). A cancellation of both the conditional prior and the Jacobianis achieved in the variational giving: ℒ_ =q()log p(| () -q()p().This provides a bound that can be evaluated in NM^2 + M^3 + NLK, where L is the computational cost of the normalizing flow. 
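To make the generative process of a TGP concrete, the sketch below (ours, not code from this paper) pushes a base GP sample through a composition of element-wise flows; the arcsinh parameterization is one common choice of marginal flow, and its exact form here is an assumption.

```python
import numpy as np

def arcsinh_step(f, a, b, c, d):
    """One element-wise invertible step: f -> a + b * arcsinh((f - c) / d)."""
    return a + b * np.arcsinh((f - c) / d)

def gp_sample(x, ell=1.0, rng=np.random.default_rng(0)):
    """Sample from a zero-mean GP prior with an RBF kernel (the base process)."""
    K = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2) + 1e-8 * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K)

def tgp_sample(x, flow_params):
    """One TGP draw: a GP sample composed with K element-wise flow steps."""
    f = gp_sample(x)
    for a, b, c, d in flow_params:
        f = arcsinh_step(f, a, b, c, d)
    return f

x = np.linspace(-3.0, 3.0, 50)
draw = tgp_sample(x, [(0.0, 1.0, 0.0, 1.0), (0.1, 0.8, 0.0, 0.5)])
```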
When flows are used, a Bayesian treatment can be considered by assigning a prior distribution p() to the weights of the , see <cit.>.§ DEEP TRANSFORMED GAUSSIAN PROCESSESIn this section, we present Deep Transformed Gaussian Processes (s) which generalize s through their hierarchical composition. Following the fashion of s, the output of the is used as the input of another , recursively defining a stochastic process:A Deep Transformed Gaussian Process () is a collection of random variables {Kh,l}_h=1,l=1^H^l,L with a hierarchical dependency such that Kh,l({Kh,l-1}_h=1^H^l-1). Each of those functions follows the generative process:h,l∼ (μ(·), K(·, ·));h,l∼ p^l_λ(h,l) = c^l(, h,l);Kh,l|,0h,l = h,l(0h,l)where 0 =, c is a transformation that generates coefficients for the normalizing flow h,l, and h=1,…,H^l is the depth of each layer. By this construction, the joint distribution is given by:p({h,l}_h=1,l=1^H^l,L)= l=1Lh=1H^lh,l|μ({h,l-1}_h=1^H^l-1),K({h,l-1}_h=1^H^l-1,{h,l-1}_h=1^H^l-1)×kh,lh,l<ref> shows the structure of a . Clearly, it can be shown that the proposed architecture is very flexible and contains as special cases a , a , and a , which can all be recovered by selecting an appropriate number of layers and normalizing flows. The usage of normalizing flows between the layers of the is expected to allow the model to encode prior expert knowledge about the problem, as it already occurs in s <cit.>. From now on,we will consider H^l = 1 for all l to simplify the notation. The subindex ^l_h will also be omitted as it can be understood from the context. With these assumptions, the joint prior distribution of ais given by:p(,) = n=1N p(y_n |L (0,nL))_Likelihoodl=1L p(l|l) l|l p(l) l_ Prior,where the diagonal Jacobian appears in each of the layers, representing the transformation of the prior as the novelty from 's prior definition. Since the posterior distribution is intractable, a variational distribution that maintains the exact conditional prior <cit.> is chosen following <cit.>:q()= l=1L p(l|l) l|l q(l)l. This choice of variational distribution allows for term cancellations in the which implies a gain in computational cost. In virtue of the Law of the Unconscious Statistician (), the expression of the is the following:ℒ_ = n=1Nq(L|L-1)l=1L-1q(l|l-1) log p (y_n |^L (L))_ELL + l = 1L q(l) q(l)_. Since q(l) and p(l) are both Gaussian, the divergence can be computed analytically. The Expected Log Likelihood () term must be approximated since the expectation under the variational distribution is intractable. We approximate this term using Monte Carlo samples from l=1L q(l|l-1, ). Thus, we achieve computational tractability using two sources of stochasticity as it was done in <cit.>. Firstly, the ELL is a sum across data points, so it can be evaluated using mini-batches. Also, Monte Carlo samples are used to approximate this term. This amounts a computational cost of (NM^2 + M^3 + NLK)(H^1 + ⋯ + H^L) which, compared to the cost of s, adds the cost of computing the normalizing flow NLK. As it will be shown later, the inference algorithm (Alg. <ref>) is a straightforward extension of the one presented in <cit.>. §.§ Using Bayesian Priors on Flows Input Dependent flows compute the parameters of the normalizing flow using a neural network that outputs a set of parameters for each of the flows ^l_h = (, l). A Bayesian treatment can be given to these parameters l. 
Firstly, it is assumed that the distribution of the parameters of the flows are independent between the layers, that is:p() = l=1L p(l) Now, following the observations in <cit.>, the new joint prior model and the variational posterior factorizes as:p(,, ) = p(,|) p(), q(, ) = q(|) q(). Using these expressions, the in Equation (<ref>) slightly changes. A new term appears, acting as a regularizer for the distribution of the flow parameters. The complete notation ^L_^L(,l) is recovered to remark the dependence of the coefficients obtained by . Then, expression takes the form:ℒ_ =q( , )p(|L (L))_- l = 1L q(l) p(l)__1 - l = 1Lq(l)p(l)__2 .§.§ Predictions In this work, we considered a that always uses an identity flow in the last layer L(L) = L. First, this allows us to compare directly the modeling advantage that the flows provide in the inner layers compared to identity flows (). Note that since both inference algorithms rely on the same assumptions, we can attribute the performance difference to additional expressiveness provided by the flows, and not to an improved approximation through a better inference algorithm (using <cit.>). The aforementioned simplification has another important advantage: the latent function values at the last layer L remain Gaussian as a consequence of the linear transformation. This simplifies computing expectations w.r.t. the likelihood. More precisely, it leads to a model that has more flexibility than the s due to the normalizing flows between the layers, but that allows closed-form marginalization of the latent function values L at the last layer (given L-1), as in s. Importantly, note that the only difference between the proposed inference algorithm for s and the algorithm for s relies on the fact that the samples of each of the hidden layers are passed through a non-linearity. Algorithm <ref> shows the evaluation, where the remarked part is the added difference with respect to the . Having an input , this input is propagated through the layers and S Monte Carlo samples are used to approximate q(KL), which has the form:q(KL) ≈1/Ss=1S𝒩(KL|_qf(K,(s)L-1), _qf(K,(s)L-1,K,(s)L-1)),where, if we name α() = K^L(L, L)^-1K^L(, ), _qf() = m^L() + α()^T(L - m(L)) _qf(i,j) = K^L(i,j) -α(i)^T( K^L(L,L) - L) α(j),where m^L, K^L, L, L, L denote the mean function, kernel, inducing locations, variational mean and variational variance of the layer L, respectively.The same happens with the predictive distribution for the labels y. We can now assume a Gaussian likelihood to obtain:p(y |) = ∫ q(KL|KL-1) p(y |KL)d KL,= 1/S∑_s=1^S 𝒩(KL|_qf(K,(s)L-1), _qf(K,(s)L-1,K,(s)L-1) + σ^2 I),where σ^2 is the variance of the likelihood and I is the identity matrix.0.66§ RELATED WORK The sparse approaches to Gaussian Processes have allowed these models to be computationally tractable when the number of training data points becomes large <cit.>. Some works have also studied the usage of Harmonic Features in s and their relation to deep models has been studied by <cit.>.Deep Gaussian Processes <cit.> have been extensively studied in recent literature as a generalization of the s that increase their flexibility.Also, the usage of Normalizing Flows <cit.> to transform probability density functions has become an active research field <cit.>. Some works warp thelikelihood to increase its flexibility <cit.>. In our case, we are more interested in the models that warp the prior distribution of the , leading to an improvement inthe performance of transformed model <cit.>. 
Efficient Transformed Gaussian Processes (s) also increase the speed of standard sparse s methods in multi-class classification tasks <cit.> so they could be also considered as the base model for concatenation, which we will do in future work. Our work builds on <cit.> and extends it by applying a concatenation of the Transformed Gaussian Process following the fashion of the previously presented s. This same idea has been used in different works that use different generalizations of s to define more complex models. In <cit.>, the concatenated model is the Implicit Processes (), and in <cit.> the way the covariance is computed is changed, leading to a more efficient model than s. Our work differs from the previous ones in the form of extending the s by using Normalizing flows and also having the possibility of using input-dependent transformations. Since inference remains intractable in our model, we follow a similar approach to <cit.>, taking samples from the posterior that in our case are warped using the normalizing flow. § EXPERIMENTSs have been carefully examined in an environment where s do not perform at their best. To this end, <cit.> presents a toy dataset with a step-wise function where the s have difficulties due to the smoothness that the RBF kernel imposes in the prior. <ref> shows the visual comparison of s versus s in this toy dataset. s use the Steptanh flow, given by a linear combination of the hyperbolic tangent function <cit.>. We observe that the proposed model shows not only a better mean prediction of the data but also a good uncertainty estimation where the function value changes. Also, the functional form of the normalizing flowin <ref> confirms that using expert information about the problem in the normalizing flows (in this example, a flow that induces a step) can be very helpful for s.The last experiments using s have been conducted using 8 real datasets from the UCI repository. In particular, the datasets used in <cit.> have been chosen, exchanging the Naval dataset by Yacht. The goal of these experiments is to compare the s with the s, again with the objective of testing if adding normalizing flows in between the layers results in better performance. As per usual, a 10% test size has been chosen and each experiment has been performed using 20 different random seeds, averaging the results. The initializations of the variational and kernel parameters have followed the ones in <cit.>, to replicate their experimental environment.The number of layers used is 2, 3, 4 and 5 for each of the models, using M = 100 inducing points in each . All the models are trained for 80.000 iterations using a fixed batch size of 200 and a learning rate of 10^-2. Regarding normalizing flows, it must be remarked that in this work all the layers share the same functional form with different trainable parameters in each layer. Also, we have fixed noninput dependent arcsinh flow as the used normalizing flow, since it performed well in some initial experiments. Lastly, H_l = 1 for every l=1,…,L. Using the configuration mentioned above, the results obtained in terms of Negative Log Likelihood () for each method are shown in Figure <ref>. The dot indicates the mean of the 20 splits, and the bars indicate the standard deviation divided by √(20). 
It can be observed thats achieve better scores in two of the datasets (Kin8nm and Power), slightly better results in mean in other two (Boston, Protein), comparable results in three datasets (Energy, Redwine and Yacht) and slightly worse results (in some cases comparable) in Concrete. Numeric values are shown in Table <ref>. These results also can give some more insight on s. While most of the time increasing the number of layers improves the performance in s, in s sometimes the performance becomes worse. This points out again the difficulty of training the s since the performance should normally be maintained as the number of layers is increased. Also, the increase in expressiveness may lead to data overfitting, which was already observed in s <cit.>. Further work includes Bayesian flows as a way to prevent this overfitting.Another aspect that must be carefully examined is the time used by each method. To measure this, we measure the elapsed time in the first 1000 iterations. The results are shown in Figure <ref>. We observe that the training time in seconds becomes more significant as the number of layers grows.Specifically, awith 4 layers needs almost the same computational time as a 5 layer .§ CONCLUSIONSWe have presented Deep Transformed Gaussian Processes (s), a model based on the concatenation of Transformed Gaussian Processes (s). s increase the expressiveness of both s and s, by transforming the predictive distribution in each layer. Also, we have derived a further extension of the model where Bayesian priors are placed on the transformation that computes the flow's parameters.s inherit the intractability of their base models. Due to this variational inference is used to find an approximation to its true posterior distribution. The derivation of the Evidence Lower Bound for s has been presented. Furthermore, we have shown how to evaluate this lower bound using Monte Carlo samples. In the performed experiments in the toy data, the improvements that the nonlinearities of the s offer have been shown using different types of normalizing flows. This improvement is reflected in the real datasets, where the proposed implementation of the model achieves better or at worst comparable results to the model. § ACKNOWLEDGMENTS The authors gratefully acknowledge the use of the facilities of Centro de Computacion Cientifica (CCC) at Universidad Autónoma de Madrid.The authors also acknowledge financial support from the Spanish Plan Nacional I+D+i, PID2019-106827GB-I00 and PID2022-139856NB-I00, and from the Autonomous Community of Madrid (ELLIS Unit Madrid).§ DTGP DERIVATIONSThis is a complementary section in which the derivations of the Deep Transformed Gaussian Process model are presented. § DTGP PRIOR The DTGP prior in Equation (<ref>) is obtained as follows:p(,) = p(|) p( ) Conditional independence + layer factorization = p(|L) l=1L p(l, l) Change of Variable = p(|L (L))l=1L p(l, l) l, l = n=1N p(y_n |L (0,nL))l=1L p(l|l) p(l) l, l = n=1N p(y_n |L (0,nL))_Likelihoodl=1L p(l|l) p(l) l, l_ DTGP Prior § EVIDENCE LOWER BOUND IN DTGPSRecalling that when using marginal flows = ,, the ELBO in Equation (<ref>) is derived as follows: ℒ() = q( )logp(, )/q() =q( )logp(|L (L)) l=1Lp(l|l) p(l)l, l/l=1Lp(l|l )q(l) l,l = q( )log p(|L (L) ) + l=1Lq(l, l)logp(l) / q(l) LOTUS = q( )log p(|L (L)) + l=1Lq(l, l)logp(l) / q(l)= q( )log p(|L (L) )_Expected Log Likelihood - l = 1L q(l) p(l)_ The Expected Log likelihood term (ELL) requires further development. We operate with the expectation as an integral. 
Recalling that p(|L(L)) factorizes over the data, we can observe that, naming =Qlog p(|L (L) )= ∫ q() logn=1Np(_n |L (L)) 1,…,L d 1,…,L = ∫log( n=1N p(_n |L (L)) ){∫l=1L p(l|l; l-1, l) q(l) d 1,…,L}1,…,L = ∫log(n=1N p(_n |L (L)) ){∫l=1L p(l|l; l-1, l) q(l) 1,…,L}d 1,…,L = ∫log(n=1N p(_n |L (L)) ){l=1L∫p(l|l; l-1, l) q(l) l}d 1,…,L = ∫log(n=1N p(_n |L (L)) ){l=1L∫q(l,l|l-1, l)d 1,…,L}1,…,L = ∫log(n=1N p(_n |L (L)))l=1Lq(l|l-1, l) d 1,…,L = n=1N∫log p(_n |L (L) ) l=1Lq(l|l-1, l) d 1,…,L = n=1Nl=1L q(l|l-1, l)log p(_n |L (L)) Where, to get from the second to the third equations we have applied the LOTUS rule and, from the third to the fourth equation it is used that each integral only depends on one of the factors.§ EXPECTED LOG LIKELIHOOD IN DTGP USING BAYESIAN PRIORSWe can expand the ELL term in Equation (<ref>) following the same procedure done in the previous case. In this case, since the flow depends on the sampled weights, we will denote it as l with l = 1,…, L. The derivations proceed as follows:= ∫ q(, ) logn=1Np(_n |L(L)) 1,…,L d 1,…,L d 1,…,L = ∫ q(|) q() logn=1Np(_n |L (L)) 1,…,L d 1,…,L d1,…,L = ∫ q()logn=1N p(_n |L (L))(∫q(|) d 1,…,L)1,…,Ld1,…,L = ∫ q()logn=1N p(_n |L (L))( l=1Lq(l|l-1, l))d 1,…,Ld1,…,L = ∫log(n=1Np(_n |L (L)))l=1L q(l) q(l|l-1, l) 1,…,Ld1,…,L = n=1N∫log( p(_n |L (L)))l=1L q(l) q(l|l-1, l) 1,…,Ld1,…,L = n=1Nl=1L q(l) q(l|l-1, l)log p( _n |L (L))(1)≈1/Ss=1Sn=1Nl=1Lq(l|l-1, l_s)logp( _n |(L_s,)^L(L))where the approximation in (1) is done using S Monte-Carlo Samples _s ∼ q(). | http://arxiv.org/abs/2310.18230v2 | {
"authors": [
"Francisco Javier Sáez-Maldonado",
"Juan Maroñas",
"Daniel Hernández-Lobato"
],
"categories": [
"cs.LG",
"stat.ML"
],
"primary_category": "cs.LG",
"published": "20231027160939",
"title": "Deep Transformed Gaussian Processes"
} |
Machine Learning Infused Distributed Optimization for Coordinating Virtual Power Plant Assets Meiyi Li, Student Member, IEEE, Javad Mohammadi, Senior Member, IEEE2023-10-25 ===================================================================================================plain plain Neural Radiance Fields (NeRFs) have proven to be powerful 3D representations, capable of high quality novel view synthesis of complex scenes. While NeRFs have been applied to graphics, vision, and robotics, problems with slow rendering speed and characteristic visual artifacts prevent adoption in many use cases. In this work, we investigate combining an autoencoder (AE) with a NeRF, in which latent features (instead of colours) are rendered and then convolutionally decoded. The resulting latent-space NeRF can produce novel views with higher quality than standard colour-space NeRFs, as the AE can correct certain visual artifacts, while rendering over three times faster. Our work is orthogonal to other techniques for improving NeRF efficiency. Further, we can control the tradeoff between efficiency and image quality by shrinking the AE architecture, achieving over 13 times faster rendering with only a small drop in performance. We hope that our approach can form the basis of an efficient, yet high-fidelity, 3D scene representation for downstream tasks, especially when retaining differentiability is useful, as in many robotics scenarios requiring continual learning.§ INTRODUCTION Neural rendering techniques <cit.> continue to grow in importance, particularly Neural Radiance Fields <cit.> (NeRFs),which achieve state-of-the-art performance in novel view synthesis and 3D-from-2D reconstruction.As a result, NeRFs have been utilized for a variety of applications,not only in contentcreation <cit.>, but also for many robotics tasks, including 6-DoF tracking <cit.>,pose estimation <cit.>,surface recognition <cit.> or reconstruction <cit.>,motion planning <cit.>,reinforcement learning <cit.>,tactile sensing <cit.>, anddata-driven simulation <cit.>.However, slow rendering and the qualitative artifacts of NeRFs impede further use cases in production. To render a single pixel, one major bottleneck is the need for multiple forward passes of a multilayer perceptron (MLP).Replacing or augmenting the MLP with alternative representations (e.g., voxel grids <cit.> or feature hash-tables <cit.>) has been used to improve both training and inference speed.Baking NeRFs into other primitive representations has also been a popular approach <cit.> for faster rendering.To reduce artifacts (e.g., “floaters” <cit.>),different sampling methods <cit.>,radiance models <cit.>, andscene contraction functions <cit.>have been proposed.Despite these advancements,NeRFs still suffer from visual flaws and low rendering frame-rates. Importantly, such issues hamper the use of NeRFs for downstream tasks,If rendering is too slow, agents will be unable to apply NeRFs as an internal 3D representation of the scene. Further, the solutions considered (often aimed at applications in computer graphics, for instance) may not be compatible with the requirements of other tasks. For example,meshification <cit.> enables fast rendering, but makes further online learning of the geometry significantly more difficult,due to topological constraints <cit.> and additional optimization complexity (e.g., to handle self-intersections and other unnatural geometries) <cit.>. 
We also do not wish to sacrifice too much representational fidelity (e.g., not including view-dependent effects <cit.>) for speed,as less accurate visual output can limit downstream opportunities forscene analysis.We therefore require a model that is capable of fast rendering and intra-task optimization (i.e., learning during an ongoing task), without sacrificing visual quality. In this paper, we propose an approach for solving these challenges that is orthogonal to existing methods. By leveraging convolutional autoencoders (AEs),we can define a “NeRF” operating in latent feature space(rather than colour space), such that low-resolution latent renders can be decoded to high-resolution RGB renders (see Fig. <ref>). This offloads expensive MLP-based rendering computations to the low-cost AE, greatly improving efficiency. Thus, we extend the standard NeRF architectureto return point-wise latent vectors,in addition to densities and colours (the latter used only in training).Since the decoder is simply another differentiable neural network, the ability to optimize the underlying 3D NeRF field is largely unchanged. As it is used for scene reconstruction, we denote the resulting combined field a Reconstructive Latent-Space NeRF (ReLS-NeRF). Beyond improving rendering speed,the AE can also act as an image prior,fixing some of the artifacts associated with direct NeRF renders, and actually improving representational fidelity.However, we also observe that the use of the AE inReLS-NeRF can introduce unique temporal artifacts, which existing image and video do not capture; hence, we define a novel metric that takes advantage of the geometric structure of the NeRF to detect them.Overall, by fine-tuning a powerful pretrained AE, our model is able to render views several times faster,while empirically improving in multiple image and video quality metrics.Further, we demonstrate a tradeoff between visual quality and rendering efficiency: by reducing the AE size,we obtain a 13-fold speed-up,with only a minor drop in quality. In summary, we contribute(i) a novel approach to reconstructive 3D scene representation, via a latent-space NeRF that both improves rendering efficiency and outperforms existing work on standard image and video quality metrics; (ii) a new evaluation metric, designed to detect temporal artifacts due to view inconsistencies, which existing metrics do not appear to capture; and (iii) the ability to trade-off image quality and rendering speed via varying the AE architecture. § RELATED WORK§.§ Improving NeRF efficiencyWhile NeRFs produce results of extraordinary quality,the speed of fitting (training) and rendering (inference) remains a bottleneck for adoption in a variety of applications(e.g., <cit.>). This has prompted a myriad of approaches to increasing their efficiency. Feature grid architectures have proven effective in expediting fitting convergence (e.g., <cit.>). Other approaches include utilizing depth <cit.>, better initializations <cit.>, andpretraining conditional fields (e.g., <cit.>). Such improvements can be readily utilized in our own framework. Similarly, a number of methods have been proposed to enhance the efficiency of the volume rendering operation, which relies on an expensive Monte Carlo integration involving many independent neural network calls per pixel. 
These include architectural modifications <cit.>,spatial acceleration structures <cit.>,“baking” (precomputing and storing network outputs) <cit.>,improved sampling strategies <cit.>, oraltering the integration method itself <cit.>.Finally, several works eschew volume rendering itself.A number of representations <cit.> use only a single sample per pixel, but struggle with geometric consistency and scalability. Similarly, one can move to a mesh-based representation and use rasterization instead <cit.>;however, this loses certain properties,such as amenability to further optimization or differentiable neural editing. Though our approach improves rendering efficiency,it is orthogonal to these methods, as it reduces the number of MLP calls per image by changing the output space of the NeRF itself.§.§ Feature-space NeRFsOther models have utilized neural feature fields (NFFs), as opposed to “radiance” fields,where rendering is altered to output learned features instead. Some NFFs <cit.> learn to produce the outputs of pretrained 2D feature extractors; similarly, several works have considered the use oflanguage-related features <cit.> and other segmentation signals <cit.>to embed semantics into the NFF. More closely related to our work aregenerative modelling NFFs that decode rendered features into images viagenerative adversarial networks <cit.> or diffusion models <cit.>. In contrast, this paper considers the scene reconstruction problem,using a latent representation potentially amenable to downstream tasks, and investigates issues related to view consistency. In particular, the artifacts of generative methods are similar to those detected by our novel quality metric (namely, appearance inconsistencies across close frames or camera viewpoints; e.g., see<cit.>). § METHODSAs in the standard NeRF scenario,we expect only a set of multiview posed images, S_I = { (I_i, Π_i) }_i. The goal is to learn a 3D scene representation in an autoencoder (AE) latent space, capable of novel view synthesis. Thus, our model includes two neural modules(<ref>):(i) a modified NeRF, f,which outputs a latent vector(in addition to its standard outputs), and (ii) an AE,with encoder and decoder networks, E and D. To fit the model, we apply a multi-stage process:training the AE, fitting the NeRF, and then fine-tuning D (see <ref>). §.§ ReLS-NeRF Neural ArchitectureWe first extend the standard colour-density field of NeRFto include a latent feature vector, z, via f(x,r) = (σ∈ℝ_+, c∈[0,1]^3, z∈ℝ^n), where x and r represent the input position and direction,and σ and c represent the output density and colour. We refer to the σ and c fields as an “RGB-NeRF”,to distinguish them from the latent component of the ReLS-NeRF. Note that the RGB-NeRF is used only in training, to learn the density field and produce renders to help train the latent component(see <ref>). Volume rendering is unchanged:for a single feature at a pixel position, p, we useZ(p) = ∫_t_min^t_max𝒯(t) σ(t) z(t)dt, to obtain the feature value at p,where 𝒯(t) is the transmittance <cit.>, and z(t)=z(x(t),r(t)) is obtained by sampling the ray defined by p. For camera parameters Π,we denote the latent image rendering function asℛ(Π|f) = I_Z(Π), where I_Z[p] = Z(p). Replacing z(t) with c(t), for instance, would render colour in the standard manner, giving a colour image, I_C(Π)(that does not use z). 
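To make the rendering step concrete, the following is a minimal NumPy sketch of the quadrature that approximates Z(p) = ∫ 𝒯(t) σ(t) z(t) dt along a single ray. The `field` callable, the sample count, and the toy field at the end are illustrative assumptions and not the authors' implementation (which uses an MLP-based neural field).

```python
import numpy as np

def composite_weights(sigma, deltas):
    """Standard volume-rendering weights: w_k = T_k * (1 - exp(-sigma_k * delta_k)),
    with transmittance T_k accumulated over all earlier samples on the ray."""
    alpha = 1.0 - np.exp(-sigma * deltas)
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]
    return transmittance * alpha

def render_latent_pixel(field, origin, direction, t_near=0.1, t_far=4.0, n_samples=64):
    """Quadrature approximation of Z(p) = int T(t) sigma(t) z(t) dt along one ray.

    `field(x, r)` is assumed to return (sigma, c, z) for points x and directions r,
    mirroring f(x, r) = (sigma, c, z) in the text; the colour c is unused here."""
    t = np.linspace(t_near, t_far, n_samples)
    deltas = np.full(n_samples, (t_far - t_near) / n_samples)
    points = origin[None, :] + t[:, None] * direction[None, :]
    dirs = np.broadcast_to(direction, points.shape)
    sigma, _c, z = field(points, dirs)
    weights = composite_weights(sigma, deltas)
    return (weights[:, None] * z).sum(axis=0)        # latent feature at pixel p

# Toy stand-in for the neural field: constant density, position-dependent 8-D latent.
def toy_field(x, r):
    return np.full(len(x), 0.5), None, np.tanh(x @ np.ones((3, 8)))

print(render_latent_pixel(toy_field, np.zeros(3), np.array([0.0, 0.0, 1.0])).shape)  # (8,)
```

Rendering a full latent image I_Z then amounts to repeating this per latent pixel, at 1/8 of the RGB resolution.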
To obtain a colour image from I_Z, we simply pass it to the decoder, D;i.e., view synthesis isI_C(Π) = D(I_Z(Π)), which can be viewed as a form ofneural rendering (e.g., <cit.>). The benefit of using _C is that significantly fewer pixels need to be rendered,compared to I_C(Π); it also enables placing a prior on _C by choosing D appropriately. We considered two choices of AE: (i) the pretrained VAE from Stable Diffusion <cit.>,which we denote SD-VAE,and (ii) a smaller residual block-based AE<cit.>(R32, when using a 32D latent space) that is randomly initialized. Both encoders provide an 8× downsampling of the image.§.§ Fitting Process A ReLS-NeRF is optimized in three stages:(A) AE training, (B) joint NeRF fitting, and (C) decoder fine-tuning.AE training (A).The first phase simply trains (or fine-tunes) the AE to reconstruct the training images of the scenes, using the mean-squared error.Joint NeRF fitting (B).In the second phase, we train the RGB and Latent components of the NeRF in conjunction with the decoder, D. Our total loss function,𝔏_B =ℒ_r +λ_d ℒ_d +λ_grℒ_gr + ℒ_p,consists ofthe standard RGB loss on random rays,ℒ_r, the DS-NeRF <cit.> depth loss,ℒ_d, the geometry regularizing distortion loss <cit.>, ℒ_gr, and a patch-based loss for training the latent component, ℒ_p. Given a posed image, (I,Π), the latter loss is simply the error betweena sample patch, 𝒫∼ I, and the corresponding rendered then decoded patch,ℒ_p =𝔼_𝒫∼ I, (I,Π)∼ S_IMSE( 𝒫, D(I_Z(Π)) ). Decoder fine-tuning (C).Finally, we fine-tune D,utilizing a combination of the multiview posed images, S_I, andrenders from the RGB component of the ReLS-NeRF. First, we sample random renders,S_I = { (I_C(Π_s),Π_s)| Π_s∼Γ(S_Π) }_s, where Γ(S_Π) is the uniform distribution over camera extrinsics,obtained by interpolating between any triplet in S_Π. Optimizing 𝔏_C =γδ(S_I) +(1 - γ)δ(S_I), whereδ(S) = 𝔼_(I,Π)∼ SMSE(I, I_C(Π)) and γ∈[0,1] is a weighting hyper-parameter, distills information from the RGB-NeRF into latent renderer. See Fig. <ref>. Note that the real training images, S_I, are used; hence, the RGB-NeRF is not strictly a ceiling on performance(further, the presence of D implies different generalization properties). §.§ Implementation DetailsWe utilize the neural graphics primitives <cit.> architecture,via thelibrary <cit.>. All phases use Adam <cit.> for optimization. We remark that the loss gradient from the latent componentof the NeRF (i.e., from ℒ_p) is not back-propagated to the colour, c, and density, σ, fields. Further, we use separate features for the latent feature vector, z, and c,but render with the same σ. In other words, RGB-NeRF training is unaffected by z. For additional details,we refer the reader to our appendix.§ EXPERIMENTS§.§ Evaluation Metrics §.§.§ Pixelwise and perceptual distancesWe measure performance with novel view synthesis on held-out test views. In addition to the standard pixelwise peak signal-to-noise ratio (PSNR), we use perceptual losses to measure similarity,including LPIPS <cit.> and DreamSim <cit.>. 
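The losses used in phases B and C can be made concrete with a schematic sketch. The rendering and decoding callables below are stand-ins, the patch size is an arbitrary choice, and only γ = 0.7 is taken from the reported hyper-parameters; this is a hedged illustration of ℒ_p and 𝔏_C, not the training code.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def patch_loss(image, latent_render_fn, decoder, patch=64, rng=np.random.default_rng(0)):
    """L_p: error between a random image patch and the decoded latent render of it.

    `latent_render_fn(y0, x0, h, w)` stands in for rendering I_Z over that patch
    (at 1/8 of the RGB resolution, matching the 8x AE downsampling), and `decoder`
    maps the latent render back to an RGB patch of the original size."""
    H, W, _ = image.shape
    y0 = int(rng.integers(0, H - patch))
    x0 = int(rng.integers(0, W - patch))
    target = image[y0:y0 + patch, x0:x0 + patch]
    return mse(target, decoder(latent_render_fn(y0, x0, patch, patch)))

def decoder_finetune_loss(real_pairs, distilled_pairs, render_and_decode, gamma=0.7):
    """L_C = gamma * delta(S_I) + (1 - gamma) * delta(S~_I), where delta(S) is the
    mean MSE between the images in S and their decoded latent renders."""
    delta = lambda pairs: float(np.mean([mse(img, render_and_decode(pose)) for img, pose in pairs]))
    return gamma * delta(real_pairs) + (1.0 - gamma) * delta(distilled_pairs)

# Minimal demo with dummy components (identity decoder, "renders" copied from the image).
rng = np.random.default_rng(1)
img = rng.random((128, 128, 3))
perfect_render = lambda y0, x0, h, w: img[y0:y0 + h, x0:x0 + w]
print(patch_loss(img, perfect_render, decoder=lambda x: x))        # 0.0 for a perfect render
```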
LPIPS provides more human-like responses to low-level distortions (e.g., noise, small colour/spatial shifts),whereas DreamSim is designed to be “mid-level” metric, better capturing large-scale and semantic differences than LPIPS(without being as high-level as, e.g.,CLIP-based metrics<cit.>).§.§.§ Local consistencyWhen examining generative models of NeRFs that use decoders, we can qualitatively seea “shimmering” effect in time (e.g., <cit.>), which is also reminiscentof generative video model artifacts (e.g., <cit.>). This jittering appears related to local appearance inconsistencies:since each latent pixel corresponds to an RGB patch. As Π changes, interpolating in z-spacedoes not perfectly approximate the correct appearance changes. This behaviour is distinct from the artifacts observed in standard NeRFs and we devise a simple metric to detect it: the Reprojective Colour Consistency (RCC) metric. The RCC measures sudden changes in appearance as Π changes,relying on the NeRF geometry to obtain correspondences. Specifically,we reproject one render, I_i,into the reference frame of another, I_i+1, using the NeRF depth, D_i,soRCC = PSNR(𝔼_i[ MSE( I_i+1, Reproj_D_i,Π_i+1I_i )] ),where I_i and I_i+1 are adjacent video frames.Notice that occlusions and view-dependent lighting effects will confound the RCC; however, these effects will (i) be relatively minimal across adjacent frames and (ii) be shared for the same scene, enabling it to be a fair comparative metric.§.§.§ Video qualityAs noted above, adding a temporal dimension can make certain artifacts more perceptually detectable. We therefore applied a recent video quality metric,DOVER <cit.>,to NeRF-rendered videos. DOVER has two components:DOVER-aesthetic (DoA),which focuses on high-level semantics, and DOVER-technical (DoT),which detects low-level distortions(e.g., blur and noise). DOVER and the RCC are applied to 120-frame “spiral video” renders from the NeRF (as in LLFF <cit.>).§.§ Reconstruction Quality and TimingWe display our evaluation in Table <ref>,as well as timing measurements in Table <ref>, using eight LLFF scenes <cit.> (see also Fig. <ref> for qualitative examples)[Images in Figs. 1-4 available in https://drive.google.com/drive/folders/1M-_Fdn4ajDa0CS8-iqejv0fQQeuonpKFLLFF <cit.> under a https://creativecommons.org/licenses/by/3.0CC BY 3.0 License.], at 1008×756 resolution. We see that ReLS-NeRF(i.e., decoding a rendered latent feature map) with the SDVAE actually has superior novel view image quality,while having superior inference speed (three times faster). In particular, the low-level metrics,including PSNR, LPIPS, and DoT, all prefer ReLS-NeRF-SD over the standard colour NeRF. This is likely due to the fine-tuned decoder fixing artifactsincurred by the colour NeRF, as can be seen in Fig. <ref>. The higher-level, more semantic metrics are more mixed: DreamSim prefers the RGB-NeRF,while DoA slightly favours ReLS-NeRF-SD.Among reference-based metrics, the semantically-oriented DreamSim is the only one by which the RGB-NeRF outperforms ReLS-NeRF-SD.Since DreamSim is a single-image metric, it is insensitive to temporal artifacts; however, DreamSim is known to be more sensitive to foreground objects <cit.>. Interestingly, we qualitatively observe that ReLS-NeRF tends to improve image quality the most in scene areas far from the camera, where geometry is generally poorer quality – i.e., in the background (see Fig. <ref>). 
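As a concrete reference for the RCC defined above, one plausible realization is sketched below. The `reproject` helper (depth-based warping plus a validity mask for occluded or out-of-view pixels) is assumed rather than specified in the paper, so this should be read as an illustrative sketch of the metric, not its exact implementation.

```python
import numpy as np

def psnr(mse_value, max_val=1.0):
    return 10.0 * np.log10(max_val ** 2 / mse_value)

def rcc(frames, depths, poses, reproject):
    """Reprojective Colour Consistency over consecutive rendered video frames.

    `reproject(img, depth, pose_src, pose_dst)` is a hypothetical helper that warps
    `img` (rendered at pose_src) into the reference frame of pose_dst using the NeRF
    depth map, returning the warped image and a boolean mask of valid pixels."""
    errors = []
    for i in range(len(frames) - 1):
        warped, valid = reproject(frames[i], depths[i], poses[i], poses[i + 1])
        squared_diff = (frames[i + 1] - warped) ** 2
        errors.append(squared_diff[valid].mean())
    return psnr(np.mean(errors))

# Sanity check with an identity "reprojection" and nearly identical frames -> high PSNR.
rng = np.random.default_rng(0)
frames = [np.full((8, 8, 3), 0.5) + 1e-3 * rng.standard_normal((8, 8, 3)) for _ in range(3)]
identity = lambda img, depth, src, dst: (img, np.ones(img.shape, dtype=bool))
print(rcc(frames, [None] * 3, [None] * 3, identity))
```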
Thus, one might speculate that such improvements are simply going unnoticed for DreamSim, which tends to focus on foreground objects of greater semantic importance.In addition, we find that the RCC prefers the RGB-NeRF over ReLS-NeRF. Though it is hard to see in still images, ReLS-NeRF has slight temporal “jittering” artifacts, which the RCC is designed to detect. We remark that other algorithms show similar view-inconsistencies across close frames (e.g., 3D generative models <cit.> or video generators <cit.>), and could potentially benefit from RCC estimates. We illustrate this phenomenon with some examples in Fig. <ref>. Due to the learned decoder,unexpected appearance changes can occur across viewpoints.However, per-frame metrics, such as the traditionally applied LPIPS and PSNR, do not capture such inconsistencies; hence, ReLS-NeRF outperforms the RGB-NeRF on them (Table <ref>).Interestingly, even the video metrics (DoT and DoA) prefer ReLS-NeRF, suggesting such algorithms are more sensitive to the cloudiness and noise artifacts of the standard NeRF, compared to the small jitters incurred by the neural rendering process. In other words, by most metrics of quality (including the primary standard ones, PSNR and LPIPS), ReLS-NeRF is superior.Finally, we show that the trade-off betweenrendering efficiency and image qualitycan be controlled by changing the AE architecture.Using R32 reduces inference time by ∼92%, while decreasing test-view PSNR by only 0.15, compared to the RGB-NeRF rendering process. In contrast to ReLS-NeRF-SD, while ReLS-NeRF-R32 does sacrifice some image quality(e.g., ∼0.4 PSNR loss), it also reduces inference time by ∼76%. One can imagine choosing an architecture with the right level of trade-off for a given task. §.§ AblationsWe find that removing phase C is devastating to ReLS-NeRF,causing PSNR to drop to 22.85 (SD) and 20.87 (R32). Since the SDVAE is pretrained,ablating phase A has little effect on ReLS-NeRF-SD; however, doing so for ReLS-NeRF-R32 reduces PSNR by 0.1. Note that the latter case trains the decoder, D, alongside the NeRF and then alone, in phases B and C.§ DISCUSSION We have shown that ReLS-NeRF can improve image quality,while being several times faster to render.Inparticular, the SD-based ReLS-NERF is superior on the main metrics commonly used to evaluate NeRFs on test views (i.e., PSNR and LPIPS), as well as on a state-of-the-art reference-free video quality estimator.Empirically, we observed that current image and video evaluation metrics do not obviously capture temporal artifacts that are characteristic of ReLS-NeRF, caused by view-inconsistent appearance changes (due to the learned component within the rendering process). Hence, we introduced a simple metric for detecting such anomalies.Further, we have demonstrated a tradeoff between efficiency and quality, which can be controlled by the architecture of the AE. Importantly, to obtain its speedup, ReLS-NeRF does not “bake” the scene or transform to a mesh; hence, e.g., it could still be continually trained online in the standard fashion. In other words, it retains a number of useful properties of standard NeRFs (e.g., differentiability and access to an implicit 3D shape field), while gaining additional efficiency and image quality. 
For many robotics tasks, fast differentiable rendering is a key component for online learning of 3D scene representations.This includes simultaneous localization and mapping, navigation, and modelling the dynamics of the environment (i.e., ensuring the internal representation is up-to-date, given perceptual inputs). We feel that ReLS-NeRF is well-suited for such situations,as it retains differentiability, while improving rendering efficiency and even image quality as well. Other promising future directions include utilizing different AEs to provide task-specific biases(e.g., for 3D scene editing, faster speed, or higher image quality), improving the AE architecture to suit this scenario (e.g., devising a geometry-aware decoder), and better customizing the volume rendering process to latent space rendering (e.g., using a learned mapping instead of volume integration).§ APPENDIX §.§ Additional Implementation DetailsWhen training, we usedλ_d = 0.1, γ = 0.7, andλ_gr = 10^-3 / 2. The NeRF architecture was the same as previous works based on Instant-NGP (see <cit.>). The LLFF scenes used were, , , , , , , and .§.§ Fitting Hyper-ParametersPhase A. The SDVAE/R32 NeRFs were optimized for 500/3000 iterations, using learning rates of 10^-4/4× 10^-4. The learning rates were halved at 150, 300, and 450 iterations (SDVAE) and every 500 iterations for R32. Patches of size 512^2 were used, with batch sizes of 3/5.Phase B. The joint optimization was run for 20K iterations. We used 4096 rays for the colour and DS-NeRF losses, each. The latent loss, ℒ_p, is computed via 32^2 latent-space patches. The learning rate (excluding the VAE)starts from 10^-2 andis decayed according to10^-2× (10^-1)^ t / τ, where t is the step iteration and τ=10^4. The VAE is optimized with a fixed learning rate of 10^-4.Phase C. Decoder fine-tuning proceeds for 3000/10000 iterations for the SDVAE/R32 architecture. A batch size of three was used (one from S_I and two from S_I). Note that we render 512 images from the RGB-NeRF to act as supervision (i.e., |S_I| = 512). The process starts from a learning rate of 10^-4,and is decayed by 0.5 every 1000/2500 iterations. §.§ R32 ArchitectureThe encoder, E, has the following structure: , , , , , , , , . The components are as follows:is a conv-5×5--elu block;is two residual blocks <cit.>,each using conv-3×3 and ;is a bilinear halving downscaler; andis just a conv-1×1.The encoder has layer sizes of .The decoder, D, has the following structure: , , , , , , , , , . Components are the same,except thatis a bilinear doubling upscaler. The decoder has layer sizes of .Both networks use theELU non-linearity <cit.> and instance normalization <cit.>as .IEEEtran | http://arxiv.org/abs/2310.17880v1 | {
"authors": [
"Tristan Aumentado-Armstrong",
"Ashkan Mirzaei",
"Marcus A. Brubaker",
"Jonathan Kelly",
"Alex Levinshtein",
"Konstantinos G. Derpanis",
"Igor Gilitschenski"
],
"categories": [
"cs.CV",
"I.2.10"
],
"primary_category": "cs.CV",
"published": "20231027035208",
"title": "Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D Scene Representations"
} |
[NO \title GIVEN] [NO \author GIVEN] January 14, 2024 ====================== The Amazon rainforest (ARF) is threatened by deforestation and climate change, which could trigger a regime shift to a savanna-like state. Previous work suggesting declining resilience in recent decades was based only on local resilience indicators. Moreover, previous results are potentially biased by the employed multi-sensor and optical satellite data and undetected anthropogenic land-use change. Here, we show that the spatial correlation provides a more robust resilience indicator than local estimators and employ it to measure resilience changes in the ARF, based on single-sensor Vegetation Optical Depth data under conservative exclusion of human activity. Our results show an overall loss of resilience until around 2019, which is especially pronounced in the southwestern and northern Amazon for the time period from 2002 to 2011. The demonstrated reliability of spatial correlation in coupled systems suggests that in particular the southwest of the ARF has experienced pronounced resilience loss over the last two decades.§ INTRODUCTIONThe Amazon rainforest (ARF) is the most biodiverse region of our planet, and serves as a major carbon sink <cit.>. Yet, the efficiency of its carbon uptake has been declining over the last decades <cit.>, with the ARF becoming carbon neutral and even acting as a carbon source during the two one-in-a-century droughts in 2005 and 2010<cit.>. The ARF's important role in the global carbon cycle thus means that its existence and stability are crucial for climate change mitigation, especially as the planet continues to warm in the later parts of the century <cit.>.Studies suggest that there is a critical mean annual precipitation (MAP) value at which parts of the forest might irreversibly transition into a savanna-like state <cit.>.In such a scenario, forest dieback would likely be self-amplifying, i.e. the non-linearity of such an abrupt transition would result from positive feedback mechanisms operating in the region. Besides fire <cit.>, the main feedback mechanism that could amplify dieback in the ARF is related to moisture recycling<cit.>.Moisture is transported at low atmospheric levels via the trade winds from the tropical Atlantic to the Amazon basin, where it precipitates. A substantial fraction is taken up by the vegetation and transpired back to the atmosphere, or evaporates from the complex surfaces of plants. This evapo-transpirated water is then transported further west and south over the Amazon and towards the Andes by low-level jets, sometimes called atmospheric rivers <cit.>. The low-level circulation itself is amplified by condensational latent heating over the Amazon basin, strengthening the large-scale atmospheric heating gradient between ocean and land <cit.>.Two main mechanisms have been proposed that may activate positive feedback cycles and push the ARF towards a critical threshold <cit.>. On the one hand, anthropogenic global warming will cause increased temperatures over the Amazon basin, which could lead to increased evapo-transpirative demand without a corresponding increase in water supply via precipitation, especially during a potentially intensifying and prolonging dry season <cit.> and severe droughts <cit.>. 
This could additionally lead to decreased convection and a reduction of moisture inflow from the Atlantic <cit.>; moreover, models from the Coupled Model Intercomparison Project Phase 6 (CMIP6) project an overall drying in tropical South America in response to increasing atmospheric greenhouse gas concentrations. Hence, global warming could drive the system towards destabilization <cit.>. Furthermore, deforestation can lead to a critical decrease of evaporated moisture transported downstream, and to an additional reduction of moisture inflow due to a decreased heating gradient, further pushing the ARF toward a critical threshold <cit.>. The decrease in precipitation that would occur beyond such a threshold would also cause degradation outside the ARF region. Such vegetation carbon losses have been predicted by several CMIP6 models in parts of the Amazon basin, preceded by an increasing amplitude of seasonal temperatures <cit.>. Furthermore, observations have shown that the Amazonian dry season is increasing in length <cit.>, exacerbated by the three severe droughts that have occurred since 2005 <cit.>. In view of these projected and observed trends and the global relevance of the ARF, monitoring changes in its resilience is of great importance. As the data-driven monitoring of resilience changes and the anticipation of critical transitions is important for many parts of our climate, including the ARF, a considerable amount of research has focused on developing and applying such methods.Existing methods focus on the detection of distinct signs of resilience loss, where resilience is defined as a system's ability to recover from perturbations; the underlying mathematical concept is derived from dynamical system theory. Under the assumption that resilience loss can be dynamically represented by an approaching (codimension-1) bifurcation, the approach to the critical forcing value at which the bifurcation is accompanied by a weakening of the equilibrium restoring forces in the system, resulting in slower recovery from small perturbations. This is termed `critical slowing down' (CSD). During CSD, the variance and lag-1 autocorrelation (AC1) of the system state increase, as they are directly linked to the recovery rate from perturbations <cit.>. Yet, in the case of spatio-temporal data, calculating the variance and AC1 of individual point locations does not exploit the information that is potentially encoded in the spatial dimensions, and in particular misses the interactions between different locations. In <cit.> the authors argue that when a system consists of several coupled units, a decrease in the units' recovery rates causes increasing correlation between two coupled cells. Variance and AC1 have recently been confirmed to quantify resilience empirically at global scale, by comparing theoretical estimates of the recovery rates based on classical CSD indicators to direct estimates of the recovery rate from VOD time series sections showing recovery from abrupt disturbances <cit.>.Recent results in <cit.> have revealed that large parts of the ARF's vegetation biomass show increasing AC1, implying a loss of resilience that has been especially pronounced since the early 2000s. The studies in <cit.> as well as <cit.> used multi-satellite data to extend the period of time under study <cit.>. Yet, <cit.> subsequently showed that such merging of sensors can create spurious changes in higher-order statistics such as variance and AC1 when those are calculated from multi-instrument time series. 
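The link between the recovery rate and these indicators can be illustrated with a minimal linear (AR(1)) perturbation model of two uni-directionally coupled units. This is a generic textbook-style sketch (the coupling strength, noise level, and recovery rates are arbitrary choices, not values from any of the cited studies), but it shows variance, AC1, and the correlation between the coupled units all rising as the recovery rate decreases.

```python
import numpy as np

rng = np.random.default_rng(42)

def coupled_ar1(phi, coupling=0.3, n=5000, noise=1.0):
    """Two linearly coupled AR(1) units; phi = exp(-lambda * dt) encodes the recovery rate.

    Unit 2 forces unit 1 (uni-directional coupling, loosely mimicking downstream
    moisture supply). Returns the two perturbation time series."""
    x = np.zeros((n, 2))
    for t in range(1, n):
        eps = noise * rng.standard_normal(2)
        x[t, 1] = phi * x[t - 1, 1] + eps[1]
        x[t, 0] = phi * x[t - 1, 0] + coupling * x[t - 1, 1] + eps[0]
    return x

def ac1(y):
    return np.corrcoef(y[:-1], y[1:])[0, 1]

for lam in (1.0, 0.5, 0.1):          # slower recovery corresponds to smaller lambda
    phi = np.exp(-lam)
    x = coupled_ar1(phi)
    print(f"lambda={lam:.1f}  AC1={ac1(x[:, 0]):.2f}  "
          f"var={x[:, 0].var():.2f}  "
          f"corr(unit1, unit2)={np.corrcoef(x[:, 0], x[:, 1])[0, 1]:.2f}")
```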
The non-stationarity induced in the time series by the change of sensors with different orbits, intrinsic noise, and radiometric resolutions may result in statistical signals which can be misinterpreted as resilience changes. It is thus recommended to investigate resilience changes in a system using single-sensor instrument records when available. In this work we thus exclusively use single-sensor data: in particular, Amazon vegetation indices based on Vegetation Optical Depth (VOD), see Fig. <ref>. While indicators of CSD could react differently to changes in the measurement process or to systemic changes that are not related to CSD, different indicators that are related to CSD via the recovery rate should behave consistently whenever resilience changes. We thus ensure the robustness of our results by comparing the calculated spatial correlation to corresponding estimates of the AC1 and variance. To further ensure robustness, we compare different data sources as recommended in <cit.>. It has been suggested that different areas of the ARF can be considered part of a coupled system connected by spatial interactions via evapo-transpiration, low-level winds, and moisture recycling <cit.>. Since the trade winds transport moisture from east to west, we assume that coupling cells uni-directionally is a reasonable simplification of the plant-water moisture transport system in the Amazon <cit.>. The almost laminar flows can be investigated as separate trajectories, thereby allowing a reduction to one average dimension. Based on this relationship, we set up a simple conceptual model with asymmetric interaction (see Fig. 2) to validate variance, AC1, and spatial correlation as indicators of CSD and, hence, of resilience loss in ARF vegetation. We then calculate all three resilience indicators for four single-sensor VOD satellite data sets, which serve as a proxy for the water content of the ARF's above-ground biomass. Finally, based on these results we discuss changes in the resilience of the ARF.
§ MATERIALS AND METHODS §.§ Description and parametrization of the conceptual model To demonstrate the relevance of the three indicators of CSD in a setting like the ARF, we slightly modify a previously introduced conceptual model of atmosphere-vegetation interaction <cit.>. The vegetation dynamics of the model are inspired by the global dynamical vegetation model VECODE <cit.>, which is based on an empirical relationship between vegetation cover fraction and atmospheric conditions. In particular, the equilibrium vegetation V^* is a monotonic (sigmoidal) function of precipitation, see Equation <ref>. The vegetation dynamics are simulated as a linear relaxation toward this empirical equilibrium, and the equilibrium vegetation V^* is a direct function of the total incoming precipitation P_i in [mm/year], namely V^* = 0 if P_i < P1, V^* = 1 if P_i > P2, and V^* = 1.03 - 1.03 ( 1 + α/exp(ϕ)^2 (P_i - P1)^2 )^-1 otherwise. Here, P_i represents the amount of precipitation in cell i, as the spatial extension of our model consists of 10 cells. While VECODE is based on an empirical relationship between vegetation cover fraction and atmospheric conditions, we here interpret the vegetation variable V^* as a property linked to the tree cover in the Amazon (e.g., tree cover fraction or biomass).
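A short numerical sketch of this equilibrium curve is given below; it uses the parameter values (α, β, ϕ) and the expressions for P1 and P2 reported in the following paragraphs, and is only meant to make the shape of the piecewise function explicit.

```python
import numpy as np

# Parameter values as reported in the parametrization below (alpha, beta, phi).
ALPHA, BETA, PHI = 0.0011, 280.0, 2.45
P1 = BETA * np.exp(PHI / 2)                                # lower precipitation threshold (~953 mm/yr)
P2 = P1 + np.exp(PHI) / np.sqrt(0.03 * ALPHA)              # upper precipitation threshold

def v_equilibrium(p):
    """Equilibrium vegetation V*(P): 0 below P1, 1 above P2, sigmoidal in between."""
    p = np.asarray(p, dtype=float)
    middle = 1.03 - 1.03 / (1.0 + ALPHA / np.exp(PHI) ** 2 * (p - P1) ** 2)
    return np.where(p < P1, 0.0, np.where(p > P2, 1.0, middle))

for p in (500, 1000, 1500, 2000, 2500):                    # mean annual precipitation in mm/yr
    print(p, float(np.round(v_equilibrium(p), 3)))
```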
We motivate this interpretation by the conceptual nature of the model approach, and the fact that tree coverage in the original VECODE version is modeled by a very similar approach as vegetation coverage (with a curve that is shifted to higher precipitation rates). Making the model more complex by distinguishing plant types would hence not add to our analysis.The parameters P1 and P2 in Eq. <ref> are given by P1= β·exp(ϕ/2) P2= β·exp(ϕ/2) + exp(ϕ)/√(0.03 α) . Parametrization is chosen in a similar way to <cit.> but adapted such that the range of bistability in dependence of precipitation is closer to that in the ARF. Namely, the parameters are α = 0.0011, β = 280, and ϕ = 2.45. Thus, the vegetation V_i ∈ [0,1]of a cell i at time t+1 is given by V_i^t+1 = V_i^t +V^*(P_i^t) - V_i^t/τΔ t , where P_i^t is itself a function of the vegetation state V^t. Namely, following <cit.>, we implement an atmosphere-vegetation feedback in our model by forcing the precipitation with the vegetation, installing the moisture-advection feedback as the coupling mechanism. While this coupling can in principle capture many land surface mechanisms, with this focus, the resulting feedback mechanisms can then be considered to act uni-directional from east to west following the fluxes in the atmosphere over the Amazon <cit.>. Thus, the cells can be thought as in a row with cell #1 representing the most downstream region (southwestern Amazon) and cell #10 the eastern-most cell (East coast of South America). Then, the precipitation from moisture recycling P_i^recycled is a sum of vegetation in the cells to the east of cell i (cells with higher indices j≥ i), weighted by the distances. Mathematically, it can be expressed as P_i^recycled = ρ·∑_j≥ i1/j-i+1· V_j and acts a coupling between the cells. The scaling factor ρ is set to 600. Note that each cell's precipitation is also dependent on its own vegetation cover V_i.Additionally, in each cell the precipitation depends on the amount of moisture that arrives from the Atlantic ocean, which is represented by the control parameter B, as well as on the amount of its own and more eastern cells' vegetation. This background precipitation is the product of a scaling factor s_i times B(t). The scaling factors_i = max{0.1· i,0.2} resembles the amount of precipitation that results directly from the ocean. Hence, it is highest in the east (s_10 = 1.0) and decreases towards the west (s_1=s_2=0.2). In summary, the total precipitation in cell i is a sum of background precipitation and precipitation P_i^recycled from moisture recycling in the `east' of the cell and given byP_i = B · s_i + P_i^recycled + ση ,where the standard deviation of the white noise is set to σ = 20. A visualization of the deterministic part of precipitation in the model is given in Fig. <ref>.To mimic a climate change scenario with declining moisture inflow from the Atlantic ocean, the bifurcation parameter B decreases linearly from 1100 to 990 over time in our model runs.One should note that due to the different magnitudes of moisture inflow from moisture recycling, the critical value of the control parameter B differs between the cells.The interaction of vegetation and precipitation leads to the stability diagram shown in Fig. 
<ref> and to a transition to a low vegetation state once the critical threshold of precipitation is crossed.While the characteristic relaxation time in VECODE is climate dependent, we here follow <cit.> and set it to a constant value (here: τ = 100 years), which allows a more straightforward interpretation of CSD in the model. The constant τ resembles the inherent timescale of the vegetation system.The time step Δ t = 1 corresponds to one year in accordance with P representing mean annual precipitation. While this contradicts the VOD observations that are available for 10 years, with fluctuations happening on monthly scale, it must be pointed out here that the time scale difference does not matter as a time step unit could be interpreted as e.g. days as well. Realizations of the model are the numerical approximations of the solutions of the given equations by an Euler-Maruyama-scheme with random white noise added to the precipitation. As this analysis focuses on measures that are valid in the vicinity of the fixed point, equilibrium runs are performed. For each change in the bifurcation parameter, 1000 steps of the Euler-Maruyama-scheme are executed, ofwhich the last one is added to the data set as one time step. Moreover, the data of the first time step are the result of 10000 steps with initial values randomly distributed around the fixed point. The simulations shown are based on 1000 time steps. All results are based on 1000 realizations of the stochastic model.§.§ Resilience analysisTo assess changes in resilience based on the concept of CSD, we de-trend and de-season the data sets and calculate the variance, AC1 and spatial correlation in sliding windows. The change is then defined as the time series' linear trends.§.§.§ De-trending and de-seasoning All indicators of CSD are based on perturbations of the state variable around its equilibrium. Hence, the variable to analyze must not contain any trend or seasonality.For the conceptual model, this is achieved by subtracting the equilibrium vegetation state of the corresponding cell from its vegetation state in each realization.For the satellite vegetation data, the trend and seasonality of each VI in each grid cell are removed by applying the Seasonal-Trend decomposition using LOESS (STL). The parameters are set to the default values proposed in <cit.>, but an additional analysis confirms that the general results are robust against variation in the parametrization. §.§.§ Detection of resilience changes The actual indicators of resilience are then computed on sliding windows. For each window, variance and AC1 are calculated grid-cell-wise, with correlation referring to the Pearson correlation coefficient throughout this study. The spatial correlation of a cell i is defined as C_i = 1/n∑_j Cor(r_i, r_j) for j ∈Ω_i, where r_i and r_j are the residuals of the VI in the grid cells i and j on the current time window and Ω_i = {jdist(i,j) ≤ 100km} and n being the size of Ω_i.To put the formula into words, the spatial correlation is the mean of the cell's temporal correlation with all its neighbors, where neighbors are limited to all cells within a given radius. For the conceptual model, the radius is set to 1 and the distance between two cells i and j is defined as i-j. Hence, only directly neighboring cells are considered when computing a cell's spatial correlation value.Regarding the VIs, the radius is 100 km for the main results, where all distances are calculated based on the great circle distance between grid cell centers. 
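The window-wise computation of the three indicators can be sketched compactly as follows. The neighbour lists stand in for the great-circle search within 100 km (or the direct neighbours in the conceptual model), the input is assumed to be already de-trended and de-seasoned residuals, and the toy data at the end are purely illustrative.

```python
import numpy as np

def ac1(y):
    """Lag-1 autocorrelation of a (de-trended, de-seasoned) residual series."""
    return np.corrcoef(y[:-1], y[1:])[0, 1]

def spatial_correlation(residuals, neighbours):
    """C_i: mean Pearson correlation of cell i with its neighbours on one window.

    `residuals` has shape (time, cells); `neighbours[i]` lists the indices of all
    cells within the chosen radius of cell i (here taken to exclude i itself)."""
    n_cells = residuals.shape[1]
    corr = np.corrcoef(residuals.T)                        # (cells, cells) correlation matrix
    return np.array([corr[i, neighbours[i]].mean() for i in range(n_cells)])

def sliding_indicators(residuals, neighbours, window=60):
    """Variance, AC1 and spatial correlation per cell on sliding windows of
    `window` time steps (60 months = 5 years for the satellite data)."""
    T, n_cells = residuals.shape
    out = {"var": [], "ac1": [], "spatial": []}
    for start in range(T - window + 1):
        w = residuals[start:start + window]
        out["var"].append(w.var(axis=0))
        out["ac1"].append(np.array([ac1(w[:, i]) for i in range(n_cells)]))
        out["spatial"].append(spatial_correlation(w, neighbours))
    return {k: np.array(v) for k, v in out.items()}        # each of shape (n_windows, n_cells)

# Toy example: 3 cells on a line, each cell's neighbours are the adjacent cells.
rng = np.random.default_rng(0)
res = rng.standard_normal((120, 3))
nbrs = {0: [1], 1: [0, 2], 2: [1]}
ind = sliding_indicators(res, nbrs, window=60)
print({k: v.shape for k, v in ind.items()})                # all (61, 3)
```

The resilience change of a cell is then simply the slope of a linear regression through each indicator time series.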
However the results are robust against other choices of the radius. For the conceptual model, the size of the windows is set to 100 time steps. In the real data, defining the sliding window size must be done with consideration of the total length of the time series. Large window sizes ensure robust indicator values, but leave less time steps to calculate their corresponding trends or tendencies. In the main analysis, the window size is 5 years, equivalent to 60 data points. The trade-off is especially high in the case of the single-sensor data considered here due to their short availability. Yet, the results are robust with regards to the window size.To evaluate the change in resilience over time, the development of the corresponding indicators of CSD over time is of interest.This change over time is then quantified by the trend, which is defined as the slope of a linear regression. While the trend might be biased by large jumps in comparison to many small changes, the Kendall τ correlation coefficient assesses the time series' tendency, measured as the steadiness of the increase. All results are stated in terms of trends, but are robust when compared to results assessed by the tendency. §.§.§ Time of emergence To assess the reliability of the indicators, we define the time of emergence (ToE) as the time a significant trend emerges for the first time.To calculate the ToE for the conceptual model, the significance in terms of the p-value of each single trend in each time step is calculated. To do so, 1000 phase surrogates of each single realization's vegetation residual are created. For each surrogate, the indicators are then calculated in the same manner as for the actual state variable.For each time step, the trend in the surrogate's as well as original state variable's indicators until this point in time is measured. The p-value is then defined as the one minus the percentile in which the later is when assuming it results from the distribution given by the former. An indicator at a given time and cell is regarded significant, if p < 0.05. The time of emergence is then defined as the first time when the trend becomes significant. The ToE was similarly defined e.g. in <cit.> as the first time a signal-to-noise ratio is above a certain threshold. An indicator of CSD used as an Early Warning Sign (EWS) should preferably warn as early as possible. Thus, indicators are better when their increase is significant long before a Tipping Point. Furthermore, they are more reliable when the spread in the ToE over the realizations is small. § DATA Remote sensing of high biomass regions such as the ARF is challenging for several reasons. Vegetation Indices (VIs) based on optical imaging may fail due to the dense canopy that can lead to asymptotic saturation <cit.>. Moreover, artifacts from persistent cloud cover and aerosols may remain in the processed VI <cit.>.In contrast, VOD <cit.> is derived from microwave satellite observations and linked to vegetation water content <cit.> via which it can be interpreted as an indicator of canopy density and above-ground-biomass. <cit.> showed that VOD is more suitable for a vegetation resilience analysis based on CSD at the global scale. Besides the reliability of the VI, sufficiently long time series are crucial for the analysis of the evolution of vegetation resilience.Yet, while long-time scale merged VOD products exist (e.g. 
VODCA <cit.>), intercalibration techniques in multi-sensor observational products providing long time series can cause artifacts in any resilience analysis that is based on CSD <cit.>. Hence, here we analyse single-sensor data, all of which was recorded over a time span of at least eight years. For the time period from 2000 to 2020 two sensors recorded suitable data. First, the Advanced Microwave Scanning Radiometer Earth Observing System sensor (AMSR-E) <cit.>was active from June 2002 to October 2011. We combine the daily into monthly data by taking averages over full months following <cit.>, so only the time period from July 2002 to September 2011 is analyzed.AMSR-E provides VOD data based on the C- as well as the X-band. The C- and X-bands stem from a sampling of frequencies around 6.9 and 10.7 GHz., respectively. In theory, shorter wavelengths (X-band) are mainly responsive to the moisture content of the canopy <cit.>, while longer wavelengths (C-band) are more sensitive to deeper vegetation layers, including the woody parts <cit.>. AMSR-E's successor AMSR2 <cit.>was launched in July 2012 and is still active. In this sensor the C-band was divided into two frequency bands termed C1 and C2 (6.9 and 7.3 GHz), and VOD data can be derived from both. The first complete month of AMSR2 is August 2012, and due to the availability of the data used to exclude Human Land Use (HLU), AMSR2 is analyzed until December 2020.To access the changes in the Amazon rainforestover the last decades, we use the VOD data derived from the nighttime overpass recordings as they are more suitable for VOD <cit.>, and the spatial resolution is kept at 0.25^∘. §.§ Grid cell selectionTo define natural rainforest grid cells within the Amazon basin (<http://worldmap. harvard.edu/data/geonode:amapoly_ivb>), two datasets are used to check for the following two requirements.A cell must have at least 80 % evergreen broadleaf fraction (EBlF) and 0 % human land use (HLU), extracted from the MODIS Land Cover dataset <cit.> (MCD12C1, Version 6), based on Land Cover Type 1 in percent. This data is available from 2001 until 2020. We define HLU as any of Croplands, Urban and Built-up Lands, and Cropland/Natural Vegetation Mosaics.To be as rigorous and conservative as possible, also any cell with more than one percent forest loss during the same period, 2001-2020, according to <cit.> is excluded. The percentage refers to the accumulated area of forest loss detected over time. This data selection is visualized in Fig. <ref>.§ RESULTS§.§ Resilience indicators in a conceptual model of vegetation-atmosphere moisture recyclingThe conceptual model implements moisture-recycling from east to west in combination with decreasing moisture inflow from the Atlantic ocean. The latter one corresponds to a potential climate change scenario and acts as the forcing in the model.Once the point where the eastern cell tips to a low vegetation state is reached, the abrupt decline in vegetationreduces the inflow of recycled moisture in all the cells further to the west, thus inducing a tipping cascade. It is important to note that within one realization, all cells tip simultaneously, so the spread in the actual tipping point in Fig. <ref>b is a result of stochastic differences between the realizations.Before the Tipping Point, the resulting decline in precipitation is almost linear for all of the cells, compare the zoom-in into precipitation in Fig. 
<ref>.Interestingly, deforestation in the east-most cell mimicked by a artificial linear reduction of this cell's vegetation state also induces a cascade of precipitation and vegetation tipping (see Fig. <ref>).In Fig. <ref>, the vertical black lines mark the time when the total precipitation of at least one cell in at least one realization drops below the critical value for the first time.Thus, only time steps before entering this region are used in the resilience analysis. For all cells, variance, AC1 and spatial correlation as indicators of CSD are calculated on sliding windows and their change is assessed as the linear trends. For all indicators and all cells, the three indicators increase on average with almost no negative trends (see Fig. <ref>). As defined in the Methods Section, the time of emergence (ToE) is the time when the trend of an indicator becomes significant for the first time. In Fig. <ref> the median ToE is highlighted by an orange line and differs both between different cells and between different indicators for the same cell.For the eastern cell, variance is the first indicator to become significant. This early emergence is because this cell is the closest to a standalone tipping point, and so its variance is increasing towards infinity and thus the increase becomes significant earlier in time. This earlier increase shows that the tipping of the eastern cell triggers the cascading tipping of the other cells. Note that for this cell, the spatial correlation does not become significant in all realizations. For the second easternmost cell, the median ToE is comparable between the spatial correlation and the variance. For all other cells except the westernmost the spatial correlation outperforms the other indicators as an EWS (for the westernmost cell the AC1 has the earliest ToE). The comparison to the variance is especially striking, as the variance trend's significance emerges very late or not at all for these cells. One can thus conclude that the AC1 and spatial correlation become significant earlier and more often for the coupled cells, with spatial correlation being ahead in time for most cells.On the other hand, variance exhibits early significant trends for the (almost) uncoupled cells.Furthermore, an indicator is regarded more reliable when its trend consistently becomes significant at one time. This can be translated to a smaller spread in its ToE.It is apparent that the spread in ToE is smaller for spatial correlation for the highly coupled cells, indicating its robustness in systems with strong coupling. However, any indicator can sometimes have spuriously significant trends that lose their significance at a later time, so it is informative to also define a `permanent' ToE. This ToE is defined as the time when the emergence of a significant trend is permanent for the rest of the time period, and is shown in Fig. <ref>.While this definition reduces the large uncertainty in the ToE of AC1, the choice of the method does not alter relations between the median ToEs of the different indicators. The overall result also stays the same: the spatial correlation is the most reliable and earliest indicator to have significant positive trends in coupled cells. 
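For reference, the surrogate-based significance test underlying the ToE can be sketched as follows. Note one simplification: the surrogates here are generated directly from the indicator series, whereas the procedure described above generates phase surrogates of the vegetation residuals and recomputes the indicators on them; the surrogate count is also reduced from 1000 for brevity, and the toy series is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_surrogate(y):
    """Fourier phase-randomized surrogate: (approximately) same power spectrum, random phases."""
    spec = np.fft.rfft(y - y.mean())
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0                                   # keep the mean component real
    surr = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(y))
    return surr + y.mean()

def linear_trend(y):
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

def time_of_emergence(indicator, n_surr=200, alpha=0.05, min_len=10):
    """First time step at which the indicator's trend (from the start up to that
    time) exceeds the (1 - alpha) quantile of trends from phase surrogates."""
    surrogates = np.array([phase_surrogate(indicator) for _ in range(n_surr)])
    for end in range(min_len, len(indicator) + 1):
        trend = linear_trend(indicator[:end])
        null = np.array([linear_trend(s[:end]) for s in surrogates])
        p = np.mean(null >= trend)                    # one-sided p-value for an increase
        if p < alpha:
            return end
    return None                                       # no significant upward trend emerged

# Toy indicator series: white noise plus a slow upward drift.
series = 0.02 * np.arange(200) + rng.standard_normal(200)
print(time_of_emergence(series))                      # first window length with a significant trend
```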
We conclude that, whilst their significance might emerge at different times, all three indicators are expected to increase for spatially extended systems like the ARF when approaching a critical transition.Moreover, the comparatively early and tightly spread time of emergence of the spatial correlation proves its reliability as an indicator of CSD in coupled systems. Consequently, if the ARF is losing resilience we would expect coherent and significant increases in all three indicators in observations of Amazon vegetation, with most pronounced signs in the spatial correlation.§.§ Vegetation resilience in the Amazon basinAs the ARF is a spatially coupled system, similar to the simple moisture recycling model presented above, we expect to see increasing indicators of CSD in case of resilience loss. Three indicators are considered here, namely variance, AC1, and spatial correlation. The change in an indicator's time series is quantified by the linear trend (see Methods section). Fig. <ref> displays the spatial pattern of trends in the individual indicators as found for the different data sets. It is important to keep the different time spans of the two sensors in mind, as the results can only be compared within each sensor's bands.2002 - 2011 (AMSR-E): From July 2002 to September 2011, AMSR-E was active. The changes in the single indicators for AMSR-E's C- and X-band are depicted in Fig. <ref>a)-f). For its C-band, the distributions of trends of all indicators have a positive median, so overall more cells exhibit a positive trend than not.Whilst the spatial distributions of trends in variance, AC1 and spatial correlation are distinct, in all three the positive trends cluster in the southwest and along the northern basin boundary. AMSR-E's X-band has, for all three indicators, an even stronger tendency towards cells with a positive trend. The regions of positive trends comprise those of the C-band, but extend across the whole Amazon basin, with the only exceptions being some cells with non-positive trends in the east, close to the Amazon river. As explained above, in the case of CSD we would expect the changes in all three indicators to be coherent. Fig. <ref> summarizes the results from the three indicators. The maps show the number of indicators exhibiting a positive trend at each grid cell.For AMSR-E's C-band, Fig. <ref>a) confirms that the most prominent patch of positive trends in all indicators is the southwestern part of the Amazon basin. Further signs of extended resilience loss are noticeable along the northern basin boundary. For the X-band, the overall resilience loss of the ARF, yet especially pronounced in the southwest, becomes apparent in Fig. <ref>b and e).The time series of spatial averages suggest that the ARF as an interacting ecosystem has, on average, lost resilience over the years 2002 to 2011, with clear signals in both bands. 2012 - 2020 (AMSR2): Turning to the time period from August 2012 until December 2020, for which VOD data based on AMSR2 was analyzed, the signal is less clear (Fig. <ref>). Even so, except for the AC1 of the C2-band, the trends of all indicators are more often positive than negative. As for AMSR-E, the two bands in AMSR2 have different spatial trend distributions. For the C1-band, patches of positive trends, mostly in the variance and spatial correlation, are visible in the northeast as well as in the west. 
For the C2-band, the signals are stronger in the west but less pronounced in the northeast, and again strongest in the variance and spatial correlation. The spatial comparison of the trends in the three indicators in Fig. <ref>c) reveals that, based on the C1-band, the vegetation in the northeast of the Amazon basinhas lost resilience. Even though no clear signals in this region were found for the years 2002-2011 based on AMSR-E's C-band, its X-band does show destabilization in this region already in the years before 2012. Considering the C2-band, positive trends co-occur mainly in the very west, where both bands from AMSR-E indicate a loss of resilience for the years before. The spatially averaged indicator time series clearly increase over the period until around mid-2019. Interestingly, from then on until the end of the study period, all time series decrease, although the decrease in the spatial correlation is marginal compared to the previous increases.We restrict our study period to years before 2020 as the data for excluding human land use is not available thereafter, but using a less conservative analysis we find that the indicators continue to decrease in the 2020-22 period. Yet, this decrease could be explained by forest loss and the results for AMSR2 analyzed until 2022 cannot be relied on (see Fig. <ref>). Furthermore, AMSR2 also provides an X-band (10.7 GHz), for which results are shown in the supplement (Figs. <ref> and <ref>) as the theory implies that it is less suitable for biomass assessment in the Amazon than the C-bands.All results are robust with respect to the parametrization of the de-trending method STL (see Figs. <ref> and <ref>), the size of the sliding windows (see Figs. <ref> and <ref>), the measure of increase in the indicator time series (see Figs. <ref> and <ref>), and the maximum distance that defines `neighbors' (see Fig. <ref>).Even though they reveal a less dramatic picture of the condition of the Amazonian rainforest than that found by <cit.>, these results confirm the resilience loss found in that work, which was based on the AC1 indicator and the merged data set VODCA and started in the early 2000s. In this work, we use a number of different indicators of CSD as well as different observations, which makes our results especially robust. In particular, for all data sets considered the number of cells where all three indicators show a positive trend is more than double the expected number of 12.5 % (=0.5^3, with 29.5 % and 59.3 % for AMSR-E's C- and X-band and 24.5 % and 21.5 % for AMSR2's C1- and C2-band, respectively, see also Fig. <ref>a-d)). § CONCLUSIONSIn spatially coupled systems, the spatial correlation is expected to increase prior to a critical transition, establishing an indicator of CSD. In this work we first used a conceptual model to show that for a system with spatial extension and coupling similar to the ARF, variance, AC1 and spatial correlation increase as it approaches a critical transition. This is the case even if a cascade of tipping is induced by a single cell <cit.>. The simulations revealed that for strongly coupled cells where a transition is caused by a reduction of the incoming recycled moisture, spatial correlation is an especially reliable and early mean of detecting the loss of resilience and an approaching transition.Recent studies have shown that satellite data are appropriate only under certain conditions for investigating changes in the resilience of the ARF <cit.>. 
In particular, it has been shown that time series which combine different data sources might inherit artifacts resulting from the merging procedure <cit.>, and thus in our work we exclusively analyzed single sensor data.While time series of several decades would be favorable to capture long-term vegetation dynamics, shorter time series are still capable of sensing physiological responses to droughts and other environmental conditions. The sensors AMSR-E and AMSR2 provide acceptably long VOD time series (2002-2011 and 2012-2020, respectively), which we analyzed on a monthly resolution, following <cit.>. For the early 2000s we find an overall increase of the CSD indicators, with more striking signs of resilience loss in AMSR-E's X-band. The spatial pattern is consistent across the two bands, with the largest losses of resilience occurring in the southwest and north. From 2012 to 2020, AMSR2 data reveals a less clear picture. Yet, the cells in the C1-band where all three indicators increase reside mostly in the northeast, coinciding with the resilience loss detected by AMSR-E in the preceding years. The cells that are likely undergoing destabilization according to AMSR2's C2-band are concentrated in the southwest.Overall, even though the results differ somewhat for the individual data sets, we can conclude that the ARF's vegetation experienced a loss of resilience during the first two decades of the 21st century. More pronounced signals were found for the time period from 2002 to 2011, with the regions of destabilization comprising the western Amazon basin, the band along the northern boundary as well as the northeastern parts. Interestingly, the regions in the southwest where destabilization is detected in all data sets correspond to the regions downstream from the `atmospheric rivers'. Hence, they are highly dependent on moisture recycling, implying that they are considerably spatially coupled and found to be more vulnerable to tipping due to network effects <cit.>. In combination with the results from the conceptual model, which show that spatial correlation gives an especially reliable indication of CSD in a highly coupled sub-system, the spatial correlation can be considered the most reliable indicator in the southwest. This is in line with the fact that all data sets show increasing spatial correlation in parts of the southwestern Amazon. These increases could hint at a destabilization due to changes in incoming recycled moisture, which in return could be an effect of the high deforestation rates in the `arc of deforestation' further upstream of the `atmospheric rivers'.The main forcing affecting the ARF's vegetation and potential resilience changes is presumably the precipitation. Thus, it is essential to ensure that the detected changes in the vegetation's indicators of CSD are not a direct representation of corresponding changes in precipitation statistics. We thus calculate the same CSD indicators for the precipitation at each grid cell, and analyse the regions in which the sign of their trends agree with the sign of the trend in VOD CSD indicators (see Fig. <ref>). This analysis shows that the signals found in Fig. <ref> are not a consequence of statistical changes in precipitation. On the other hand, it is still possible that the vegetation resilience loss is related to a decrease in the annual precipitation. Yet Fig. <ref> shows that the signs of CSD cannot always be explained by negative trends in precipitation. 
However, this could be caused by a lag in time between the changes in precipitation and the resilience loss in vegetation. Furthermore, the lack of a negative trend in mean annual precipitation could still coincide with overall drying: in case of an increasing evapo-transpirative demand or, as many studies have suggested, shifts in the dry season length and strength, and increasing frequency of droughts that could be evened out by increases in precipitation during the rainy season or flood years <cit.>. Thus, further work is needed to better understand the interplay of causes that can drive the ARF towards a dieback. This work has focused on spatial correlation as an indicator of CSD, due to the spatially coupled structure of the ARF and the theoretical advantages this indicator shows in numerical experiments. Yet, multiple other potential (spatial) CSD indicators exist, such as spatial variance, spatial autocorrelation, spatial skewness <cit.>, or spatial permutation entropy <cit.>. Several of these would be applicable in this setting, but their thorough investigation and comparison was beyond the scope of this study. Still, a comparison of different spatial resilience indicators could improve our understanding of their applicability as well as their reliability to detect changes in resilience in the ARF, as well as in other spatially coupled ecosystems. If we are to robustly capture resilience changes, our efforts must be focused on a few key directions. First, long single-sensor time series are preferred to reliably trace the dynamics and potential resilience changes of vegetation ecosystems. Second, sensors and their derived VIs must be adequate to address the question of interest. To that end, it is important to find measures to assess the suitability of data sets. In the context of residuals-based resilience analyses, a sufficiently high signal-to-noise ratio is crucial. Furthermore, with respect to dense vegetation such as the ARF, it would be helpful to better understand problems induced by saturation in the VIs on their higher order statistics <cit.>. Third, the applicability of indicators of CSD in different settings must be better understood, such that the most suitable approaches can be chosen depending on the system analyzed. Overall, the complex changes we find in the ARF suggest that combining multiple datasets and indicators can give a clearer picture of the applicability of CSD, and the statistical robustness of trends in different parts of the Amazon. Our findings suggest that the previously found loss of resilience in the early 2000s <cit.> can, in parts, be confirmed by our approach and data, with less distinct signals for the years 2012 to 2020. Nevertheless, we find a destabilization of the vegetation in the ARF since the beginning of the century, independent of the data source or the indicator of resilience change, that is especially pronounced in the southwestern Amazon basin. § OPEN RESEARCH SECTION All data used in this study is publicly available. For this study, only cells within the Amazon basin (<https://worldmap.maps.arcgis.com/home/item.html?id=f2c5f8762d1847fdbcc321716fb79e5a>, accessed on January 28, 2021) are considered. Human Land Use is extracted from the MODIS Land Cover dataset <cit.> (MCD12C1, Version 6) available at <https://lpdaac.usgs.gov/products/mcd12c1v006/> (accessed on November 11, 2021), based on Land Cover Type 1 in percent.
The Hansen deforestation data <cit.> was downloaded on May 31, 2022, from <https://storage.googleapis.com/earthenginepartners-hansen/GFC-2021-v1.9/download.html>. The VOD from AMSR-E <cit.> (LPRM-AMSR_E_L3_D_SOILM3_V002, C- and X-band) and AMSR2 <cit.> (LPRM-AMSR2_L3_D_SOILM3_V001, C1- and C2-band) can be found at <https://hydro1.gesdisc.eosdis.nasa.gov/data/WAOB/LPRM_AMSRE_D_SOILM3.002> and <https://hydro1.gesdisc.eosdis.nasa.gov/data/WAOB/LPRM_AMSR2_D_SOILM3.001/> and were last accessed on November 8 and December 24, 2022, respectively. The precipitation data from CHIRPS <cit.> is available at <https://data.chc.ucsb.edu/products/CHIRPS-2.0/global_monthly/netcdf/> and was last accessed on November 16, 2022. It was downscaled to the same resolution as the VOD data by selecting only the grid cells matching VOD's grid cell centers (center of 5× 5 cells). § CONFLICT OF INTEREST/COMPETING INTERESTS The authors declare no competing interests. § AUTHOR CONTRIBUTIONS LB, CB, and NB conceived and designed the research. LB conducted the analysis and prepared a first version of the manuscript. All authors discussed results, drew conclusions and edited the manuscript. § SUPPLEMENTARY INFORMATION Supplementary figures accompany this paper in the supplementary material. § ACKNOWLEDGMENTS This is TiPES Contribution #223; the Tipping Points in the Earth System (TiPES) project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 820970. Furthermore, this work has received funding from the VolkswagenStiftung, from the Marie Sklodowska-Curie grant agreement No. 956170, by the Federal Ministry of Education and Research under grant No. 01LS2001A, the DFG Grant SM 710/2-1, by the DARPA AI-assisted Climate Tipping point Modelling (ACTM) project (No. HR0011-22-9-0031) and the Bezos Earth Fund (No. XXX). Supplementary Material for Spatial correlation increase in single-sensor satellite data reveals loss of Amazon rainforest resilience [NO \author GIVEN] January 14, 2024 =========================================================================================================================================§ ADDITIONAL INSIGHT FROM THE CONCEPTUAL MODEL §.§ Text S1. Stability in the conceptual model The interaction of vegetation and precipitation is plotted in Figure <ref>. In the conceptual model, vegetation is a function of absolute incoming precipitation only and is not dependent on the cell's location. Hence, it is the same for each cell and plotted as the single black solid line. Precipitation, on the other hand, is a function of the vegetation of the cell, but also of the vegetation of the cell's eastern neighbors. Hence, it differs for all cells, which are encoded by the colors as in Figure <ref>. For the calculation of one cell's precipitation as a function of vegetation, its eastern neighbors are considered to be in equilibrium. Furthermore, precipitation depends on the amount of moisture coming in from the Atlantic ocean, or in other words on the control parameter B. The solid lines represent the state in the beginning of the experiment with B = 1010, and the dashed lines in the end with B = 990. Where the black and colored lines cross at some (V^*, P^*), the corresponding value of vegetation supports the value of precipitation P^*, but V^* is also supported by P^*. Hence, this point is considered a fixed point, or equilibrium. As the moisture inflow is reduced in the simulations from 1010 (solid lines) to 990 (dashed lines), the fixed points with high vegetation values vanish, explaining the tipping in Figures <ref> and <ref>.
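To make the fixed-point picture above concrete, the following minimal Python sketch scans candidate vegetation values and locates the intersections of an assumed vegetation response curve with an assumed precipitation curve. The functional forms, parameter values, and helper names (vegetation_eq, precipitation, fixed_points) are illustrative placeholders and not the equations of the conceptual model used in this work; only the mechanism follows the description above, namely that intersections of the two curves define equilibria and depend on the inflow parameter B.

```python
import numpy as np

def vegetation_eq(P, P_half=600.0, k=0.02):
    # assumed saturating response: equilibrium vegetation supported by precipitation P
    return 1.0 / (1.0 + np.exp(-k * (P - P_half)))

def precipitation(V, V_east=1.0, B=1000.0, recycling=400.0):
    # assumed form: oceanic inflow B (modulated by upstream vegetation V_east)
    # plus moisture recycled by the cell's own vegetation V
    return B * (0.3 + 0.4 * V_east) + recycling * V

def fixed_points(B, V_east=1.0, n_grid=20001):
    # scan V in [0, 1] and return values where the vegetation supported by the
    # resulting precipitation equals V itself, i.e. the curve intersections (V*, P*)
    V = np.linspace(0.0, 1.0, n_grid)
    residual = vegetation_eq(precipitation(V, V_east, B)) - V
    crossings = np.where(np.diff(np.sign(residual)) != 0)[0]
    return V[crossings]

for B in (1010.0, 990.0):  # inflow at the start and end of the experiment
    print(B, fixed_points(B))
```

Reducing B shifts the precipitation curve downward, which is the mechanism by which the high-vegetation equilibrium can disappear in the simulations described above.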
§.§ Text S2. Precipitation decrease in the conceptual model The results of simulations with decreasing moisture inflow from the Atlantic ocean (decreasing B(t)) were presented in the main text in Figure <ref>. Figure <ref> zooms into cell-wise total precipitation and highlights its stochasticity as well as the almost linear decrease up to the Tipping Point of vegetation. §.§ Text S3. Deforestation scenario leads to Tipping in the conceptual model The results of simulations with decreasing moisture inflow from the Atlantic ocean (decreasing B(t)) were presented in the main text in Figure <ref>. Figure <ref> is identical but presents the results from simulations with unchanging inflow from the Atlantic, i.e. B(t)=1000 for all t. Instead, the amount of vegetation in the east-most cell is considered the bifurcation parameter and constantly decreased from 1 to 0, thereby mimicking deforestation. §.§ Text S4. Distribution of the trends for the three resilience indicators In the conceptual model, all indicators show clear trends when approaching the Tipping Point of vegetation, as illustrated in Figure <ref>. There, clearly all three indicators show a strong shift towards positive values with only very few negative ones for AC1. In a spatially extended system like the ARF, one would thus expect all three indicators to increase when the corresponding grid cell loses resilience. §.§ Text S5. Time of emergence of significant trends In Figure <ref> the time of emergence (ToE) was defined as the first time when a trend becomes significant. Yet, as a more conservative approach, one can define the ToE as the time when a trend becomes permanently significant, i.e. when all trends later in time are significant as well. The results with this definition are displayed in Figure <ref>. The uncertainty in the timing of this type of ToE differs much less for AC1 compared to the definition in the main text. This indicates that AC1 might be more prone to false alarms. § AMSR2 UNTIL 2022 EXCLUDED DUE TO MISSING HUMAN LAND USE DATA §.§ Text S6. The influence of forest loss for the years 2021 to 2022 Since the data sets for excluding HLU are only available until 2020, the VOD data by AMSR2 was only analyzed until 2020 even though it is available until present times. The maps in Figure <ref>a) and c) show the difference between the number of indicators with positive trends if analyzed until 2020 (as in the main text, Figure <ref>) and until 2022 for AMSR2's band C and X, respectively. Differences mainly occur where sudden changes in VOD after 2020 can have a strong influence on the detrending method STL. Since STL acts on sliding windows over the whole data set as is (to detrend) and over the separate months (for de-seasonalizing), the influence of strong changes after 2020 increases over time. This becomes apparent in the spatially averaged time series in b) and d), where all indicators start at the same level in both cases, whether the analysis is performed until 2020 (solid lines) or until 2022 (dotted lines), but diverge afterwards.
Such increasingly strong discrepancies in the period until 2020, where the data is equivalent, must stem from STL, which over time becomes more sensitive to strong changes in later time steps due to its sliding window approach. Yet, this is only the case for strong changes in the VOD after 2020, as it could result from forest loss and human interference. Hence, we argue that the exclusion of HLU based on the years until 2020 is not sufficient to analyze data thereafter. This explanation is supported by high rates of forest loss since the end of 2020, with more than 18,000 km^2 of deforestation and 27,000 km^2 of degradation registered by the Instituto Nacional de Pesquisas Espaciais's monitoring programs <cit.>. Hence, the results for AMSR2 analyzed until 2022 cannot be relied on (see Figure <ref>). § INVESTIGATION OF RESILIENCE LOSS IN AMSR2'S X-BAND §.§ Text S7. Resilience loss in AMSR2's X-band The X-band of AMSR2 results from the observation of a shorter wavelength compared to the two C-bands. This leads to less sensitivity to aboveground biomass, which is why the results are less likely to indicate whether the ARF is losing resilience or not. For completeness, Figures <ref> and <ref> show the results for the X-band. While the spatial correlation is more often increasing than not, which is comparable to the results in the C-bands, this is not the case for variance and AC1 (Figure <ref>). While the percentage of cells in Figure <ref>a) exhibiting a positive trend in all three indicators is still above what would be expected at random (12.5 %), it is far less than detected for AMSR2's C-bands. § RESULTS PROVING ROBUSTNESS §.§ Text S8. Robustness with respect to the parameterization of the STL algorithm The results in Figures <ref> and <ref> are based on the residual found by the STL algorithm with default parameterization. Yet, Figures <ref> and <ref> show that the analysis is not influenced by this choice, as the results are almost identical when the length of the seasonal smoother equals 13 (instead of 7). §.§ Text S9. Robustness with respect to the size of the sliding windows The choice of the sliding window size is difficult for short time series such as those of AMSR-E's and AMSR2's VOD. Yet, the results are robust with respect to it, as Figure <ref> based on sliding windows of 3 years is almost identical to Figure <ref> with windows of 5 years. This also holds for the summarizing Figure <ref> when compared to Figure <ref>. It is worth noting that the spatially averaged time series of the three indicators are noisier for the choice of smaller windows, which is due to the fact that the calculation of the indicators becomes more unstable when calculated on only 36 data points, corresponding to the 3 years. The right choice of the window size in an approach to resilience analysis like the one shown here is challenging, as the increase can only be determined reliably given a sufficient number of indicator time steps. The fact that the results are robust with respect to the window size gives confidence in the choice of 5 years as default. §.§ Text S10. Robustness with respect to the measure of change In the main text, any change in indicators is measured in terms of linear trends. Yet, another option would be Kendall's τ as a measure of the steadiness of an increase. Figure <ref> is the analogue of Figure <ref> but with the increase in the time series quantified in terms of Kendall's τ. In Figure <ref>, the increase in the indicators is quantified in terms of linear trends. The results are almost identical when measuring the indicators' increase by the Kendall τ rank coefficient.
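As an illustration of the two measures of change compared in Text S10, the short Python sketch below computes a lag-1 autocorrelation (AC1) indicator on 5-year sliding windows of a synthetic residual series and then quantifies its increase both by a linear-trend slope and by Kendall's τ. The toy residual (an AR(1)-type process with slowly increasing memory) and the window length of 60 monthly values are illustrative assumptions; in the analysis itself the residuals come from the STL de-trending and de-seasonalizing of the VOD time series.

```python
import numpy as np
from scipy.stats import kendalltau, linregress

def ac1(x):
    # simple lag-1 autocorrelation estimator for one residual window
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def sliding_indicator(residual, window=60):  # 60 monthly values = 5 years
    return np.array([ac1(residual[i:i + window])
                     for i in range(len(residual) - window + 1)])

rng = np.random.default_rng(0)
n = 240                                  # 20 years of monthly data
phi = np.linspace(0.2, 0.9, n)           # slowly increasing memory (toy CSD signal)
res = np.zeros(n)
for t in range(1, n):
    res[t] = phi[t] * res[t - 1] + rng.normal()

indicator = sliding_indicator(res)
time = np.arange(indicator.size)
slope = linregress(time, indicator).slope
tau, _ = kendalltau(time, indicator)
print(f"linear trend slope: {slope:.4f}, Kendall's tau: {tau:.2f}")
```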
§.§ Text S11. Robustness with respect to the distance defining neighboring cells The definition of the spatial correlation depends on the distance taken into account when taking the average correlation with neighboring cells. Figure <ref> shows the trends of the spatial correlation for the different VIs comparing different radii defining the set of neighboring cells. As expected, a greater number of neighbors spatially smooths the results, but does not influence the general spatial pattern of the trends of spatial correlation. The turquoise circle on the bottom of the map shows an exemplary circular band of neighboring cells. In the main text, all results are based on a radius of 100 km. § POTENTIAL DRIVING FORCES AND SOURCES OF BIAS §.§ Text S12. The influence of the South American Monsoon system as a potentially bistable system In the Amazon basin, the South American monsoon system and the atmospheric moisture recycling form a complex and potentially bi-stable system. Hence, the monthly precipitation itself could exhibit signs of CSD, which would then drive vegetation changes that are wrongly interpreted as the destabilization of the vegetation. In Figure <ref>, the signs of the trends of variance, AC1 and spatial correlation are compared between VOD and precipitation <cit.>. We argue that due to the high number of cells where the values differ in sign for the VIs and precipitation (sum of diagonal cells in boxes expected to be 50 % for random assignment), the observed changes in the indicators are likely a measure of critical slowing down in the vegetation and not only a direct effect of variability in precipitation. §.§ Text S13. The influence of changes in precipitation as a potential driving force If precipitation is the main driver of ARF resilience loss, one would expect that in regions with stronger decreases in precipitation (negative trend), more critical slowing down can be observed. This comparison is displayed in Figure <ref>. In parts, decreased precipitation could potentially explain the destabilization detected in the western Amazon basin by both bands of AMSR-E as well as in the western and northeastern regions detected by AMSR2's bands. However, it is important to note here that the number of cells with an increasing amount of precipitation and decreasing indicators and those with a decreasing amount of precipitation and increasing indicators add up to only about 50 %, which is what is to be expected for a random assignment of positive and negative trends in both. §.§ Text S14. The influence of changes in temperature Instead of decreasing precipitation (see Figure <ref>), an increase in temperature can also lead to vegetation destabilization through water stress by higher evapo-transpirative demand. If this was the driver of the CSD, increasing temperatures must coincide with positive trends in the indicators of CSD in Figure <ref>. Yet changes in temperature cannot explain the detected destabilization in the years 2002 to 2011. For the time period of AMSR2, destabilization detected in the northeast (band C1) and the east (band C2) might at least partially be accounted for by rising temperatures (the percentage of cells with rising/declining temperatures and in-/decreasing indicators falls between 48.6 % and 59.3 %). §.§ Text S15.
The potential bias induced by saturation In case of optical sensors, their saturation over high biomass regions often renders the VIs insensitive to the subtle changes due to perturbations that a resilience analysis based on CSD relies on. The argument is that with decreasing biomass, the VIs become more sensitive, which then translates into a signal of CSD as a direct artifact of the higher perception of perturbations. Even though VOD results from observations of microwaves, we want to address the question of saturation in VOD.As Figure <ref> shows, we cannot exclude this as an artifact for AMSR-E's band C as well as AMSR2's two bands, as they hardly observed any cells with increasing biomass. Interestingly, AMSR-E's band X detected an increase in biomass as well as a loss of resilience in most of the cells (77.3 % to 78.2 %). Thus, its signal of destabilization of almost the entire ARF during the years 2002 to 2011 is not an artifact of saturation of the band. | http://arxiv.org/abs/2310.18540v1 | {
"authors": [
"Lana L. Blaschke",
"Da Nian",
"Bathiany",
"Maya Ben-Yami",
"Taylor Smith",
"Chris A. Boulton",
"Niklas Boers"
],
"categories": [
"physics.geo-ph"
],
"primary_category": "physics.geo-ph",
"published": "20231027235542",
"title": "Spatial correlation increase in single-sensor satellite data reveals loss of Amazon rainforest resilience"
} |
[ Unveiling the Potential of ProbabilisticEmbeddings in Self-Supervised LearningDenis Janiak Jakub Binkowski Piotr Bielak Tomasz Kajdanowicz Wrocław University of Science and Technology]In recent years, self-supervised learning has played a pivotal role in advancing machine learning by allowing models to acquire meaningful representations from unlabeled data. An intriguing research avenue involves developing self-supervised models within an information-theoretic framework, but many studies often deviate from the stochasticity assumptions made when deriving their objectives. To gain deeper insights into this issue, we propose to explicitly model the representation with stochastic embeddings and assess their effects on performance, information compression and potential for out-of-distribution detection. From an information-theoretic perspective, we seek to investigate the impact of probabilistic modeling on the information bottleneck, shedding light on a trade-off between compression and preservation of information in both representation and loss space. Emphasizing the importance of distinguishing between these two spaces, we demonstrate how constraining one can affect the other, potentially leading to performance degradation. Moreover, our findings suggest that introducing an additional bottleneck in the loss space can significantly enhance the ability to detect out-of-distribution examples, only leveraging either representation features or the variance of their underlying distribution. § INTRODUCTIONIn recent years, self-supervised learning methods have gained prominence in computer vision, enabling the utilization of abundant unlabeled data <cit.>. Contrastive methods <cit.>, which train the model to discriminate positive and negative samples from a batch of examples, proved to be very successful in many downstream tasks, facilitating rapid advancement in the domain. However, these methods still have some inherent constraints, such as the necessity of mining negative examples. Non-contrastive techniques have addressed this issue through strategies such as feature decorrelation and information maximization <cit.>, as well as distillation and architectural constraints <cit.>. These approaches have effectively ensured uniformity <cit.> and prevented representation collapse <cit.>. Methods employing feature decorrelation to prevent representation collapse are of particular interest, as they are closely linked to an information-theoretic framework, which provides a formal means of quantifying the information content and redundancy in data, facilitating a deeper understanding and analysis of these techniques.In many real-world scenarios, the abundance of data comes with inherent uncertainties. For instance, images captured in the wild might possess inherent aleatoric uncertainties due to factors like low resolution, imperfect cropping, or angle <cit.>. Recent machine learning advancements emphasize quantifying these uncertainties, crucial for safety-critical applications, like medical imaging <cit.>, or active learning <cit.>.In self-supervised learning, probabilistic embeddings present a promising avenue to model and leverage such uncertainties effectively,bridging the gap between data invariance and robust representation. 
The inherent flexibility of predicting a distribution over the embedding space, instead of deterministic point estimates, enables models to function effectively in environments laden with uncertainty or noise <cit.>.However, there remain challenges in seamlessly integrating probabilistic embeddings with self-supervised techniques, which often leads studies to deviate from the stochasticity assumptions set out in their objectives <cit.>. This paper, hence, aims to explore these intricacies, shedding light on the harmonization of probabilistic embeddings with decorrelation-based self-supervised methods and the consequent effects on out-of-distribution detection.In our paper, we introduce probabilistic embeddings to feature decorrelation-based self-supervised methods (Barlow Twins <cit.>, VICReg <cit.>) and explore their effect within either the representation or loss space.Our contributions are as follows: * We demonstrate that using probabilistic embeddings in the loss space (Z) produces results equivalent to deterministic methods; while using them in the representation space (H) results in a bottleneck, adversely affecting downstream task performance.* We examine the mutual information between input, representation, and loss spaces and conjecture that the aforementioned decline is caused by the representation bottleneck that prioritizes data invariances while compromising generalization. Empirical evaluations support our hypothesis.* We showcase our method's capability in detecting out-of-distribution samples using the variance of the embedding distribution, outperforming both label-free (Mahalanobis) and label-based (MaxSoftmax, ODIN) detectors.§ RELATED WORKSSelf-supervised learning (SSL) The primary objective of SSL is to optimize a specific loss function tailored to capture meaningful patterns or relationships within unlabeled data. This loss function is crafted to create surrogate tasks, such as predicting missing parts of the data, rotations, or other data transformations, that encourage the network to learn useful and invariant features from the input data.For instance, in contrastive learning (<cit.>, <cit.>), the goal is to maximize the agreement between positive (similar) pairs of data samples while minimizing it for negative (dissimilar) pairs. This strategy prompts the network to ensure that similar data instances have representations close to each other in a high-dimensional space, simultaneously pushing dissimilar samples apart. On the other hand, non-contrastive methods adopt various forms of mechanism to prevent representation collapse, eliminating the need for negative samples <cit.>. This could be some architectural constraints <cit.>, clustering-based objective <cit.>, or feature decorrelation <cit.>.In our investigation, we concentrate on the feature decorrelation-based methods, i.e., Barlow Twins <cit.> and VICReg <cit.> methods, due to their close connection with the information-theoretic framework <cit.> and their underlying assumption about data distribution.Probabilistic embeddings (PEs) PEs predict a distribution of embeddings rather than a point estimate vector, providing benefits like stable training on noisy data, accurate aggregation and allowing out-of-distribution detection using predicted uncertainty <cit.>. They have found applications in diverse areas like face and word embeddings, particularly where understanding representation uncertainty or its intrinsic hierarchies is key <cit.>. 
In addition to their standalone utility, they are often combined with contrastive loss methods <cit.>. In the context of self-supervised learning, Prob-CLR <cit.> integrates PEs into the SimCLR architecture, enhancing its clustering capabilities, but makes assumptions that simplify objectives at the cost of uncertainty estimation. Another work <cit.> introduces a probabilistic variant of the InfoNCE method, which captures the true generative process posteriors and their underlying variances, offering a more human-aligned understanding of uncertainty. Motivated by these results, we investigate applying PEs to Barlow Twins and VICReg models. Out-of-distribution (OOD) detection in SSL Self-supervised learning methods have shown the capacity to enhance robustness and uncertainty estimation. <cit.> successfully leverages features derived from a contrastively learned model to identify OOD examples. Other studies have explored directions such as combining self-supervised with supervised learning objectives <cit.>, using hard data augmentations to push away samples <cit.> or incorporating probabilistic modeling to derive uncertainty estimates <cit.>. In particular, <cit.> models the embeddings with a von Mises-Fisher distribution, where the concentration parameter serves as an uncertainty metric. Similarly, <cit.> examines SimSiam <cit.> within the variational inference framework and utilizes a power spherical distribution to characterize the distribution of embeddings. In our work, we employ a similar methodology, utilizing the variance of embeddings as a measure of uncertainty. Representation vs. loss space In self-supervised learning, a common practice is to utilize a non-linear projection head g(h), a neural network component that maps the feature representations (H) to a space (Z) where the contrastive loss can be optimized more effectively (see Figure <ref>). Our study highlights the vital distinction between representation and loss space. <cit.> showed that the projection head serves as a low-rank mapping, identifying and mapping certain features to optimize contrastive loss. This is supported by <cit.>, who found the nonlinear projection head filters out discriminative features. Layers closer to the loss space lose more information, hindering generalization. <cit.> revealed that misalignment between self-supervised learning and classification tasks is a primary cause for certain phenomena.§ INFORMATION-THEORETIC BACKGROUND The information-theoretic perspective provides essential insights into the underlying mechanics of self-supervised learning. While certain models, such as InfoMax <cit.> and Deep InfoMax <cit.>, advocate for maximal data information capture, the Information Bottleneck (IB) principle calls for a delicate balance between informativeness and compression <cit.>. Our work draws from the multi-view information bottleneck framework <cit.>, aiming to capture the essential predictive information shared across different data views. Barlow Twins This technique can be seen as an IB method, where we aim to maximize the information between the image and representation while minimizing the information about data augmentation, essentially making the representation invariant to distortions.
This objective can be represented by the following equation: ℒ = I(Z; V) - β I(Z; X) = [ℋ(Z) - ℋ(Z | V)] - β[ℋ(Z) - ℋ(Z | X)] = ℋ(Z | X) + (1-β)/β ℋ(Z) = 𝔼_x [log |Σ_z|x|] + (1-β)/β log |Σ_z|, where X, V and Z represent original images, augmented views and embeddings, respectively, while I(·;·) and ℋ denote the mutual information and the entropy. If we assume a Gaussian distribution for the embeddings, the entropy terms within the objective can be reduced to the log-determinant of their corresponding covariance functions. Notably, Barlow Twins does not optimize covariance matrices directly but instead uses a proxy objective (see Section <ref>). Additionally, the IB formulation in this context doesn't directly address the multi-view characteristic intrinsic to self-supervised learning. VICReg This method, on the other hand, can be contextualized within the multi-view perspective <cit.>, and it is possible to derive the VICReg objective from an information-theoretic standpoint, leveraging a lower bound derived from <cit.>. It can be expressed as maximizing the information between the views and their corresponding embeddings: max. I(Z; V^') = ℋ(Z) - ℋ(Z | V^') ≥ ℋ(Z) + 𝔼_v,z|v,v^',z^'|v^'[log q(z|z^')] Notably, the most challenging aspect of this perspective lies in estimating the entropy term ℋ(Z), which is generally computationally intractable. Again, the covariance and variance terms within the VICReg framework serve as proxies for maximizing the entropy term. This can be achieved by diagonalizing the covariance matrix, e.g., by increasing values along its diagonal while pushing the off-diagonal terms towards zero. § METHODOLOGY §.§ Self-supervised learning framework First, let us formalize the feature decorrelation-based SSL framework setup for point estimate embeddings. We sample an image x from a dataset 𝒟 and create two views v and v^' by applying transforms t and t^' sampled from a distribution 𝒯. These views are then fed into an encoder f_θ, parameterized by θ, to create representations h=f_θ(v) and h^'=f_θ(v^'). Next, the representation vectors are passed through the projector g_ϕ, parameterized by ϕ, to obtain embeddings z=g_ϕ(h) and z^'=g_ϕ(h^'). The loss function ℒ(·, ·) is then applied to these embeddings z and z^'. In general, we can define the loss function for these methods as: ℒ(Z, Z^') = ℒ_inv(Z, Z^') + ℒ_reg(Z, Z^'), where Z = [z_1, …, z_n] and Z^' = [z^'_1, …, z^'_n] represent a batch of n embedding vectors of dimension d for sampled image views. Barlow Twins In this method, the loss function is computed using the cross-correlation matrix R on embeddings, which are mean-centered along the batch dimension: R_ij = corr(∑_b Z_bi, ∑_b Z^'_bj) = cov(∑_b Z_bi, ∑_b Z^'_bj)/σ_∑_b Z_biσ_∑_b Z^'_bj, where b indexes batch samples, i and j index the embedding vector dimensions, and σ denotes standard deviation. From this matrix, we compute the invariance term: ℒ_inv(Z, Z^') = ∑_i(1-R_i i)^2, and the regularization (feature decorrelation) term ℒ_reg(Z, Z^') = λ∑_i ∑_j ≠ i R_i j^2, where λ is a loss scaling coefficient. VICReg In contrast, VICReg calculates the decorrelation term from a covariance matrix: ℒ_cov(Z) = 1/d∑_i ∑_j ≠ i C_ij^2, which is simply the sum of the squared off-diagonal coefficients of the covariance matrix C_ij = cov(∑_b Z_bi, ∑_b Z^'_bj). To prevent collapse, VICReg adds a variance regularization term to the loss: ℒ_var(Z) = 1/d∑_i=1^d max(0, γ-σ_∑_b Z_bi + ϵ), which is a hinge function that operates on the standard deviation of the embeddings across the batch dimension.
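The following PyTorch sketch mirrors the loss terms just defined: the batch-normalized cross-correlation with its invariance and redundancy-reduction terms for Barlow Twins, and the variance hinge, covariance penalty and mean-squared-error invariance term for VICReg. The coefficient values (lam, alpha, tau, nu, gamma, eps) are placeholders rather than the settings used in the experiments, and minor normalization details (e.g. biased vs. unbiased variance) are simplified.

```python
import torch
import torch.nn.functional as F

def barlow_twins_loss(z1, z2, lam=5e-3):
    # z1, z2: (n, d) embeddings of the two views, standardized along the batch dimension
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    R = z1.T @ z2 / n                                    # (d, d) cross-correlation matrix
    invariance = (1.0 - torch.diagonal(R)).pow(2).sum()  # sum_i (1 - R_ii)^2
    off_diag = R - torch.diag(torch.diagonal(R))
    redundancy = off_diag.pow(2).sum()                   # sum over i != j of R_ij^2
    return invariance + lam * redundancy

def vicreg_regularizers(z, gamma=1.0, eps=1e-4):
    n, d = z.shape
    std = torch.sqrt(z.var(dim=0) + eps)
    var_term = F.relu(gamma - std).mean()                # hinge on per-dimension std
    zc = z - z.mean(0)
    cov = zc.T @ zc / (n - 1)
    off_diag = cov - torch.diag(torch.diagonal(cov))
    cov_term = off_diag.pow(2).sum() / d                 # squared off-diagonal covariances
    return var_term, cov_term

def vicreg_loss(z1, z2, alpha=25.0, tau=25.0, nu=1.0):
    invariance = (z1 - z2).pow(2).sum(dim=1).mean()      # mean squared L2 distance over the batch
    v1, c1 = vicreg_regularizers(z1)
    v2, c2 = vicreg_regularizers(z2)
    return alpha * invariance + tau * (v1 + v2) + nu * (c1 + c2)
```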
VICReg utilizes γ = 1 and a small ϵ value to avoid numerical instabilities. Both regularization terms are calculated separately for Z and Z^' using τ and ν as loss coefficients: ℒ_reg(Z, Z^') = τ [ℒ_var(Z) + ℒ_var(Z^')] + ν [ℒ_cov(Z) + ℒ_cov(Z^')] The invariance term is calculated using a mean-squared error loss with α as a loss coefficient: ℒ_inv(Z, Z^') = α/n∑_i=1^n ‖Z_i-Z_i^'‖_2^2. §.§ Probabilistic embeddings We propose to extend the aforementioned SSL frameworks to probabilistic embeddings. Drawing inspiration from the works of <cit.> and <cit.>, we can reformulate our self-supervised objective as an information maximization problem. We aim to maximize the mutual information between the views and their corresponding embeddings, i.e., I(Z; V^') and I(Z^'; V). We utilize the following lower bound: I(Z; V^') = ℋ(Z) - ℋ(Z|V^') ≥ ℋ(Z) + 𝔼_v^'[log q(z|v^')] ≥ ℋ(Z) + 𝔼_z|v[𝔼_z^'|v^'[log q(z|z^')]] From <cit.>, we know that the first term in this lower bound, ℋ(Z), is implicitly optimized by our regularization term ℒ_reg. Furthermore, we address the second term in this lower bound by optimizing the invariance term ℒ_inv. Therefore, we recover the objective from Eq. <ref>. Expectations from Eq. <ref> are evaluated over the empirical data distribution. Specifically, we obtain the expectations by backpropagating through K Monte Carlo (MC) samples using the reparametrization trick <cit.>: 𝔼_z|v[𝔼_z^'|v^'[log q(z|z^')]] ≃ 1/nK∑_i=1^n∑_k=1^K log q(z_ik|z^'_ik). Similarly to Eq. <ref>, we estimate the expected value of the regularization loss by evaluating the posteriors according to the specific model. Stochastic loss space (Z-prob.) In this model variant, we first deterministically encode the image view v into the representation h using a deterministic encoder f_θ. Then we sample z based on h using the stochastic projector q_ϕ(z|h) (which follows a Normal distribution). For ease of notation, we use q'(z|v) defined as follows: q'(z|v) = q_ϕ(z|f_θ(v)), z ∼𝒩(z | μ_ϕ(h), σ_ϕ^2(h)I) We apply the same procedure to the second image view v^' to produce the representation h^' and the embedding z^', utilizing the same encoder and projector parameters θ and ϕ. Stochastic representation space (H-prob.) When considering probabilistic embeddings within the representation space H, our derivation must also account for the presence of h. Let's assume the following joint distribution q(v, h, z): q(v, h, z) = q(z|v, h) q(h|v) q(v). Given that z depends on h, computing q(z|v) necessitates marginalizing out h. Consequently, the expectation term from Eq. <ref> can be expressed as: 𝔼_v^'[log q(z|v^')] ≐𝔼_h, v^'[log q(z|h, v^')]. We sample h using the stochastic encoder q_θ(h|v), which follows a Normal distribution: h ∼𝒩(h | μ_θ(v), σ_θ^2(v)I) Then we obtain the embedding z by mapping the representation h with a projector, g_ϕ. We apply the same procedure to the second image view v^' to produce the representation h^' and the embedding z^', utilizing the same encoder and projector parameters θ and ϕ.
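A minimal PyTorch sketch of such a Gaussian head is given below; the same module can play the role of the stochastic projector q_ϕ(z|h) in the Z-prob. variant or of the stochastic encoder output q_θ(h|v) in the H-prob. variant. The layer widths, the use of a predicted log-variance, and the default of 12 samples are illustrative choices rather than the exact implementation; the KL term it returns is the per-example divergence to a standard Normal prior, used by the regularization described next.

```python
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Predicts a diagonal Gaussian N(mu, sigma^2 I) and draws reparameterized samples."""
    def __init__(self, in_dim=512, out_dim=1024):
        super().__init__()
        self.mu = nn.Linear(in_dim, out_dim)
        self.log_var = nn.Linear(in_dim, out_dim)

    def forward(self, x, n_samples=12):
        mu, log_var = self.mu(x), self.log_var(x)
        std = torch.exp(0.5 * log_var)
        # reparameterization trick: K Monte Carlo samples, shape (K, batch, out_dim)
        eps = torch.randn(n_samples, *mu.shape, device=x.device)
        samples = mu + std * eps
        # KL( N(mu, sigma^2) || N(0, I) ) per example, summed over dimensions
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0).sum(dim=-1)
        return samples, kl
```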
Regularization Moving from point estimates to probabilistic embeddings, we introduce an additional layer of uncertainty, which helps capture the inherent ambiguity and variability in the data. However, it also raises the challenge of regularizing this stochasticity to prevent trivial solutions and obtain reliable uncertainty estimates. To address this issue, we follow the Variational Information Bottleneck (VIB) framework <cit.> and add an additional regularization term to the loss function in the form of a KL divergence between the probabilistic embeddings q(·|v) and q(·|v'), and a predefined prior q̂(·), typically 𝒩(0, 1): ℒ_div = β/2 [KL(q(·|v) || q̂(·)) + KL(q(·|v') || q̂(·))]. This regularization, controlled by the β parameter, acts as a bottleneck, constraining the capacity of our probabilistic embeddings, which has been shown to be effective in previous work <cit.> in terms of improving robustness and disentanglement. The overall modified loss function becomes: ℒ = ℒ_inv + ℒ_reg + ℒ_div. § EXPERIMENTS §.§ Setup We pre-train our model in a self-supervised manner (without labels) on the ImageNet ILSVRC-2012 dataset <cit.>. We adopt the same image augmentations and closely adhere to the original works in determining the loss coefficients. We opt for the smaller ResNet-18 <cit.> architecture as our backbone encoder and a smaller projector. Our experimental setup involves training the model for 100 epochs with a batch size of 256 using the AdamW <cit.> optimizer. We rely on an 𝒩(0, 1) prior for the Barlow Twins and VICReg methods across both models (H- and Z-prob.) and use 12 Monte Carlo (MC) samples. For more details, see Appendix <ref>. A more comprehensive exploration of PE hyperparameters is conducted in the ablation study (Section <ref>). §.§ ImageNet evaluation We follow the evaluation procedure of <cit.>, as laid out in the original works of Barlow Twins and VICReg, and report the results for linear classification and semi-supervised learning tasks. For the linear classification, we train a linear classifier on the frozen representation from our pre-trained backbone encoder. Importantly, for the H-prob. method, which yields an embedding distribution, we compute the final representation as an average over posterior samples (see Eq. <ref>). For the semi-supervised learning task, both the backbone encoder and a linear classifier are fine-tuned. We utilize ImageNet subsets corresponding to 1% and 10% of the labels <cit.>. A detailed procedure can be found in Appendix <ref>. Our training process was conducted once due to computational constraints and training stability <cit.>. The results are presented in Table <ref>. From our observations, the probabilistic embeddings in the loss space (Z-prob.) generally outperform the probabilistic embeddings in the representation space (H-prob.) for both the Barlow Twins and VICReg methods, particularly in the linear classification task. This phenomenon is further explained in Section <ref>, where we demonstrate that this distinction arises due to the significant amount of information shared between the representation and loss spaces. It is noteworthy that the difference becomes smaller in the semi-supervised task, especially for 1% of available labels. §.§ Transfer learning To further verify the implications of probabilistic embeddings, we perform transfer learning experiments <cit.>. Specifically, we freeze the encoder pre-trained on ImageNet and train a single linear layer on top of it. We compare probabilistic embeddings (H- and Z-prob.) to their deterministic counterparts for the Barlow Twins and VICReg methods. To this end, we utilize three datasets for evaluation: INaturalist <cit.>, SUN397 <cit.>, and Flowers-102 <cit.>. The detailed hyperparameters are in Appendix <ref>. The results are shown in Table <ref>. As in the previous experiments, Z-prob.
embeddings yield results on par with the Deterministic approach, while for H-prob. we observe a performance degradation.§.§ Out-of-distribution detection To investigate the out-of-distribution capabilities of our probabilistic embeddings, we follow a similar evaluation procedure to <cit.> and train our models on the CIFAR-10 dataset <cit.>. We consider the original test set of CIFAR-10 as IN data and assess its ability to distinguish between other OUT datasets, such as Textures <cit.>, TinyImageNet(crop, resized) <cit.> and LSUN(crop, resized) <cit.>. We introduce detectors based on the first two moments of the embeddings' variance (denoted as SigmaMean and SigmaStd) and compare them to other detectors.We report the averaged AUROC metric over three runs and all OUT datasets – see Table <ref>.As observed, leveraging intrinsic properties of stochastic embeddings, such as their variance, can be highly effective as an OOD detector. In some instances, it matches or surpasses the performance of detectors relying on label information. More OOD experiment details can be found in Appendix <ref>.§ ABLATIONS The ablation study was performed on the CIFAR-10 dataset. Each model was trained on three different seeds for 200 epochs with a batch size of 256. Appendix <ref> provides a more detailed setup for the ablation study. Prior We compare 𝒩(0, 1) prior to a Mixture of Gaussians (MoG) to study the effect of using a more expressive distribution for modeling our probabilistic embeddings. The MoG prior has the following form: 1/M∑_m=1^M 𝒩(μ_m, diag(σ_m^2)), where M denotes the number of mixtures, while μ_m and σ_m denote trainable parameters of a specific Gaussian in the mixture model. The results are shown in Table <ref>. We can see that contrary to our intuition, the effect of MoG on performance is insignificant, often degrading the model's efficacy. Beta scale We study the effect of different β scales on model performance. As mentioned in Section <ref>, this term controls the bottleneck and, therefore, the capacity of the embeddings. The results are presented in Table <ref>. We observe better model performance for sufficiently small β, while higher values may deteriorate the model's efficacy. However, by reducing β too much, the variance of the embeddings reduces accordingly, making the embeddings more deterministic, as shown in Appendix <ref>.Number of Monte Carlo samples In this experiment, we study the benefit of using multiple MC samples to estimate the expectation from Eq. <ref>. Following the VIB framework, we use either 1 or 12 samples. The results of this experiment are presented in Table <ref>.For the H-prob. embeddings, increasing the number of MC samples improves model performance, particularly with higher values of β. This indicates that using more MC samples offers a better and less biased estimation of the expectations. Conversely, the number of MC samples seems to have an insignificant effect on the Z-prob. embeddings.§ INFORMATION COMPRESSION In this section, we analyze the information compression factor of probabilistic embeddings. As they introduce an additional bottleneck to the network, we aim to explore its real effect on the amount of information conveyed between these pairs of spaces: image views (V), representations (H) and embeddings (Z).To this end, we employ Mutual Information Neural Estimation (MINE) <cit.>, which provides an efficient and scalable way to estimate mutual information between the aforementioned spaces. 
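For reference, a compact sketch of the Donsker-Varadhan lower bound that MINE optimizes is shown below, with a statistics network matching the description given in the appendix (two hidden layers of width 1024 with ReLU, followed by a linear map to a scalar). Marginal samples are obtained by shuffling one variable within the batch; this is a simplified illustration and omits details such as the bias-corrected gradient estimator used in practice.

```python
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    def __init__(self, dim_x, dim_y, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def mine_lower_bound(T, x, y):
    """I(X; Y) >= E_joint[T(x, y)] - log E_marginal[exp(T(x, y))]."""
    joint = T(x, y).mean()
    y_shuffled = y[torch.randperm(y.size(0))]      # break the pairing -> product of marginals
    t_marg = T(x, y_shuffled).flatten()
    marginal = torch.logsumexp(t_marg, dim=0) - math.log(t_marg.numel())
    return joint - marginal
```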
The relationship between the mutual information I(V; H) and I(H; Z) for the Barlow Twins method is illustrated in Figure <ref>.We hypothesize that the reason for the weaker performance of H-prob. embeddings is high mutual information between representation and loss space I(H;Z).We believe that is due to the constrained representation, containing more discriminative features relevant to the contrastive loss, losing generic information leveraged by a downstream task. Nonetheless, Z-prob. embeddings contain more information about the image views I(V;H), which helps them mitigate such effect (see <cit.>).Furthermore, we observe that using more MC samples enhances the model's performance for higher values of β. We attribute this improvement to the increased amount of information shared between the two views, z and z^'. According to <cit.>, this information is the predictive information of representation, which corresponds to the invariance term in the SSL loss function (see Section <ref>).Figure <ref> shows that for a higher number of MC samples, the information I(Z;Z^') increases and invariance loss decreases accordingly. Even though the MC samples exacerbate the regularization loss, we can improve the model's performance. The extensive results regarding the mutual information are given in Appendix <ref>. § CONCLUSIONSIn this work, we unveiled the potential of probabilistic embeddings in self-supervised learning. We introduced probabilistic embeddings to feature decorrelation-based methods, specifically Barlow Twins and VICReg. When applied in the loss space (Z-prob.), these embeddings performed on par with the deterministic counterparts. However, their incorporation into the representation space (H-prob.), posed challenges, leading to compromised downstream performance. We performed a thorough analysis of mutual information in the network and posit that this is due to an overemphasis on data invariance and, in the aftermath, lower generalization. Importantly, our method exhibited a robust ability to detect out-of-distribution samples, even outperforming certain label-based detectors. This showcases the potential of our approach and suggests avenues for future research in self-supervised learning optimization.§ DETAILED PROCEDURE AND HYPERPARAMETERS FOR EXPERIMENTS §.§ Pre-training In this section, we provide a comprehensive overview of the experimental framework, expanding on the preliminary details shared in Section <ref>. We pre-trained our model using a self-supervised approach on the ImageNet ILSVRC-2012 dataset <cit.>. In our experiments, we opted for a smaller encoder, ResNet-18, compared to the original Barlow Twins and VICReg studies. Additionally, we used a reduced embedding dimensionality of 1024 and a smaller batch size of 256. We pre-trained the model for a total of 100 epochs. Consequently, we employed a more compact projector network consisting of three fully connected linear layers, each with a dimensionality of 1024, where the initial two layers are followed by batch normalization and ReLU activation. This decision to employ smaller networks and embeddings is primarily driven by the computational constraints we faced during our experiments. However, we posit that our findings can be generalized to larger networks <cit.>. Smaller batch size is driven by our observation that it yields superior results with fewer epochs (based on performed hyperparameter search). 
Owing to the unusual behavior exhibited by the SGD optimizer in the self-supervised learning process when using stochastic embeddings, we instead utilized the AdamW <cit.> optimizer. The SGD exhibited low sensitivity to the β scale hyperparameter but higher to the learning rate, e.g., the variance of the embeddings was dictated rather by the learning rate and not the β scale.We set the learning rate to 1 × 10^-3 and applied a weight decay of 1 × 10^-4. Furthermore, we employed a cosine decay scheduler with two warmup epochs, during which the learning rate increases from 0 to 1 × 10^-3, and then gradually scaled down the learning rate to a final value of 5 × 10^-4.§.§ Linear and semi-supervised evaluation on ImageNet Our experimental approach for linear classification and semi-supervised experiments is similar to methodologies outlined in previous studies <cit.>.Specifically, for the linear classifcation task, the backbone model (ResNet-18), which was pre-trained under both deterministic and probabilistic conditions, remains fixed while we train a single linear layer atop it. We determined the number of epochs using a validation set. If a validation set was unavailable or utilized as a test set, we executed a stratified split, allocating 20% of the training data. We settled on 100 epochs and chose a batch size of 512. For the optimization of the linear layer, we employed the AdamW optimizer <cit.>, initiating with a learning rate of 1 × 10^-2 and decreasing it by a factor of 0.1 in evenly-spaced epochs: 30, 60 and 90 (until it reaches 1 × 10^-5). The weight-decay parameter was configured at 1 × 10^-4. Before passing the representation vector to the linear layer, we performed L_2 normalization, aiming to minimize the impact of scale in the evaluated models. For semi-supervised experiments, we unfroze the backbone model and trained it jointly with the linear layer. For optimization, we utilized the AdamW optimizer. The initial learning rate was set at 1 × 10^-3 for the linear layer and 1 × 10^-4 for the backbone, with both rates decreasing by a factor of 0.1 following a cosine decay schedule. The weight-decay was set to 1 × 10^-5. The training was conducted with a batch size of 256 across 50 epochs.§.§ Transfer Learning Similarly to experiments on ImageNet, we have based our experimental protocol for transfer learning experiments on the ones used in previous works <cit.>. In particular, the backbone model, pre-trained in deterministic and probabilistic settings, is frozen, and one linear layer is trained on top of it with three datasets: INaturalist <cit.>, SUN397 <cit.>, and Flowers-102 <cit.>. We use the same protocol as for linear classification. We leveraged AdamW with learning rate starting from 1 × 10^-2 and decaying by 0.1 in evenly-spaced steps to 1 × 10^-5; weight-decay parameters was set to 1 × 10^-4. We determined the number of 50, 50, and 80 epochs for INaturalist, SUN397, and Flowers-102, respectively. As for batch size, we used 256 for INaturalist and SUN397 and 128 for Flowers-102.§.§ Ablation study In terms of the model's architecture, our configuration during ablation studies on CIFAR-10 mirrors that of our principal ImageNet experiments. Specifically, we employ a ResNet-18 encoder as a backbone, complemented by a non-linear projection head. This head comprises three fully connected linear layers, each featuring a dimensionality of 1024. Notably, the first two of these layers incorporate batch normalization and ReLU activation functions. 
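For concreteness, the backbone and projection head described above can be assembled as in the following sketch (using torchvision's ResNet-18, whose representation is 512-dimensional); it mirrors the stated architecture but is not the authors' code.

```python
import torch.nn as nn
from torchvision.models import resnet18

def make_projector(in_dim=512, dim=1024):
    # three fully connected layers of width 1024; the first two followed by BatchNorm and ReLU
    return nn.Sequential(
        nn.Linear(in_dim, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
        nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
        nn.Linear(dim, dim),
    )

backbone = resnet18(weights=None)
backbone.fc = nn.Identity()   # the 512-dimensional representation h is taken before the classifier
projector = make_projector()  # maps h to the 1024-dimensional embedding z
```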
For the optimization process, we adopted the AdamW optimizer with a weight decay of 1 × 10^-4. Our initial learning rate was set at 1 × 10^-3 with a cosine decay scheduling, which gradually scaled down the learning rate to a final value of 1 × 10^-5. We set the batch size to 256 and trained the model on three different seeds for 200 epochs. The variability in the loss function's magnitude and method-specific sensitivities necessitated the selection of distinct beta (β) scale hyperparameters for each approach, as documented in Table <ref>.In the prior ablation study, we reported results (Table <ref>) averaged over three separate runs and a set of {1, 12} MC samples. For the beta scale ablations, we favored the 𝒩(0, 1) based on its superior performance. The results, once again, were averaged over three distinct runs using the same MC sample set, 1, 12. Lastly, for the number of Monte Carlo samples ablation, we employed the 𝒩(0, 1) prior, showcasing results for both 1 and 12 MC samples, with their averages derived from three unique runs. § PROBABILISTIC EMBEDDINGS §.§ Variance of the embeddingsIn Section <ref>, we highlighted that decreasing the value of beta correspondingly reduces the variance of the embeddings, pushing them towards deterministic embeddings.To illustrate this, we plotted the estimated density of the mean variance of the embeddings in Figures <ref> and <ref>. These plots are based on models from the ablation study, which were trained on the CIFAR-10 dataset. We have observed that by setting a particularly low value for β, the embeddings tend to be almost entirely deterministic. While this is not ideal — as it negates the benefits of stochasticity inherent to probabilistic embeddings — there exists a balance. The challenge lies in minimizing the variance of the embeddings, which typically enhances performance while maintaining sufficient stochasticity to harness the benefits of their probabilistic nature. §.§ IN-distribution uncertaintiesWe delved into the relationship between uncertainty estimates and model predictions, aiming to discern if the variance of the embeddings (sigma) could serve as an uncertainty measure for examples from the in-distribution dataset. Our specific interest was to see if the variance would be higher for more challenging examples where the model is prone to making errors. Figures <ref> and <ref> illustrate the mean variance values across different model variants on the CIFAR-10 dataset. We can observe that, in most cases, the distribution of mean variance of the embeddings is actually shifted, with incorrect predictions often being assigned a higher variance.§ OUT-OF-DISTRIBUTION DETECTION In this section, we present a comprehensive set of results for the out-of-distribution detection task. The evaluation methodology remains consistent with that described in Section <ref>. Tables <ref> and <ref> display the results for both the VICReg and Barlow Twins methods, taking into account various hyperparameters, including the choice of prior, the beta (β) scale, and the number of MC samples. The results have been averaged over three runs using distinct seeds and across all OOD datasets specified in Section <ref> (Textures <cit.>, TinyImageNet(crop, resized) <cit.> and LSUN(crop, resized) <cit.>). The AUROC performance for OOD detection offers insightful comparisons between various methods. Specifically, Entropy <cit.>, MaxSoftmax <cit.>, and ODIN <cit.> are techniques that require label information to detect out-of-distribution examples. 
On the other hand, methods such as Mahalanobis <cit.>, SigmaMean, and SigmaStd are based solely on representations and do not necessitate label information for OOD detection. This distinction underscores the variety of approaches available in the field, ranging from those dependent on labeled data to others that leverage unsupervised information. For the VICReg method, an intermediate value of β frequently yields the best performance across detectors. With optimal values of β (i.e., 1e-4), the representation gains from the added bottleneck created by stochastic embeddings. Moreover, we can leverage the characteristics of stochastic embeddings, specifically their variance, as an OOD predictor. Interestingly, the Mahalanobis detector benefits greatly from H-prob. embeddings at this β for both the standard and MoG priors. Meanwhile, Table <ref> presents the results specific to the Barlow Twins method. Notably, the SigmaStd detector consistently outperforms the SigmaMean detector in nearly all scenarios, and it is particularly effective with the Z-prob. embeddings. The performance of Z-prob. embeddings in the out-of-distribution detection task is particularly impressive, matching or surpassing detectors relying on label information in both the Barlow Twins and VICReg methods. § INFORMATION BOTTLENECK We employ the Mutual Information Neural Estimation (MINE) technique as detailed in <cit.> to evaluate the mutual information between the input, representation, and embeddings. For every pair of variables, a distinct network is designated to quantify their mutual information. This network is trained jointly alongside the primary self-supervised network, utilizing a separate optimizer. The statistic network, as it is referred to in MINE, comprises in our implementation two layers, each with a dimensionality of 1024. These layers are succeeded by a ReLU nonlinearity and then followed by a third layer that maps to a singular output. The values obtained, showcased in Table <ref>, correspond to the results from the final epoch. From our observations, the mutual information I(V;H) between the input and representation varies across different embeddings. A larger β value results in a decreased I(V;H) for the H-prob. embeddings, yet it increases for every variant of the Z-prob. embeddings. Increasing the number of MC samples improves both I(H;H^') and I(Z;Z^'). These can be interpreted as a lower bound to the predictive information on an unknown label y, as cited in <cit.>. Furthermore, the mutual information I(H;Z) between the representation and the loss space is considerably elevated for probabilistic embeddings. A detailed discussion on this phenomenon is provided in Section <ref>. | http://arxiv.org/abs/2310.18080v1 | {
"authors": [
"Denis Janiak",
"Jakub Binkowski",
"Piotr Bielak",
"Tomasz Kajdanowicz"
],
"categories": [
"cs.LG"
],
"primary_category": "cs.LG",
"published": "20231027120116",
"title": "Unveiling the Potential of Probabilistic Embeddings in Self-Supervised Learning"
} |
A diamond anvil microassembly for Joule heating and electrical measurements up to 150 GPa and 4000 K Michael J. Walter January 14, 2024 ==================================================================================================== Exploration bonuses in reinforcement learning guide long-horizon exploration by defining custom intrinsic objectives. Several exploration objectives like count-based bonuses, pseudo-counts, and state-entropy maximization are non-stationary and hence are difficult to optimize for the agent. While this issue is generally known, it is usually omitted and solutions remain under-explored. The key contribution of our work lies in transforming the original non-stationary rewards into stationary rewards through an augmented state representation. For this purpose, we introduce the Stationary Objectives For Exploration (SOFE) framework. SOFE requires identifying sufficient statistics for different exploration bonuses and finding an efficient encoding of these statistics to use as input to a deep network. SOFE is based on proposing state augmentations that expand the state space but hold the promise of simplifying the optimization of the agent's objective. We show that SOFE improves the performance of several exploration objectives, including count-based bonuses, pseudo-counts, and state-entropy maximization. Moreover, SOFE outperforms prior methods that attempt to stabilize the optimization of intrinsic objectives. We demonstrate the efficacy of SOFE in hard-exploration problems, including sparse-reward tasks, pixel-based observations, 3D navigation, and procedurally generated environments. § INTRODUCTION Intrinsic objectives have been widely used to improve exploration in reinforcement learning (RL), especially in sparse-reward and no-reward settings. In the case of Markov Decision Processes (MDPs) with a finite and small set of states, count-based exploration methods perform near-optimally when paired with tabular RL algorithms <cit.>. Count-based methods keep track of the agent's frequency of state visits to derive an exploration bonus that can be used to encourage structured exploration. While much work has studied how to extend these methods to larger state spaces and continuous environments <cit.>, count-based methods introduce unstable learning dynamics that have not been thoroughly studied and can make it impossible for the agent to discover optimal policies. Specifically, any reward distribution that depends on the counts (i.e. the state-visitation frequencies) is non-stationary because the dynamics of the counts change as the agent generates new experiences, and the agent does not have access to the information needed to estimate these dynamics. In an MDP, the convergence of policies and value functions relies on the transition dynamics and the reward distribution being stationary <cit.>. The non-stationarity of count-based rewards induces a partially observable MDP (POMDP), as the dynamics of the reward distribution are unobserved by the agent. In a POMDP, there are no guarantees for an optimal Markovian (i.e. time-homogeneous) policy to exist <cit.>. In general, optimal policies in POMDPs will require non-Markovian reasoning to adapt to the dynamics of the non-stationary rewards <cit.>.
Despite this issue, count-based methods are usually paired with RL algorithms that are designed to converge to Markovian policies and hence might attain suboptimal performance.

Previous research has either overlooked or attempted to address the non-stationarity issue in intrinsic rewards <cit.>. Some efforts to tackle this problem involve completely separating the exploration and exploitation policies <cit.>. However, these approaches add an additional layer of complexity to the RL loop and can introduce unstable learning dynamics. In this work, we introduce a framework to define stationary objectives for exploration (SOFE). SOFE provides an intuitive algorithmic modification to eliminate the non-stationarity of the intrinsic rewards, making the learning objective stable and stationary. With minimal complexity, SOFE enables both tractable and end-to-end training of a single policy on the combination of intrinsic and extrinsic rewards. SOFE is described in <Ref> and consists of augmenting the original states of the POMDP by including the state-visitation frequencies or a representative embedding. SOFE proposes a state augmentation that effectively formulates the intrinsic reward distribution as a deterministic function of the state, at the cost of forcing the agent to operate over a larger set of states. We hypothesize that RL agents with parametrized policies are better at generalizing across bigger sets of states than at optimizing non-stationary rewards.

We evaluate the empirical performance of SOFE in different exploration modalities and show that SOFE enables learning better exploration policies. We present SOFE as a method to solve the non-stationarity of count-based rewards. However, we show that SOFE provides orthogonal gains to other exploration objectives, including pseudo-counts and state-entropy maximization. Furthermore, our experiments in <Ref> show that SOFE is agnostic to the RL algorithm and robust in many challenging environment specifications, including large 3D navigation maps, procedurally generated environments, sparse reward tasks, pixel-based observations, and continuous action spaces. Videos of the trained agents and summarized findings can be found on our supplementary webpage[<https://sites.google.com/view/sofe-webpage/home>].

§ RELATED WORK

Exploration in RL. Exploration is a central challenge in RL. Classical exploration strategies explore in an aleatoric fashion. ϵ-greedy <cit.> samples random actions during training for the sake of exploration. Adding random structured noise in the action space <cit.> can enable exploration in continuous spaces. Maximum entropy RL provides a framework to find optimal policies that are as diverse as possible, and hence better explore the space of solutions <cit.>. For hard-exploration tasks, structured exploration has been studied through the lens of hierarchical RL <cit.>. State-entropy maximization has been proposed to explore efficiently, in an attempt to learn policies that induce a uniform distribution over the state-visitation distribution <cit.>. In MDPs with sparse reward distributions, exploration bonuses (i.e. intrinsic rewards) provide proxy objectives to the agents that can induce state-covering behaviors, hence allowing agents to find the sparse rewards. Count-based methods <cit.> derive an exploration bonus from state-visitation frequencies. Importantly, the inverse counts of a given state measure its novelty and hence provide a suitable objective to train exploratory agents.
This property makes count-based exploration an appealing technique to drive structured exploration. However, count-based methods do not scale well to high-dimensional state spaces <cit.>. Pseudo-counts provide a framework to generalize count-based methods to high-dimensional and partially observed environments <cit.>. In modern deep RL applications, many popular methods enable exploration by defining exploration bonuses in high-dimensional state spaces <cit.>, and among them are curiosity-based <cit.>, data-based <cit.> and skill-based <cit.>. Recently, elliptical bonuses have achieved great results in contextual MDPs with high-dimensional states <cit.>. These methods aim to estimate novelty in the absence of the true state-visitation frequencies. <cit.> showed that elliptical bonuses provide the natural generalization of count-based methods to high-dimensional observations. In this work, we show that improves the performance of count-based methods in small MDPs and pseudo-counts in environments with high-dimensional observations (e.g. images), further improving the performance of the state-of-the-art exploration algorithm E3B in contextual MDPs. Additionally, our results show that SOFE provides orthogonal gains to exploration objectives of different natures like state-entropy maximization.Non-stationary objectives A constantly changing (i.e. non-stationary) MDP induces a partially observed MDP (POMDP) if the dynamics of the MDP are unobserved by the agent. In Multi-Agent RL, both the transition and reward functions are non-stationary because these are a function of other learning agents that evolve over time <cit.>. In contextual MDPs, the transition and reward functions can change every episode and hence require significantly better generalization capabilities, which might not emerge naturally during training <cit.>. For MDPs with non-stationary rewards, meta-learning and continual learning study adaptive algorithms that can adapt to moving objectives <cit.>. Learning separate value functions for non-stationary rewards has also been proposed <cit.>.<cit.> proposed DeRL, which entirely decouples the training process of an exploratory policy from the exploitation policy. While DeRL mitigates the effect of the non-stationary intrinsic rewards in the exploitation policy, the exploration policy still faces a hard optimization problem. Importantly, there might not exist an optimal Markovian policy for a POMDP <cit.>. Hence, RL algorithms can only achieve suboptimal performance in these settings. Many exploration bonuses are non-stationary by definition. In particular, count-based methods are non-stationary since the state-visitation frequencies change during training<cit.>. We note that this issue is also present in many of the popular deep exploration methods that use an auxiliary model to compute the intrinsic rewards like ICM <cit.>, RND <cit.>, E3B <cit.>, density models <cit.> and many others <cit.>. In these cases, the non-stationarity is caused by the weights of the auxiliary models also changing during training. In this work, we argue that non-stationarity should not be implicit when an exploration bonus is defined. For this reason, we introduce , which proposes an intuitive modification to intrinsic objectives that eliminates their non-stationarity and facilitates the optimization process. § PRELIMINARIESReinforcement Learning (RL) uses MDPs to model the interactions between a learning agent and an environment. 
An MDP is defined as a tuple ℳ = (𝒮, 𝒜, ℛ, 𝒯 , γ) where 𝒮 is the state space, 𝒜 is the action-space, ℛ: 𝒮×𝒜→ℝ is the extrinsic reward function, 𝒯 : 𝒮×𝒜×𝒮→ [0,1] is a transition function and γ is the discount factor. The objective of the agent is to learn a policy that maximizes the expected discounted sum of rewards across all possible trajectories induced by the policy.If the MDP is non-stationary, then there exists some unobserved environment state that determines the dynamics of the MDP, hence inducing a partially observed MDP (POMDP), which is also a tuple ℳ' = (𝒮, 𝒪, 𝒜, ℛ, 𝒯, γ) where 𝒪 is the observation space and the true states s ∈𝒮 are unobserved. In a POMDP, the transition and reward functions might not be Markovian with respect to the observations, and therefore, the policy training methods may not converge to an optimal policy.To illustrate this, consider an MDP where the reward distribution is different at odd and even time steps. If the states of the MDP are not augmented with an odd/even component, the rewards appear to be non-stationary to an agent with a Markovian policy. In this case, a Markovian policy will not be optimal over all policies. The optimal policy will have to switch at odd/even time steps.In this work, we extend the previous argument to intrinsic exploration objectives in RL. In the following sections, we uncover the implicit non-stationarity of several exploration objectives and propose a novel method to resolve it. §.§ Exploration Bonuses and Intrinsic RewardsIn hard-exploration problems, exploration is more successful if directed, controlled, and efficient. Exploration bonuses provide a framework to decouple the original task from the exploration one and define exploration as a separate RL problem. In this framework, the extrinsic rewards provided by the environment are aggregated with the intrinsic rewards (i.e. exploration bonuses) to build an augmented learning target. By directing the agent's behavior towards custom exploration bonuses, this formulation induces exploratory behaviors that are state-covering and are well-suited for long-horizon problems. Central to is the introduction of the parameters ϕ_t in the formulation of exploration bonuses ℬ(s_t, a_t | ϕ_t), which enables reasoning about the dynamics of the intrinsic reward distributions. The parameters of the intrinsic reward distribution ϕ_t determine how novelty is estimated and exploration is guided, and if they change over time then ℬ is non-stationary.In the following, we unify count-based methods, pseudo-counts, and state-entropy maximization under the same formulation, which includes ϕ_t. In the next section, we present as a solution to their non-stationarity.§.§.§ Count-Based MethodsCount-based methods keep track of the agent's frequencies of state visits to derive an exploration bonus. Formally, the counts keep track of the visited states until time t, and so 𝒩_t(s) is equal to the number of times the state s has been visited by the agent until time t. 
Two popular intrinsic reward distributions derived from counts that exist in prior work are: ℛ(s_t, a_t, s_t+1 | ϕ_t) = ℬ(s_t, a_t, s_t+1 | ϕ_t) = β/√(𝒩_t(s_t+1|ϕ_t)) where β weights the importance of the count-based bonus, and: ℛ(s_t, a_t, s_t+1 | ϕ_t) =ℬ(s_t, a_t, s_t+1 | ϕ_t) = {[1, if 𝒩_t(s_t+1|ϕ_t) = 0;0,else ] Note that the state-visitation frequencies 𝒩_t are the sufficient statistics for ϕ_t and hence for the count-based rewards in Equations <ref> and <ref>.That is, the state-visitation frequencies are the only dynamically changing component that induces non-stationarity in count-based rewards.Equation <ref> <cit.> produces a dense learning signal since ℬ(s_t, a_t, s_t+1 | ϕ_t) ≠ 0 unless 𝒩_t(s_t+1) = ∞ which is unrealistic in practice. Equation <ref> <cit.> defines a sparse distribution where the agent is only rewarded the first time it sees each state, similar to the objective of the travelling salesman problem.Throughout the paper, we refer to Equations <ref> and <ref> as √()-reward and salesman reward.§.§.§ Pseudo-CountsTo enable count-based exploration in high-dimensional spaces, the notion of visitation counts has been generalized to that of pseudo-counts <cit.>. Prior work has estimated pseudo-counts through density models <cit.>, neural networks <cit.>, successor representations <cit.>, and samples from the Rademacher distribution <cit.>. Recently, <cit.> proposed elliptical bonuses (E3B) as a natural generalization of count-based methods. An appealing property of E3B is that it models the complete set of state-visitation frequencies over the state space, and not only for the most recent state[Previous pseudo-count methods allowed the agent to query a density model with a single state and obtain its pseudo-count. However, E3B maintains a model of the state-visitation frequencies over the complete state space. The latter is key for SOFE to obtain sufficient statistics of the E3B reward in Equation <ref>.] Concretely, the E3B algorithm produces a bonus: ℬ(s_t, a_t, s_t+1 | ϕ_t)= ψ_t(s_t+1)^T C^-1_tψ_t(s_t+1) C_t= ∑_t=0^T ψ_t(s_t)ψ_t(s_t)^T where ψ_t is an auxiliary model that produces low-dimensional embeddings from high-dimensional observations. Since the ellipsoid is updated after each transition, the exploration bonus is non-stationary. The matrix C_t defines an ellipsoid in the embedding space, which encodes the distribution of observed embeddings in a given trajectory. Since C_t is the only moving component of Equation <ref>, it is a sufficient statistic to characterize the non-stationarity of the reward distribution. Note that in an MDP with finite state space, where ψ is the one-hot encoding of the states, the exploration bonus in Equation <ref> becomes a count-based bonus similar to Equation <ref>. Concretely, C_t-1^-1 becomes a diagonal matrix with the inverse state-visitation frequencies for each state in the elements of the diagonal <cit.>.§.§.§ State-Entropy Maximization State-entropy maximization is a widely used exploration objective that consists of training policies to induce a uniform distribution over the state-marginal visitation distribution <cit.>. A canonical formulation of this problem is presented in <cit.>. Maximizing the state-entropy objective corresponds to training policies to maximize the following reward distribution <cit.>: ℛ(s_t, a_t, s_t+1 | ϕ_t) = ℬ(s_t, a_t, s_t+1 | ϕ_t) = - log p_ϕ_t(s_t+1) The parameters θ define the policy and ϕ are the parameters of a generative model which estimates the state-marginal distribution d^π_θ(s_t). 
Note that the sufficient statistics of the generative distribution are also sufficient statistics for the intrinsic reward in Equation <ref>. Throughout the paper, we refer to this algorithm as surprise maximization (S-Max) and use the Gaussian distribution to model trajectories of states with p_ϕ_t. Hence the sufficient statistics of p_ϕ_t for the reward reward in Equation <ref> are ϕ_t = (μ_t ∪σ_t^2). We present the details of S-Max in Section <ref>. § STATIONARY OBJECTIVES FOR EXPLORATION In the following, we present a training framework for Stationary Objectives for Exploration (SOFE). Any exploration bonus ℬ(s_t, a_t, s_t+1|ϕ_t) derived from dynamically changing parameters will define a non-stationary reward function. Without any modification, exploration bonuses define a POMDP: ℳ = (𝒮, 𝒪, 𝒜, ℬ, 𝒯 , γ). For simplicity, we have fully replaced the task-reward ℛ with the exploration bonus ℬ, and we consider that the only unobserved components in the POMDP are the parameters of the reward distribution[This assumption holds true if the agent has access to sufficient statistics of the transition dynamics (e.g. grid environments), and makes SOFE transform a POMDP into a fully observed MDP. Even when there are unobserved components of the true states apart from the parameters of the intrinsic reward distributions, we empirically show that SOFE mitigates the non-stationary optimization, yielding performance gains.]. Hence, we argue that the unobserved states s ∈ S satisfy s_t = o_t∪ ϕ_t. Note that the transition function of the POMDP is generally only Markovian if defined over the state space and not over the observation space: 𝒯: 𝒮×𝒜×𝒮→ [0,1].R0.5< g r a p h i c s >enables agents to observe the sufficient statistics of the intrinsic rewards and use them for decision-making. The sufficient statistics for exploration bonuses are always available during training as they are explicitly computed to produce the intrinsic rewards. However, current RL methods do not allow the agents to observe them. Hence, any method that aims to solve ℳ faces optimizing a non-stationary objective, which is difficult to optimize, as it can require non-Markovian properties like memory, continual learning, and adaptation, and may only find suboptimal policies. In this work, we argue that non-stationarity should not be implicit in the formulation of an exploration objective. For this reason, we propose , which augments the state space by defining an augmented MDP ℳ̂ = (𝒮̂, 𝒜, ℬ, 𝒯, γ) where 𝒮̂ = {𝒪 ∪ ϕ}, with 𝒪 being the observations from ℳ. Note that we get rid of the observation space 𝒪 in the definition of ℳ̂ because by augmenting the original observations from ℳ with the sufficient statistics for ℬ we effectively define a fully observed MDP. This simple modification allows instantiating the same exploration problem in a stationary and Markovian setting. That is the optimal policies in ℳ̂ are also optimal in ℳ. This is true since the transition and reward functions are identical in ℳ and ℳ̂. We note that the update rule for the parameters ϕ_t must be Markovian, meaning that these can be updated after every step without requiring information other than s_t and s_t+1. For example, counts only increment by one for the state that was most recently visited:𝒩_t+1(s) = 𝒩_t(s)∀ s ∈{S - s_j}, where s_j = s_t+1 and 𝒩_t+1(s_j) = 𝒩_t(s_j) + 1. 
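To make the count-based case concrete, the following sketch (written against the gym-style interface used in the appendix) wraps a small discrete-state environment, maintains the visitation counts 𝒩_t, applies the Markovian update above, emits the √()-reward of Equation <ref>, and appends the normalized count table to the observation. The wrapper interface, the bonus coefficient beta, and the normalization of the counts are assumptions of this sketch rather than the authors' released implementation.

import numpy as np

class CountAugmentedWrapper:
    """SOFE-style wrapper for a small environment with integer states.

    It maintains the state-visitation counts (the sufficient statistic of the
    count-based bonus), adds the bonus beta / sqrt(N(s')) to the extrinsic
    reward, and augments the observation with the normalized count table so
    that the reward is a deterministic function of the augmented state.
    """

    def __init__(self, env, n_states, beta=1.0):
        self.env = env                    # assumed gym-style: reset()/step() with integer observations
        self.counts = np.zeros(n_states)  # N_0(s) = 0 for every state
        self.beta = beta                  # bonus scale (illustrative choice)

    def _augment(self, obs):
        # One-hot state concatenated with the normalized visitation frequencies.
        one_hot = np.eye(len(self.counts))[obs]
        freqs = self.counts / max(1.0, self.counts.sum())
        return np.concatenate([one_hot, freqs])

    def reset(self):
        return self._augment(self.env.reset())

    def step(self, action):
        next_obs, reward, done, info = self.env.step(action)
        # Markovian update: only the count of the newly visited state changes.
        self.counts[next_obs] += 1
        bonus = self.beta / np.sqrt(self.counts[next_obs])
        return self._augment(next_obs), reward + bonus, done, info

The incremental count update inside step() is the Markovian update rule stated above, which the next paragraph extends to E3B and S-Max.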
The latter also applies to E3B and S-Max, since the ellipsoid C_t and parameters of the generative model are updated incrementally with every new transition (see Equation <ref> and Section <ref>). Given the sufficient statistics, the intrinsic reward distributions in Equations <ref>,<ref>, <ref>, <ref> become fully Markovian, and hence are invariant across time.§ EXPERIMENTS is designed to improve the performance of exploration tasks. To evaluate its efficacy, we study three questions: (1)How much does facilitate the optimization of non-stationary exploration bonuses? (2) Does this increased stationarity improve exploration for downstream tasks? (3) How well does scale to image-based state inputs where approximations are needed to estimate state-visitation frequencies?R0.5< g r a p h i c s >We use 3 mazes and a large 3D map to evaluate both goal-reaching and purely exploratory behaviors. Maze 1: a fully connected, hard-exploration maze; Maze 2: a maze with open spaces and a goal; Maze 3: same as Maze 1 but with 3 doors which an intelligent agent should use for more efficient exploration; 3D map: a large map with continuous state and action spaces.To answer each of these research questions, we run the experiments as follows. (1) We use three different mazes without goals to investigate how compares to vanilla count-based methods and S-Max in reward-free exploration. Concretely, we evaluate whether SOFE allows for better optimization of purely exploratory behaviors. We also use a large 3D environment with continuous state and action spaceswhich introduces complex challenges as it requires more than purely navigation skills.Secondly (2), we use a 2D maze with a goal and sparse extrinsic reward distribution. This is a hard-exploration task where the extrinsic reward is only non-zero if the agent reaches the goal, which requires a sequence of 75 coordinated actions. We evaluate whether enables better optimization of the joint objective of intrinsic and task rewards. Furthermore, we use the DeepSea sparse-reward hard-exploration task from the DeepMind suite <cit.> and show that SOFE achieves better performance than DeRL <cit.> which attempts to stabilize intrinsic rewards by training decoupled exploration and exploitation policies.Thirdly (3), we apply on the E3B <cit.> algorithm as argued in Section <ref> to demonstrate the effectiveness of the approach with an imperfect representation of the state-visitation frequencies. We use the MiniHack-MultiRoom-N6-v0 task, originally used for E3B in <cit.>, and the Procgen-Maze task <cit.>. In both environments, the task is to navigate to the goal location in a procedurally generated map and the extrinsic reward is only non-zero if the agent reaches the goal. Both environments return pixel observations. Minihack additionally returns natural language observations. However, the Procgen-Maze task is more challenging because each episode uses unique visual assets, requiring an additional level of generalization, while in Minihack, different episodes only vary in the map layout. Additionally, we include the Habitat environment <cit.> to evaluate purely exploratory behaviors and show the results in Section <ref>.We provide the details of the network architectures, algorithm hyperparameters, and environment specifications in Section <ref>. Furthermore, we provide an in-depth analysis of the behaviors learned by SOFE in Section <ref>, which uncovers valuable insights on how SOFE learns to drive exploration more efficiently. 
§.§ Reward-Free Exploration In this section, we focus on the first research question and consider the reward-free setting to evaluate purely exploratory behaviors. We use the 3 mazes in Figure <ref> and measure map coverage, which correlates with exploration in navigation environments. In Figure <ref>, we show how enables agents to explore the mazes better than vanilla count-based methods. Even though we fix the count-based rewards described in Equations <ref> and <ref>, generally enables RL agents to better optimize them, achieving higher state coverage. Section <ref> contains the results across all algorithms and exploration modalities.We also run experiments in a large 3D environment from the Godot RL repository <cit.>, to evaluate 's ability to scale to continuous state and action spaces. This environment contains challenging dynamics that require exploratory agents to master a variety of skills, from avoiding lava and water to using jump pads efficiently <cit.>. Figure <ref> shows that also scales to these more complex settings, enabling SAC <cit.> agents to achieve higher map coverage across different exploration modalities. Additionally, we show that SOFE stabilizes the state-entropy maximization objective. Figure <ref> shows the episodic map coverage achieved in Maze 2 by the vanilla S-Max algorithm compared to the augmented S-Max with SOFE. These results provide further evidence that SOFE is a general framework that tackles the non-stationarity of exploration objectives and provides orthogonal gains across objectives of different natures. §.§ Exploration for Sparse Rewards In the previous section, we showed that enables RL agents to better explore the state space. In this section, we evaluate whether can achieve better performance on hard-exploration tasks. We evaluate count-based methods and SOFE in Maze 2 in Figure <ref>. For each of the RL algorithms, we compare training with the sparse extrinsic reward only and training with the extrinsic and intrinsic rewards with and without SOFE. Figure <ref> shows that significantly improves the performance of RL agents in this hard-exploration task. Our results confirm that extrinsic rewards are not enough to solve such hard-exploration tasks and show that is significantly more effective than vanilla count-based methods, achieving the highest returns across multiple RL algorithms. PPO <cit.>, PPO+LSTM <cit.>, and A2C <cit.> achieve near-optimal goal-reaching performance only when using SOFE. Importantly, policies equipped with LSTMs have enough capacity to model the non-stationary intrinsic rewards <cit.> and could learn to count implicitly <cit.>. However, our results show that SOFE further improves the performance of recurrent policies when optimizing for non-stationary intrinsic rewards. Additionally, we compare SOFE to DeRL <cit.> in the DeepSea environment and show the results in Table <ref>. DeRL entirely decouples the training process of an exploratory policy from the exploitation policy to stabilize the optimization of the exploration objective. SOFE is degrees of magnitude less complex than DeRL as it only requires training an additional feature extractor. Still, SOFE achieves better results in the harder variations of the DeepSea environment. §.§ Exploration in High-dimensional Environments In this section, we evaluate and E3B <cit.>, the state-of-the-art exploration algorithm for high-dimensional contextual MDPs. E3B tackles the challenging problem of estimating the true state-visitation frequencies from pixel observations. 
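For reference, the elliptical bonus of Equation <ref> and the sufficient statistic that SOFE exposes to the agent can be sketched as follows. This is a minimal illustration and not the released E3B code: the ridge initialization of the ellipsoid, the Sherman-Morrison rank-one update of its inverse, and the choice of returning the inverse ellipsoid (full or only its diagonal) as the augmentation are assumptions of the sketch.

import numpy as np

class EllipticalBonus:
    """Sketch of an E3B-style bonus b_t = psi(s_{t+1})^T C_t^{-1} psi(s_{t+1}).

    C_t = sum_t psi(s_t) psi(s_t)^T is the sufficient statistic; its inverse is
    maintained incrementally so that both the bonus and the SOFE augmentation
    are available after every transition.
    """

    def __init__(self, dim, ridge=0.1):
        self.inv_C = np.eye(dim) / ridge  # inverse of C_0 = ridge * I (stability assumption)

    def update_and_bonus(self, psi):
        psi = np.asarray(psi, dtype=np.float64)
        u = self.inv_C @ psi
        bonus = float(psi @ u)            # psi^T C^{-1} psi, computed before the update
        # Sherman-Morrison: (C + psi psi^T)^{-1} = C^{-1} - u u^T / (1 + psi^T C^{-1} psi)
        self.inv_C -= np.outer(u, u) / (1.0 + bonus)
        return bonus

    def augmentation(self, full=False):
        # Statistic appended to the policy input under SOFE: the full (flattened)
        # inverse ellipsoid or, more compactly, its diagonal.
        return self.inv_C.flatten() if full else np.diag(self.inv_C).copy()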
As argued in Section <ref>, the ellipsoid is the only moving component of the E3B objective. Hence, we evaluate whether including either the diagonal or the full ellipsoid in the state enables better exploration. We optimize the E3B objective with IMPALA <cit.> as proposed in <cit.>. Section <ref> contains the details of the policy architecture. Figure <ref> shows that also improves the performance of pseudo-count-based methods, providing empirical evidence that reducing the non-stationarity of a reward distribution enables better optimization even in high-dimensional environments. In Section <ref>, we include experiments with E3B and SOFE in the reward-free setting using the Habitat simulator. These show that SOFE improves the sample efficiency of E3B.§ CONCLUSIONWe identify that exploration bonuses can be non-stationary by definition, which can complicate their optimization, resulting in suboptimal policies. To address this issue, we have introduced a novel framework that creates stationary objectives for exploration (). is based on capturing sufficient statistics of the intrinsic reward distribution and augmenting the MDP's state representation with these statistics. This augmentation transforms the non-stationary rewards into stationary rewards, simplifying the optimization of the agent's objective. We have identified sufficient statistics of count-based methods, the state-entropy maximization objective, and E3B. Our experiments provide compelling evidence of the efficacy of across various environments, tasks, and reinforcement learning algorithms, even improving the performance of the state-of-the-art exploration algorithm in procedurally generated environments. Using augmented representations, SOFE significantly improves exploration behaviors, particularly in challenging tasks with sparse rewards and across multiple exploration modalities. Additionally, extends to large continuous state and action spaces, showcasing its versatility.iclr2024_conference§ APPENDIX§.§ Stationarity of the count-based rewardsIn this Section, we show that the state-visitation frequencies must be observed by the agent in order to make the count-based objective fully Markovian and stationary.If the expectation of the count-based bonus is constant, then the bonus is stationary. Let 𝒩_t be the table of state-visitation frequencies at time step t, and let 𝒩_t(s_t) be the count of visits to state s_t at time t. The count-based reward is given by ℬ(s_t) = 1/𝒩(s_t). Without loss of generality, we have defined ℬ as a function of a single state as the count-based bonus ℬ(s_t, a_t, s_t+1) = 1/𝒩(s_t+1) only depends on the next state. The proof holds for any monotonically decreasing function of the counts.The expectation of ℬ(s_t) is given by:E[ℬ(s_t)]= ∑_s P(s_t = s) ·1/𝒩_t(s_t)= 1/𝒩_t(s_t)· P(s_t = s_t) + ∑_s ≠ s_t P(s_t = s) ·1/𝒩_t(s)Importantly, the counts provide information about the state visitation distribution of the policy. 
Assuming that the policy visitation distribution P(s_t = s) is proportional to the state visit counts (P(s_t = s) ∝ N_t(s)), we can write:P(s_t = s)= 𝒩_t(s)/∑_s'𝒩_t(s')E[ℬ(s_t)]= 1/𝒩_t(s_t)·𝒩_t(s_t)/∑_s'𝒩_t(s') + ∑_s ≠ s_t𝒩_t(s)/∑_s'𝒩_t(s')·1/𝒩_t(s) Simplifying the expression, we get:E[ℬ(s_t)]= 1/∑_s'𝒩_t(s') + ∑_s ≠ s_t1/∑_s'𝒩_t(s') The terms in the summation are constant across time because the summation is over all states except s_t, and the denominator ∑_s'𝒩_t(s') is based on the count of visits to each state, which the agent can observe with SOFE.Additionally, the update rule of the counts is also Markovian, as for each transition (s_t, s_t+1) we have: 𝒩_t+1(s) = 𝒩_t(s)∀ s ∈{S - s_j}, where s_j = s_t+1 and 𝒩_t+1(s_j) = 𝒩_t(s_j) + 1. §.§ Reward-Free exploration with SOFE and E3BR0.5< g r a p h i c s >Map coverage on a held-out set of 100 3D scenes of the HM3D dataset. The E3B agents trained using explore the new scenes better.As in Section <ref>, we evaluate if can enable better optimization of the non-stationary exploration bonus, in this case for E3B. We consider the reward-free setting for purely exploratory behaviors. For this reason, we use the Habitat simulator <cit.> and the HM3D dataset <cit.>, which contains 1,000 different scenes of photorealistic apartments for 3D navigation. We train E3B and our proposed augmented versions for 10M environment steps and measure their map coverage in a set of 100 held-out scenes. We optimize the E3B exploration bonus with PPO <cit.> which requires 31 hours in a machine with a single GPU. We show the results in Figure <ref>. In Figure <ref> we show the learning curves corresponding to the results presented in Section <ref>. §.§ Analysis of the behaviours learned by SOFEBy using on count-based methods, RL agents extract features from the state-visitation frequencies and use them for decision-making. To better understand how the agents use the augmented information, we artificially create an object 𝒩_0 with 𝒩_0(s_i) > 0∀_i ∈{𝒮 - s_j} and 𝒩_0(s_j) = 0. Intuitively, we communicate to the agent that all states in the state space but s_j have already been visited through the state-visitation frequencies. We evaluate PPO agents pre-trained on reward-free episodic exploration and show the results in Figure <ref>. Results show that augmented agents efficiently direct their exploration towards the unvisited states, self-identifying these as goals. This reveals how the agents leverage the augmented information for more efficient exploration.§.§ Training Details§.§.§ Network ArchitectureWe use Stable-Baselines3 <cit.> to run our experiments in the mazes, Godot maps, and DeepSea. For DQN, PPO, A2C, and SAC we use the same CNN to extract features from the observation. The CNN contains 3 convolutional layers with kernel size of (3×3), stride of 2, padding of 1, and 64 channels. The convolutional layers are followed by a fully connected layer that produces observation embeddings of dimension 512. For the augmented agents, we use an additional CNN with the same configuration to extract features from ϕ_t. The augmented agents concatenate the representations from the observation and the parameters ϕ_t and feed these to the policy for decision-making, while the vanilla methods (e.g. counts, S-Max) only extract features from the observations. We show the CNN architecture in Figure <ref>.For Minihack and Procgen, we use the official E3B codebase, which contains baselines for ICMand RND, and uses IMPALA to optimize the exploration bonuses. 
We use the same policy architecture as in <cit.>, which contains an LSTM. We ran the experiments in Minihack and Procgen for 100M steps. For the augmented agents, we design a CNN that contains 5 convolutional layers with a kernel size of (3×3) and stride and padding of 1, batch normalization layers after every convolutional layer, max-pooling layers with a kernel size of (2×2) and stride of 1, followed by a fully-connected layer that produces embeddings of dimension 1024. This architecture allows to extract features from the 512x512 ellipsoids, which are later passed together with the observation features to the policy for decision-making. We show the CNN architecture in Figure <ref>. §.§ Algorithm Hyperparameters §.§.§ DQN §.§.§ PPO§.§ A2C§.§ Environment Details §.§.§ MazesWe designed the mazes in Figure <ref> with Griddly <cit.>. The 3 mazes are of size 32x32. The agents observe entity maps: matrices of size (map_size, map_size) of entity id's (e.g. 0 for the floor, 1 for the wall, and 2 for the agent). The action space is discrete with the four-movement actions (i.e. up, right, down, left). In the following, we now show an example of the observation space of a 5x5 variation of the mazes we use throughout the paper following the OpenAI Gym interface <cit.>. obs_shape = env.observation_space.shape# obs_shape == (3,5,5)obs, reward, done, info = env.step( ... )# obs = [ [ # avatar in these locations [0,0,0,0,0], [0,1,0,0,0], [0,0,0,0,0], [0,0,0,0,0], [0,0,0,0,0] ], [ # wall in these locations [1,1,1,1,1], [1,0,0,0,1], [1,0,0,0,1], [1,0,0,0,1], [1,1,1,1,1] ], [ # goal in these locations [0,0,0,0,0], [0,0,0,0,0], [0,0,0,0,0], [0,0,0,1,0], [0,0,0,0,0] ] ] §.§.§ DeepSea R0.5< g r a p h i c s >The DeepSea environment.The DeepSea environment is taken from <cit.> and has been used to evaluate the performance of intrinsic exploration objectives <cit.>. DeepSea represents a hard-exploration task in a N × N grid where the agent starts in the top left and has to reach a goal in the bottom right location. At each timestep, the agent moves one row down and can choose whether to descend to the left or right. The agent observes the 2D one-hot encoding of the grid and receives a small negative reward of -0.01/N for moving right and 0 rewards for moving left. Additionally, the agent receives a reward of +1 for reaching the goal and the episode ends after N timesteps. Hence, it is very hard for an agent trained on extrinsic rewards only to solve the credit assignment problem and realize that going right is the optimal action. The episodes last exactly N steps, and the complexity of the task can be incremented by increasing N. §.§.§ Godot MapWe use the Godot game engine to design the 3D world used in Section <ref>, which we open-source together with the code. We show a global view of the map in Figure <ref>. The map is of size 120x120 and has continuous state and action spaces. To apply count-based methods we discretize the map in bins of size 5, resulting in a 24x24 object 𝒩_t. The observations fed to the agent are the result of shooting several raycasts from the agent's perspective. The observations also contain global features like the agent's current velocity, rotation, and position. The action space contains 3 continuous dimensions that control the velocity, rotation, and jumping actions.§.§.§ Minihack MultiroomWe use the Multiroom-N6 task from the Minihack suite <cit.> to evaluate the performance of E3B and our proposed augmentation, as originally used in <cit.>. 
The environment provides pixel and natural language observations and generates procedurally generated maps at each episode. The rewards are only non-zero when the agent finds the goal location in a maze that contains 6 rooms. We use the same policy architecture described in Section C.1.1 in <cit.>.§.§.§ Procgen MazeWe use the Procgen-Maze task from the Procgen benchmark <cit.> to evaluate the performance of E3B and our proposed augmentation. We use the memory distribution of mazes. The mazes are procedurally generated at each episode, have different sizes, and use different visual assets. Procgen-Maze provides pixel observations and the rewards are only non-zero if the agent finds the goal location.§.§ State-entropy Maximization In this section, we provide the pseudo-code for the surprise-maximization algorithm presented in Section <ref>. Note that the update rule for the sufficient statistics of the generative model p_ϕ_t is Markovian as it is updated at each timestep with the new information from the next state. As mentioned in the paper, we use a Gaussian distribution to model p_ϕ_t, and hence when using SOFE, we pass its mean and standard deviation to the RL agents.§.§ Exploration for Sparse RewardsIn this section, we show the complete set of results for Section <ref>. The results include confidence intervals and learning curves for DQN, A2C, PPO, and PPO-LSTM for the task of reaching the goal in Maze 2 in Figure <ref>. We also include the partially-observable setting, where the agent does not observe the full maze but 5x5 agent-centred observation. §.§ Reward-free Exploration In this section, we show the complete set of results for Section <ref>. The results include learning curves for DQN, A2C, PPO and PPO-LSTM measuring the map coverage achieved by these algorithms in the 3 mazes in Figure <ref>.§.§.§ DQN§.§.§ A2C§.§.§ PPO §.§.§ PPO-LSTM | http://arxiv.org/abs/2310.18144v3 | {
"authors": [
"Roger Creus Castanyer",
"Joshua Romoff",
"Glen Berseth"
],
"categories": [
"cs.LG",
"cs.AI"
],
"primary_category": "cs.LG",
"published": "20231027135118",
"title": "Improving Intrinsic Exploration by Creating Stationary Objectives"
} |
Parameter estimation for second-order SPDEs in multiple space dimensions Patrick Bossert Institute of Mathematics Julius-Maximilians-Universität Würzburg Würzburg, 97074, Germany 20 October 2023 ==========================================================================================================================We analyse a second-order SPDE model in multiple space dimensions and develop estimators for the parameters of this model based on discrete observations of a solution in time and space on a bounded domain.While parameter estimation for one and two spatial dimensions was established in recent literature, this is the first work which generalizes the theory to a general, multi-dimensional framework. Our approach builds upon realized volatilities, enabling the construction of an oracle estimator for volatility within the underlying model. Furthermore, we show that the realized volatilities have an asymptotic illustration as response of a log-linear model with spatial explanatory variable. This yields novel and efficient estimators based on realized volatilities with optimal rates of convergence and minimal variances. For proving central limit theorems, we use a high-frequency observation scheme. To showcase our results, we conduct a Monte Carlo simulation. 62F12, 62M10, 60H15§ INTRODUCTIONMultidimensional stochastic partial differential equations (SPDEs) expand upon the principles of their one-dimensional counterparts to address scenarios involving multiple spatial dimensions. These equations find application across diverse scientific domains, enabling the exploration of the interplay between deterministic dynamics and stochastic fluctuations in systems spanning physics, geophysics, biology, finance, and environmental science.Recent interest in applications of one-dimensional SPDEs and statistical methods to calibrate them is evident in the works of <cit.>, <cit.>, <cit.>, and <cit.> Notably, researchers have leveraged power variations, a concept well-established in financial high-frequency settings, to develop statistical inference methodologies, as evidenced by works like <cit.>, <cit.>, and <cit.>.Multi-dimensional SPDE models, on the other hand, offer a much larger variability for modelling natural phenomena. Therefore, it is intuitive that applications of these SPDEs is also of great relevance, especially for two- and three-dimensional spaces. See, for instance, <cit.> for an application in connection with the climate phenomenon El Niño and references therein for applications to sea temperature, <cit.> for an application in Geostatistics and dealing with seismic data and <cit.> for an application in climate science. For an overview with many references to specific applications in various fields we refer to <cit.>.While power variations have received considerable attention in the context of one-dimensional SPDEs, their utilization for SPDEs in multiple spatial dimensions remains in its nascent stages. In a pivotal contribution, <cit.> analysed a two-dimensional SPDE model, laying the groundwork for further research in the realm of second-order, linear multi-dimensional SPDEs.In this endeavor, we follow a theoretical framework related to <cit.>, adapting it to accommodate multiple spatial dimensions.Within this multi-dimensional model, we establish the foundation for parameter estimation by utilizing quadratic increments. 
Building upon a high-frequency assumption over a fixed time horizon, inspired by <cit.>, along with a regularity assumption, we construct a volatility estimator tailored to the multi-dimensional SPDE model. Subsequently, we link these realized volatilities to a log-linear model to enhance our understanding of the system.To facilitate empirical investigations, we develop a simulation methodology that extends the one-dimensional counterpart known as the replacement method, as introduced by <cit.>, to multiple spatial dimensions. A brief overview of the notational conventions employed in this paper can be located at the beginning of Section <ref>. One challenging task when working within a multi-dimensional framework is dealing with the technical difficulties that arise during the transition from one to multiple space dimensions. For instance, when opting for the spectral approach, it becomes necessary to determine a Riemann sum approximation for sums with a multi-dimensional index set. In addition to these technical challenges, the multidimensional random field exhibits significant structural changes, affecting the dependencies within the model and the behaviour of the error terms. As a result, the generalization from a single space dimension, as researched by <cit.>, to multiple spatial dimensions is not straightforward and requires careful treatment. Consequently, we anticipate that our research will provide valuable insights into statistics for SPDEs in multiple spatial dimensions and contribute efficient estimators that leverage realized volatilities.We consider the following linear second-order SPDE in d∈ℕ, d≥ 2, spatial dimensions with additive noise:[ [ X_t(y) = A_ϑ X_t(y) t+σ B_t(y),(t,y)∈[0,1]× [0,1]^d;X_0(y)=ξ(y), y∈[0,1]^d; X_t(y)=0, (t,y)∈[0,1]×∂ [0,1]^d ]],where y=(y_1,…,y_d)∈[0,1]^d. The operator A_ϑ in equation (<ref>) is given by:A_ϑ = η∑_l=1^d ∂/∂ y_l^2+∑_l=1^d ν_l∂/∂ y_l+ϑ_0,with fixed parameters ϑ=(ϑ_0,ν_1,…,ν_d,η), where ϑ_0,ν_1,…,ν_d∈ℝ and η,σ>0.In the temporal domain, we consider the interval t∈[0,1], which can be extended to t∈[0,T] for T>0, while the spatial domain encompasses the d-dimensions unit hypercube. Additionally, we introduce B as a cylindrical Q-Brownian motion defined over [0,1]^d, where the initial condition ξ:[0,1]^d→ℝ is independent to (B_t). We adopt Dirichlet boundary condition such that X_t(y)=0 holds for all (t,y)∈[0,1]×∂ [0,1]^d. Furthermore, we introduce the curvature parameter, expressed as κ=(κ_1,…,κ_d), where κ_l:=ν_l/η∈ and l=1,…,d. Additionally, we define the normalized volatility, denoted as σ_0^2:=σ^2/η^d/2>0, where the parameter σ is called volatility. One well-established example of a linear, second-order SPDE is the heat equation. The stochastic version of this equation in d-dimensions can be represented as follows:[ [ X_t(y) =η∑_l=1^d ∂/∂ y_l^2X_t(y) t+σ B_t(y), (t,y)∈[0,1]× [0,1]^d; X_0(y)=ξ(y),y∈[0,1]^d;X_t(y)=0,(t,y)∈[0,1]×∂ [0,1]^d ]], where σ governs the degree of randomness in the cooling process, while η serves as a parameter denoting thermal conductivity. In the context of the one-dimensional heat equation, this equation models the cooling process of objects like rods or thin entities. However, when extended to two or three dimensions, it characterizes the cooling process of broader surfaces or spatial volumes, with potential applications ranging from modelling the cooling of plate-like structures to the temperature dynamics of large bodies of water such as the sea surface or seawater. 
In all applications, the initial condition reflects the starting temperature of the object or system.Given the prevalence of multi-dimensional models in various natural phenomena, especially when influenced by random factors, the analysis of these multi-dimensional equations becomes particularly intriguing and relevant.§ PROBABILISTIC STRUCTURE AND STATISTICAL SETUPTo investigate the multi-dimensional SPDE model introduced in (<ref>), we opt for the spectral approach.Different to the situation with unbounded spatial support, the differential operator A_ϑ in (<ref>) admits a discrete spectrum. Hence, the spectral approach involves representing the random field as a sum of orthogonal eigenfunctions weighted by stochastic coefficients. This methodology is widely adopted by researchers in the field, see, for instance <cit.>, <cit.>, or <cit.>.Moreover, we extend the two-dimensional approach presented by <cit.>.In the context of the spectral approach, the corresponding Hilbert space is defined as follows:H_ϑ:={f:[0,1]^d, f_ϑ<∞ and f(y)=0, for y∈∂ [0,1]^d}. The norm ·_ϑ is defined via the corresponding inner product f_ϑ:=⟨ f,f⟩_ϑ for f∈ H_ϑ, given by⟨ f,g⟩_ϑ:=∫_0^1⋯∫_0^1f(y_1,…,y_d)g(y_1,…,y_d)exp[∑_l=1^dκ_ly_l] y_1⋯ y_d,where f,g∈ H_ϑ. The domain of the operator A_ϑ is defined as follows:𝒟(A_ϑ)={f∈ H_ϑ:f_ϑ,∂/(∂ y_l)f_ϑ,∂^2/(∂ y^2_l)f_ϑ<∞,for all l=1,…,d}.The decomposition of the operator A_ϑ from (<ref>) results in the eigenfunctions (e_k)_k∈^d and eigenvalues (-λ_k)_k∈^d, given bye_k(y) :=e_k(y_1,…,y_d):=2^d/2∏_l=1^dsin(π k_ly_l)e^-κ_ly_l/2, λ_k :=-ϑ_0+∑_l=1^d(ν_l^2/4η+π^2k_l^2η),where k=(k_1,…,k_d)^⊤∈^d.When comparing the representation of the eigenfunctions and eigenvalues in d-space dimensions to those in one space dimension, as presented in <cit.>, we observe that we have extended the eigenfunctions and eigenvalues in one dimension to each spatial dimension. The orthonormal property of the eigenfunctions in the one-dimensional case seamlessly extends to the multidimensional setting, effectively defining an orthonormal system denoted as (e_k)_k∈^d. This observation allows us to independently decompose each spatial axis using the one-dimensional eigenfunctions, which incorporate rescaling, sine functions, and exponential terms with dependencies on the respective parameters κ_l, l=1,…,d. As a consequence, we derive a spectral decomposition by considering a product model over each dimension. Moreover, the operator A_ϑ is self-adjoint within the Hilbert space H_ϑ. This self-adjoint property carries significant importance, as it guarantees that the eigenfunctions collectively form a complete and orthogonal basis within H_ϑ. This, in turn, empowers us to effectively represent solutions to the SPDE model outlined in equation (<ref>) using a spectral decomposition. Given that these properties can be readily deduced through standard calculations, we will forego presenting their proofs.We address the Q-Wiener process, denoted as W_t(y), within a Sobolev space defined over the bounded domain [0,1]^d. For a comprehensive understanding of Q-Wiener processes, readers are encouraged to consult references such as <cit.> or <cit.>. A crucial distinction that arises when transitioning from one spatial dimension to higher dimensions is that the random filed, denoted as X_t(y), is not square integrable when considering a white noise, i.e., [||X_t^𝕀||_ϑ^2]= ∞, where Q=id denotes the identity operator. Remarkably, this phenomenon persists even in two spatial dimensions, as demonstrated by the authors in <cit.>. 
To rectify this issue and ensure finite variance of the paths, it becomes necessary to employ a coloured cylindrical Wiener process instead of a white noise. This implies the introduction of an additional parameter into the model, effectively damping the Wiener process and resulting in the random field being square integrable.In our model, we incorporate a Q-Wiener process to account for the stochastic noise. By implementing damping mechanisms, <cit.> have successfully devised statistical inference techniques based on high-frequency observations, leveraging a spectral approach within the context of two spatial dimensions.To specify the damping mechanism in our model, we adopt one natural approach by defining (B_t)_t≥ 0 as follows:⟨ B_t,f⟩_ϑ := ∑_k∈^dλ_k^-α/2⟨ f,e_k⟩_ϑ W_t^k,for f∈ H_ϑ, t≥ 0 and independent real-valued Brownian motions (W_t^k)_t≥ 0, k∈^d. In the preceding definition, the cylindrical Brownian motion B experiences a structural transformation due to the inclusion of the term λ_k^-α/2 in its spectral decomposition. This modification inherently results in a fundamental change in the probabilistic characteristics of the random field. The parameter α assumes special importance, as it essentially governs the Hölder regularity of the marginal processes of the random field. In the context of one spatial dimension, a related parameter was examined by <cit.>, where it was referred to as “spatial correlation parameter”. Moreover, we assume that α∈ (d/2-1,d/2), then it can be seen that∑_k∈^dA_ϑ^-(1+α)/2e_k_ϑ^2=∑_k∈^d1/λ_k^1+α<∞,which implies that the Q-Wiener process is well-defined. The operator Q is then defined byQe_k=λ^-α_kA_ϑ^-1/2e_k_ϑ^2 e_k,where the corresponding eigenvalues of Q are given by λ^-α_kA_ϑ^-1/2e_k_ϑ^2, k∈^d. Note, that the lower bound α> d/2-1 is essential for the Q-Wiener process being well-defined, where the upper bound α< d/2 serves the purpose of developing statistical inference. For a comprehensive overview of Wiener processes on Hilbert spaces, we refer to <cit.>. For readings on a different approach for the choice of Q in two space dimensions we refer to <cit.>. Consider a mild solution of the SPDE model from equation (<ref>), which satisfies the integral representation:X_t=e^tA_ϑξ+σ∫_0^t e^(t-s)A_ϑ B_sa.s., for every t∈[0,1]. Then, the spectral decomposition of the random field X_t is given byX_t(y)=∑_k∈^d x_k(t)e_k(y), where x_k(t):=⟨ X_t,e_k⟩_ϑ. The coordinate processes (x_k) follow the Ornstein-Uhlenbeck dynamics, governed by the equation:x_k(t)= -λ_kx_k(t) t + σλ_k^-α/2 W_t^k,for every k∈^d. Utilizing that A_ϑ is self-adjoint yields the following representation for the coordinate processes: x_k(t)=⟨ X_t,e_k⟩_ϑ =e^-λ_kt⟨ξ,e_k⟩_ϑ+σλ_k^-α/2∫_0^t e^-λ_k(t-s) W_s^k,where we used that ⟨ e^tA_ϑf,e_k⟩_ϑ=e^-λ_kt⟨ f,e_k⟩_ϑ for f∈ H_ϑ. In fact, by using the latter representation for x_k(t), we can observe that the random field is square integrable, i.e.:[X_t_ϑ^2]≤ C ∑_k∈^d1/λ_k^1+α<∞.For statistical inference, we establish a high-frequency observation scheme on a discrete grid in time and space. Similar to the one-dimensional case, it is essential to restrict the observations in order to bound correlations, which naturally arise in SPDE models. Therefore, we introduce the following mapping: x_0:=min_i=1,…,d x_i≠ 0{x_1,…,x_d},where we set min∅=0 and consider the following Assumption to our model. 
[Observation scheme] Suppose we observe a mild solution X of the SPDE model from equation (<ref>) on a discrete grid (t_i,y_j)∈ [0,1]×[0,1]^d, with equidistant observations in time t_i=iΔ_n for i=1,…,n and y_1,…,y_m∈[δ, 1-δ]^d, where n,m∈ and δ∈(0,1/2). We consider the following asymptotic regime: (I) Δ_n 0 and m=m_n, as n, while nΔ_n=1 and m=𝒪(n^ρ), for some ρ∈(0,(1-α')/(d+2)∧ 1/2),where α=d/2-1+α' and α'∈(0,1). Furthermore, we consider thatm_n·min_j_1,j_2=1,…,m_n j_1≠ j_2y_j_1-y_j_2_0 is bounded from below, uniformly in n for the asymptotic regime (I).Since our statistical inference relies on power variation based on temporal increments, it necessitates fewer spatial observations compared to temporal ones.Since this relation already applies in one space dimension, it is intuitive to extend this Assumption to higher space dimensions, cf. <cit.>. The damping parameter, which also influences the randomness in our model, also impacts the relationship between the resolutions of observations in temporal and spatial dimensions. As the dimensionality increases, the number of available spatial observations decreases. In particular, in one dimension, researchers such as <cit.> and <cit.> demonstrated that in this regime, realized volatilities, expressed as:(y)=∑_i=1^n (X_iΔ_n(y)-X_(i-1)Δ_n(y))^2, j=1,…,m,are sufficient for estimating parameters with an optimal rate of convergence (m_n n)^-1/2. However, <cit.> established different optimal convergence rates when the condition m_n/√(n)→ 0 is violated, proposing rate-optimal estimators in this context based on double increments in both space and time. The norm ·_0 quantifies the smallest change between two spatial observations in each dimension, effectively extending the condition used for one-dimensional SPDEs to multiple dimensions. This condition becomes especially necessary to bound covariances of the realized volatilities across different spatial coordinates. It's worth noting that alternative choices besides ·_0 can be considered for controlling the covariance structure in this multi-dimensional model. For instance, one might intuitively opt for the Euclidean norm instead of examining changes along each axis. Nevertheless, we use the presented mapping to directly connect the assumptions from the one-dimensional model, as outlined in <cit.>, with Assumptions <ref>, facilitating a straightforward comparison between them.Furthermore, we impose the following regularity condition, as introduced by <cit.>. [Regularity] For the SPDE model from equation (<ref>) we assume that (i) either [⟨ξ, e_k⟩_ϑ]=0 for all k∈^d and sup_k∈^dλ_k^1+α[⟨ξ,e_k⟩_ϑ^2]<∞ or [A_ϑ^(1+α)/2ξ^2_ϑ]<∞ holds true, for α∈(d/2-1,d/2),(ii) (⟨ξ,e_k⟩_ϑ)_k∈^d are independent.§ VOLATILITY ESTIMATIONIn this section, our aim is to develop an estimator for the volatility parameter σ^2>0. For this aim, we will utilize quadratic increments in time, as this statistic usually contains information about the volatility of the underlying process. For the estimation of σ^2, we assume the remaining parameters in A_ϑ to be known, as well as the damping parameter α. Hence the orthonormal system e_k and the eigenfunctions λ_k are known, which enables us to estimate the volatility based on discrete recordings of X_t in multiple time and space coordinates. More precisely, we will conduct volatility estimation using the method of moments. 
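As a point of reference, the realized-volatility statistic above can be evaluated directly from discrete recordings on the grid of Assumption <ref>. The following short sketch assumes the observations are arranged as an (n+1) x m array, with rows indexing the equidistant time points t_i = iΔ_n and one column per spatial point y_j.

import numpy as np

def realized_volatilities(X):
    """Realized volatilities RV_n(y_j) = sum_i (X_{t_i}(y_j) - X_{t_{i-1}}(y_j))^2.

    X is assumed to have shape (n + 1, m): n + 1 equidistant time points for
    each of the m spatial locations y_1, ..., y_m.
    """
    increments = np.diff(X, axis=0)        # temporal increments (Delta_i X)(y_j)
    return np.sum(increments**2, axis=0)   # one realized volatility per spatial point

These m statistics are the building blocks of the method-of-moments estimators developed next.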
It is well established, that in one space dimension, the increments of a solution process X_t behaves different than the standard setup for semi-martingales, see, for instance <cit.> or for semi-martingales <cit.>. As we transfer the SPDE model to a multi-dimensional setup, it is expected, due to the coloured noise structure of B, that the behaviour of the increments again changes. Therefore, analysing the expected value of temporal squared increments and realized volatilities, gains a deeper insight into the multi-dimensional SPDE model and the capabilities of statistical inference. On Assumptions <ref> and <ref>, we have uniformly in y∈[δ,1-δ]^d that[(Δ_i X)^2(y)]=Δ_n^α'σ^2e^-κy_1Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)+r_n,i+(Δ_n),where α'∈(0,1) and a sequence r_n,i satisfying sup_1≤ i≤ nr_n,i=(Δ_n^α') and ∑_i=1^nr_n,i=(Δ_n^α'). Furthermore, rescaling yields the following:[1/nΔ_n^α'∑_i=1^n(Δ_i X)^2(y)]=σ^2e^-κy _1Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)+(Δ_n^1-α'). Comparing this result to the SPDEs in one space dimension presented in <cit.> reveals some crucial differences in the structure of the random fields. In higher space dimensions, we observe the appearance of the normalized volatility σ_0^2 and the curvature term e^-yκ, which are transposed from one space dimension.Furthermore, in higher dimensions, we obtain extra constants, among others, depending on α'. However, the most significant distinction when working in higher dimensions is that the parameter α', resulting from the coloured noise in this model, influences the leading term on one side and reduces the convergence speed of the error term on the other side. By referring to <cit.>, we can see that α' governs the regularity in time, which is reflected in the presence of Δ_n^α'. Additionally, employing the Kolmogorov-Chentsov theorem (Kolmogorov continuity theorem) and Proposition <ref>, we find that the paths of X_t are Hölder-continuous of almost order α'/2.Note, that the space dimension d of the model only affects the leading term of the expected value, while the order of the error term is solely dependent on α'. Additionally, the latter proposition reveals that the remainder r_n,i becomes negligible when summing over the squared increments. As this remainder includes the initial condition, we observe that the impact of the initial condition becomes irrelevant when using the realized volatility statistic. Consequently, constructing an estimator based on the method of moments will yield better results for small α'. Assuming the parameters κ∈^d, η>0, and α'∈(0,1) to be known, an estimator based on the first moment method of the rescaled realized volatility for the volatility parameter σ^2 is therefore given byσ̂^2_y:=σ̂^2_n(y):=1/nΔ_n^α'K∑_i=1^n(Δ_iX)^2(y)e^κy_1,where the constant K is defined byK:=Γ(1-α')/2^d(πη)^d/2α'Γ(d/2). Since the estimator σ̂^2_y estimates the volatility parameter σ^2 based on a single spatial point, we also introduce the following estimator:σ̂^2:=σ̂^2_n,m:=1/nmΔ_n^α'K∑_j=1^m∑_i=1^n(Δ_iX)^2(y_j)e^κy_j_1,for spatial points y_1,…,y_m∈ [δ,1-δ]^d.An important distinction between coloured and white noise is that coloured noise often leads to correlated discrete increments, whereas we often find uncorrelated increments in white noise models. As demonstrated in <cit.>, discrete temporal increments of a SPDE model in one spatial dimension are already negatively correlated, despite the use of white noise. 
This circumstance implies that we do not need to develop a fundamentally different theory, for instance, for the proofs of central limit theorems.Nevertheless, by varying the structure of the cylindrical Brownian motion, we can expect a change in the autocovariance structure, which now depends on α'. On Assumptions <ref> and <ref>, it holds for the covariance of the increments (Δ_iX)(y), 1≤ i≤ n uniformly in y∈[δ,1-δ], for all δ∈(0,1/2) that(Δ_iX(y),Δ_jX(y)) =-σ^2 e^-κy_1Δ_n^α'Γ(1-α')/2^d+1(πη)^d/2α'Γ(d/2) ×(2i-j^α'-(i-j-1)^α'-(i-j+1)^α')+ r_i,j+(Δ_n),where i-j≥ 1 and the remainders (r_i,j)_i,j=1,…,n are negligible, i.e. ∑_i,j=1^n r_i,j=𝒪(1). The autocovariance of the coloured noise process appears to depend solely on the spatial coordinate y through the exponential term, which implies that the autocorrelation is independent of the spatial coordinate. Consequently, the autocorrelation structure is determined by the temporal distance or lag between increments rather than the specific temporal locations themselves. If we assume that n is sufficiently large, the autocorrelation of temporal increments can be approximated as follows:ρ_(Δ_iX),α'(i-j) ≈ -i-j^α'+1/2(i-j-1)^α'+1/2(i-j+1)^α'=(i-j^α'-2),for i ≠ j. As α'∈(0,1), the autocorrelation diminishes as the lag i-j between observations increases. Furthermore, from the first derivative, we observe that the autocorrelation is monotonically decreasing. Thus, the most substantial negative correlation is found at i-j=1, where the autocorrelation takes the value ρ_(Δ_iX),α'(1)=2^α'-1-1. In the one-dimensional case with a white noise structure, corresponding to α'=1/2, the authors <cit.> demonstrated a similar behaviour. They found the most significant (negative) autocorrelation occurred at consecutive increments, with a value of (√(2)-2)/2. Hence, this behaviour extends to multiple spatial dimensions.Assuming that the initial condition is a stationary normal distribution with ξ∼𝒩(0,σ^2/(2λ_k^1+α)), the random field X_t becomes a Gaussian random field. Proposition <ref> provides valuable information regarding the identifiability of parameters using temporal increments statistics such as realized volatility. In a manner similar to one space dimension, it appears feasible to consistently estimate the natural parameters, given as the normalized volatility σ_0^2=σ^2/η^d/2 and the curvature parameter κ.Although the SPDE model in multiple-space dimensions possess an alternating structural behaviour compared to its one-dimensional counterpart, we can employ the decay of the autocovariances and derive a central limit theorem (CLT) for the estimator σ̂^2 based on a CLT for ρ-mixing triangular arrays by <cit.>. On Assumptions <ref> and <ref>, we have √(nm_n)(σ̂_n,m_n^2-σ^2)d⟶𝒩(0,Υ_α'σ^4),for n, Υ_α' is a numerical constant defined in equation (<ref>) and m_n=(n^ρ), with ρ∈(0,(1-α')/(d+2)). The previous proposition establishes that a central limit theorem holds for both volatility estimators, σ̂^2_n(y) from equation (<ref>) and σ̂^2_n,m from equation (<ref>), with an asymptotic variance of Υ_α'σ^4. Comparing this result to a SPDE model in one space dimension, as presented in <cit.>, where α'=1/2, reveals that the same asymptotic behaviour is achieved. Hence, this asymptotic behaviour extends to multiple space dimensions. Nevertheless, Assumption <ref> states a stronger restriction than in the one-dimensional case, which is necessary for the covariance (σ̂^2_y_1,σ̂^2_y_2) to asymptotically vanish for two distinct space points y_1,y_2∈[δ,1-δ]^d. 
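To illustrate how the estimator σ̂²_{n,m} and the normalizing constant K from the previous displays can be evaluated, the following sketch is included. It assumes the same (n+1) x m array layout as above and reads the curvature correction in the exponent as the inner product ∑_{l=1}^d κ_l y_{j,l}; it is meant as a numerical illustration rather than a reference implementation.

import numpy as np
from math import gamma, pi

def volatility_estimator(X, y, delta_n, alpha_prime, eta, kappa):
    """Method-of-moments estimator of sigma^2 from rescaled realized volatilities.

    X: array of shape (n + 1, m) with the space-time observations;
    y: array of shape (m, d) with the spatial points y_1, ..., y_m in [delta, 1 - delta]^d;
    kappa: array of length d with the (assumed known) curvature parameters.
    """
    n, m = X.shape[0] - 1, X.shape[1]
    d = y.shape[1]
    K = gamma(1.0 - alpha_prime) / (2**d * (pi * eta) ** (d / 2) * alpha_prime * gamma(d / 2))
    rv = np.sum(np.diff(X, axis=0) ** 2, axis=0)   # realized volatility at each y_j
    weights = np.exp(y @ np.asarray(kappa))        # curvature correction exp(<kappa, y_j>)
    return float(np.sum(rv * weights)) / (n * m * delta_n**alpha_prime * K)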
As the asymptotic variance in the latter proposition hinges on the unknown volatility parameter, we cannot obtain confidence intervals for the volatility parameter directly. Nevertheless, confidence intervals can be constructed by utilizing the quarticity estimator: σ̂^4:=σ̂_n,m^4:=(2^d(πη)^d/2α'Γ(d/2)/Γ(1-α'))^2 1/(3mnΔ_n^2α')∑_j=1^m∑_i=1^n(Δ_iX)^4(y_j)e^2κy_j_1. Under stronger regularity assumptions, such as sup_k∈^dλ_k^1+α𝔼[⟨ξ,e_k⟩_ϑ^l]<∞ for l=4,8, one can show by using a bias-variance decomposition that the quarticity estimator consistently estimates the quarticity parameter σ^4. Applying Slutsky's theorem then yields asymptotic confidence intervals.
§ ASYMPTOTIC LOG-LINEAR MODEL FOR REALIZED VOLATILITIES AND LEAST SQUARES ESTIMATION
In one space dimension, the authors <cit.> showed that realized volatilities can asymptotically be linked to a log-linear model, yielding efficient parameter estimation based on ordinary least squares for the natural parameters of the respective one-dimensional SPDE model. The aim of this section is to investigate whether this link carries over to the multivariate case and, based on it, to consider parameter estimation for the natural parameters σ_0^2>0 and κ∈^d. Throughout this section, we assume the damping parameter α to be known. We propose an estimator for the pure damping parameter α' at the end of this section. Building upon Proposition <ref>, rescaled realized volatilities behave approximately like normal random variables when the number of temporal observations is sufficiently large. Consequently, for sufficiently large n, we can assert that √(n)(σ̂^2_y-σ^2)≈𝒩(0,Υ_α'σ^4), and rearranging the latter display yields RV_n(y)/(nΔ_n^α') ≈ e^-κy_1Γ(1-α')σ^2/(η^d/2α')·1/(π^d/2Γ(d/2)2^d√(n))(√(n)+√(Υ_α')Z)=e^-κy_1Γ(1-α')σ^2_0/α'·1/(π^d/2Γ(d/2)2^d)(1+√(Υ_α'/n)Z), with Z∼𝒩(0,1), where RV_n(y)=∑_i=1^n(Δ_iX)^2(y) denotes the realized volatility. We adopt the strategy of converting this approximation into a log-linear model, namely: log(RV_n(y)/(nΔ_n^α')) ≈ -κy_1+log(σ_0^2K)+√(Υ_α'/n)Z, where K is defined in (<ref>). To be more precise, we arrive at an approximation that resembles a multiple linear regression model. Considering the asymptotic decorrelation of the realized volatilities across different spatial locations, we can establish this linear model by examining log(RV_n(y_j)/(nΔ_n^α')) for j=1,…,m. This representation also implies a homoscedastic normal distribution for the errors within the linear model. As the log-realized volatilities are only asymptotically linkable to a log-linear model, we have to analyse the error terms carefully. To illustrate, let us revisit the multiple linear regression model with the help of the following example. An ordinary multiple linear regression model is given by Y=Xβ+ε, where Y=[ Y_1; ⋮; Y_m ], X=[ 1 y_1^(1) ⋯ y_d^(1); ⋮ ⋮ ⋱ ⋮; 1 y_1^(m) ⋯ y_d^(m) ], β=[ β_0; β_1; ⋮; β_d ], and homoscedastic errors ε=(ε_1,…,ε_m)^⊤, with 𝔼[ε_i]=0, Var(ε_i)=σ^2>0, for i=1,…,m, and Cov(ε_i,ε_j)=0 for all i,j=1,…,m with i≠ j. In addition, the variance-covariance matrix of ε is given by Σ:=Cov(ε)=σ^2E_m. We call the parameter β_0 the intercept and the parameters β_i the slopes, where i=1,…,d. Suppose that m≥ d+1 and that the matrix X has full rank d+1.
Under these assumptions, the least squares estimator for the unobservable parameter β within this model can be expressed as follows: β̂=(X^⊤ X)^-1X^⊤ Y. Substituting the representation of Y into the latter expression results in the following identity: β̂=(X^⊤ X)^-1X^⊤ Y=β+(X^⊤ X)^-1X^⊤ε, which shows that the estimator β̂ is unbiased. In particular, the inverse (X^⊤ X)^-1 exists due to the full-rank condition on the design matrix X. The component-wise estimators highlighted in Example <ref> are commonly referred to as Gauss-Markov estimators. It is well known that the Gauss-Markov estimators qualify as BLUE (best linear unbiased estimators), implying that they possess the minimum variance among all linear and unbiased estimators. However, it is important to acknowledge that the number of observations (Y_j)_1≤ j≤ m is intrinsically linked to the dimensionality, specifically requiring m≥ d+1, as stated in the preceding example. Consequently, we introduce the following full-rank assumption. Let y_1,…,y_m∈[δ,1-δ]^d, where m≥ d+1, such that span(Y_m)=^d+1, where Y_m:={(1,y_1),…,(1,y_m)}, i.e. Y_m is a spanning set of ^d+1. In accordance with Assumption <ref>, the discretization of the random field is more refined in time than in space, denoted by m=𝒪(n^ρ), where ρ∈(0,(1-α')/(d+2)). Additionally, Assumption <ref> imposes the requirement that a minimum of d+1 spatial observations is necessary to construct an estimator for the natural parameters. Collectively, these assumptions enforce a minimal number of temporal points, indicated by n>(d+1)^((d+2)/(1-α')). Asymptotically, this restriction is evidently satisfied since d is fixed. However, its restrictive nature becomes significant in a simulation scenario. The latter display implies that n grows exponentially with the dimension d≥ 2. Furthermore, if α' is close to one, the growth of n becomes particularly pronounced. Therefore, estimating the natural parameters using this least squares approach based on realized volatilities might only be accurate for lower dimensions, such as d=2,3, or when a large number of temporal observations is available. We can now establish the estimators for the natural parameters σ_0^2, κ_1, …, κ_d within the context of the SPDE model from equation (<ref>). Leveraging the approximation (<ref>) and referencing Example <ref>, we define the multi-dimensional parameter and its corresponding estimator as follows: Ψ:=[ log(σ_0^2K); -κ_1; ⋮; -κ_d ]∈^d+1 and Ψ̂:=Ψ̂_n,m:=(X^⊤ X)^-1X^⊤ Y∈^d+1, where X:=[ 1 y_1^(1) ⋯ y_d^(1); ⋮ ⋮ ⋱ ⋮; 1 y_1^(m) ⋯ y_d^(m) ]∈^m× (d+1) and Y:=[ log(RV_n(y_1)/(nΔ_n^α')); ⋮; log(RV_n(y_m)/(nΔ_n^α')) ]∈^m. To effectively estimate the natural parameters σ_0^2, κ_1, …, κ_d, we introduce the parameter υ∈ (0,∞)×ℝ^d along with its associated estimator υ̂, defined as follows: υ:=[ σ_0^2; κ_1; ⋮; κ_d ] and υ̂:=υ̂_n,m:=h^-1(Ψ̂):=[ h_1^-1(Ψ̂_1); ⋮; h_d+1^-1(Ψ̂_d+1) ], where Ψ̂=(Ψ̂_1,…,Ψ̂_d+1)^⊤ and h:(0,∞)×^d→^d+1, h^-1:^d+1→ (0,∞)×^d, with h(x)=[ log(x_1K); -x_2; ⋮; -x_d+1 ] and h^-1(x)=[ e^x_1/K; -x_2; ⋮; -x_d+1 ]. Note that h(υ)=Ψ. When considering a central limit theorem, one concern lies in determining the asymptotic variance. In the context of Example <ref>, we obtain: Cov(√(m)(β̂-β)) =Cov(√(m)β̂)=σ^2c(c/m X^⊤ X)^-1⟶σ^2cΣ^-1, as n→∞, where we assume that c/m(X^⊤ X) converges to a symmetric positive-definite variance-covariance matrix Σ∈ℝ^(d+1)× (d+1), with a suitable constant c>0.
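A minimal R sketch of this least squares step and the subsequent back-transformation is given below; it is illustrative only and not the package implementation, and it assumes that the vector of log rescaled realized volatilities log_rv and the constant K are already available.

```r
# Sketch of the log-linear least squares estimator for the natural parameters.
# Assumed inputs:
#   log_rv : length-m vector of log( RV_n(y_j) / (n * Delta_n^alpha_prime) )
#   y      : m x d matrix of spatial points (m >= d + 1, full-rank design)
#   K      : constant from the first-moment expansion above
natural_parameter_estimator <- function(log_rv, y, K) {
  X <- cbind(1, y)                                # design matrix with intercept
  Psi_hat <- solve(t(X) %*% X, t(X) %*% log_rv)   # (X'X)^{-1} X'Y, length d + 1
  sigma0_sq_hat <- exp(Psi_hat[1]) / K            # h^{-1}: back-transform the intercept
  kappa_hat <- -Psi_hat[-1]                       # slopes correspond to -kappa
  list(Psi = as.vector(Psi_hat),
       sigma0_sq = sigma0_sq_hat,
       kappa = as.vector(kappa_hat))
}
```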
This assumption consequently entails that Σ^-1 is also symmetric and positive-definite. In our model, we observe spatial coordinates y_1,…,y_m within the range [δ,1-δ]^d, signifying that these spatial observations are situated at least a distance δ>0 away from the boundaries of the unit hypercube. We can examine the structure of the matrix X^⊤ X by utilizing the explicit expression of X from Example <ref> and have (1-2δ)/m X^⊤ X =(1-2δ)/m[ m ∑_j=1^m y_1^(j) ∑_j=1^m y_2^(j) ⋯ ∑_j=1^m y_d^(j); ∑_j=1^m y_1^(j) ∑_j=1^m (y_1^(j))^2 ∑_j=1^m y_1^(j)y_2^(j) ⋯ ∑_j=1^m y_1^(j)y_d^(j); ∑_j=1^m y_2^(j) ∑_j=1^m y_2^(j)y_1^(j) ∑_j=1^m(y_2^(j))^2 ⋯ ∑_j=1^m y_2^(j)y_d^(j); ⋮ ⋮ ⋮ ⋱ ⋮; ∑_j=1^m y_d^(j) ∑_j=1^m y_d^(j)y_1^(j) ∑_j=1^m y_d^(j)y_2^(j) ⋯ ∑_j=1^m(y_d^(j))^2 ]⟶Σ, as m→∞, where Σ=(Σ_i,l)_1≤ i,l≤ d+1, with Σ_i,l:= 1-2δ, if i=l=1; lim_m→∞(1-2δ)/m∑_j=1^m y_l-1^(j), if i=1, 2≤ l≤ d+1; lim_m→∞(1-2δ)/m∑_j=1^m y_i-1^(j), if 2≤ i≤ d+1, l=1; lim_m→∞(1-2δ)/m∑_j=1^m (y_i-1^(j))^2, if 2≤ i=l≤ d+1; lim_m→∞(1-2δ)/m∑_j=1^m y_i-1^(j)y_l-1^(j), if 2≤ i,l≤ d+1, with i≠ l. The convergence of the Riemann sums is guaranteed by the straightforward bounds: 0≤(1-2δ)/m∑_j=1^m a_j≤ 1, for all m∈ℕ, where the sequence (a_j) corresponds to the relevant sequence within the Riemann sums in equation (<ref>). We give this elementary example since our estimator Ψ̂ and its asymptotic variance-covariance matrix are in line with the translation of Example <ref> to our model, as stated in the following proposition. On Assumptions <ref>, <ref> and <ref>, we have √(nm_n)(Ψ̂_n,m_n-Ψ)d⟶𝒩(0,Υ_α'(1-2δ)Σ^-1), for n→∞, where δ∈(0,1/2), 0=(0,…,0)^⊤∈^d+1, Υ_α' is defined in equation (<ref>) and Σ in equation (<ref>). As in the one-dimensional case, our central limit theorem is directly feasible and provides asymptotic confidence intervals, as the asymptotic covariance depends only on the known parameter α'. The latter proposition also shows that the connection between realized volatilities and a log-linear model carries over to multiple space dimensions. Utilizing the multivariate delta method yields a CLT for the estimator υ̂, given in the following corollary. On Assumptions <ref>, <ref> and <ref>, we have √(nm_n)(υ̂_n,m_n-υ)d⟶𝒩(0,Υ_α'(1-2δ)J_σ_0^2Σ^-1J_σ_0^2), for n→∞, where δ∈(0,1/2), 0=(0,…,0)^⊤∈^d+1, Υ_α' is defined in equation (<ref>), Σ^-1 in equation (<ref>) and J_σ_0^2 in equation (<ref>). We now turn our attention to the estimation of the pure damping parameter α'∈(0,1), and thereby of the parameter α. As this parameter necessarily arises when considering a multi-dimensional SPDE, its estimation is even more important than in one space dimension. In addition, the estimators presented in this paper were constructed under the premise that α' is known. When dealing with real-world data, this assumption may not be fulfilled. As already mentioned, the damping parameter controls the Hölder regularity of the temporal marginal processes of a solution X_t, which also affects the correlations in our model. Therefore, we follow an approach for estimating α' that uses a well-established concept for estimating the Hurst parameter of a fractional Brownian motion. The main idea is to use two different temporal grids, one containing all available data and the other a thinned version of the original grid. Based on both grids, we use realized volatilities to gain information about the pure damping parameter. In detail, let us consider a mild solution X_t(y) of the SPDE model from equation (<ref>).
Assume we observe X on a grid with 2n temporal and m spatial points according to Assumption <ref>. First, we want the new grid to be equidistant in time with ñ=𝒪(2n), ñ<2n, temporal points, such that it satisfies the observation scheme in Assumption <ref>. Furthermore, Proposition <ref>, as presented in Section <ref>, suggests filtering the original grid such that the new grid contains the maximal number of temporal points. Intuitively, using as many temporal points as possible, while keeping them equidistant, should shrink the variance of the estimator. Hence, we set ñ=n. As we need to distinguish between both temporal resolutions, we introduce the following notation. The temporal increments on the two grids are denoted by (Δ_2n,i_1X)(y):=X_i_1Δ_2n-X_(i_1-1)Δ_2n and (Δ_n,i_2X)(y):=X_i_2Δ_n-X_(i_2-1)Δ_n, where 1≤ i_1≤ 2n and 1≤ i_2≤ n. The increments on the filtered temporal grid can be rewritten as (Δ_n,iX)(y) =X_2iΔ_2n-X_2(i-1)Δ_2n=(Δ_2n,2iX)(y)+(Δ_2n,2i-1X)(y), where i=1,…,n. Furthermore, by using an index transformation, we can write: (Δ_n,iX)(y) =1_2ℕ_0(i)((Δ_2n,iX)(y)+(Δ_2n,i-1X)(y)), for i=1,…,2n, where 2ℕ_0 denotes the set of all even non-negative integers, i.e. 2ℕ_0={0,2,4,…}. Thus, the realized volatilities can be defined as: RV_2n(y):=∑_i=1^2n(Δ_2n,iX)^2(y) and RV_n(y):=∑_i=1^n(Δ_n,iX)^2(y). By using equation (<ref>), we can link the realized volatility on the filtered grid with the one on the original grid and obtain: RV_n(y)=RV_2n(y)+2∑_i=2^2n1_2ℕ_0(i)(Δ_2n,iX)(y)(Δ_2n,i-1X)(y). By using equation (<ref>), we can construct an estimator for the pure damping parameter based on the following approach: log(RV_n(y)/n)-log(RV_2n(y)/(2n)) ≈α'(log(Δ_n)-log(Δ_2n))+√(Υ_α'/n)Z_1-√(Υ_α'/(2n))Z_2=α'log(2)+√(Υ_α'/n)Z_1-√(Υ_α'/(2n))Z_2, where Z_1,Z_2∼𝒩(0,1) and y∈[δ,1-δ]^d. In particular, this linear relation suggests an estimator for the unknown parameter α', given by α̂':=α̂'_2n,m:=1/(log(2)m)∑_j=1^mlog(2RV_n(y_j)/RV_2n(y_j)). For analysing the asymptotic properties of this estimator, it is crucial to investigate the correlation structure of squared temporal increments and of products of consecutive temporal increments, as is evident from (<ref>). With this knowledge of the covariances, we can prove the following CLT. On Assumptions <ref> and <ref> we have √(2nm_n)(α̂'_2n,m_2n-α')d⟶𝒩(0,log(2)^-2(3Υ_α'-2^(2-α')(Υ_α'+Λ_α'))), for n→∞, where m_2n=𝒪((2n)^ρ) with ρ∈(0,(1-α')/(d+2)), Υ_α' is defined in (<ref>) and Λ_α' in equation (<ref>). As the proof of this central limit theorem uses techniques analogous to those for Proposition <ref>, we omit it and only provide the derivation of the asymptotic variance in Section <ref>. The term -2^(2-α')(Υ_α'+Λ_α') in the asymptotic variance of the latter CLT represents the non-negligible covariance structure that appears when using realized volatilities on two temporal grids with different resolutions. Since Proposition <ref> also establishes the consistency of the estimator α̂', we can conclude that the estimators σ̂^2_y and σ̂^2 from Section <ref>, along with Ψ̂ and υ̂ from Section <ref>, remain consistent when α' is unknown and therefore replaced via plug-in by the estimator α̂'. We can also preserve the original CLTs for the estimators σ̂^2, Ψ̂ and υ̂ from Propositions <ref>, <ref> and Corollary <ref> by accepting a slightly slower rate than n^1/2.
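The following R sketch implements this two-grid idea for a single observed or simulated path; it is only an illustration of the estimator α̂' above, and the observation matrix X_obs as well as its layout are assumed inputs rather than part of the model.

```r
# Sketch of the damping estimator alpha_hat' based on realized volatilities
# on the full temporal grid (2n increments) and the thinned grid (n increments).
# Assumed input: X_obs, a (2n + 1) x m matrix with X_obs[i, j] = X_{t_{i-1}}(y_j)
# on an equidistant temporal grid t_0, ..., t_{2n}.
estimate_alpha_prime <- function(X_obs) {
  dX_full <- diff(X_obs)                                  # increments on the full grid
  dX_thin <- diff(X_obs[seq(1, nrow(X_obs), by = 2), ])   # increments on the thinned grid
  rv_2n <- colSums(dX_full^2)                             # RV_{2n}(y_j)
  rv_n  <- colSums(dX_thin^2)                             # RV_n(y_j)
  mean(log(2 * rv_n / rv_2n)) / log(2)                    # average log-ratio over space
}
```

Plugging the resulting α̂' into the volatility and least squares estimators then gives the plug-in procedure described above.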
§ SIMULATION METHODS AND MONTE CARLO SIMULATION STUDY §.§ Simulation methods To simulate linear one-dimensional SPDE models, two techniques have been established: the truncation method and the replacement method, as referenced in <cit.> and <cit.>, respectively. In the subsequent discussion, we will explore both simulation methods in the context of multi-dimensional cases, beginning with the truncation method.The truncation method relies on the Fourier decomposition of a mild solution X_t(y) of a SPDE model from (<ref>) and allows to simulate these SPDE models with deterministic or normally distributed initial conditions ξ. The concept involves truncating the Fourier series from (<ref>) at a sufficiently large cut-off frequency K=(K_1,…,K_d)∈^d, simulating only the first K_l Fourier modes respectively, where l=1,…,d. Assuming a deterministic or normally distributed initial condition, combined with Assumption <ref>, we find the coordinate processes normally distributed, where x_k(t)∼𝒩(e^-λ_k tμ_ξ , σ^2/2λ_k^1+α(1-e^-2λ_k t) +e^-2λ_k tσ_ξ^2),for k∈^d. Here μ_ξ and σ_ξ^2 denotes the expected value and variance of the initial condition ξ, respectively.Assuming that the initial condition is deterministic, we can deduce that x_k is normally distributed with a variance of σ^2(1-e^-2λ_k t)/(2λ_k^1+α).Notably, the variance of the Fourier modes x_k is influenced by the damping parameter α. A larger value of α implies a stronger damping and quicker convergence of the variance towards zero, when k→∞. On the other hand, a smaller value of α indicates a weaker damping and slower convergence of the variance. To simulate the Fourier modes x_k, we have for t=0,…,(N-1)Δ_n thatx_k(t+Δ_n)-x_k(t)e^-λ_kΔ_n =σλ_k^-α/2∫_t^t+Δ_ne^-λ_k(t+Δ_n-s) W_s^k.Hence, we infer the recursive representation:x_k(t+Δ_n)=x_k(t)e^-λ_kΔ_n+σ√(1-exp[-2λ_kΔ_n]/2λ_k^1+α)𝒩_t,with i.i.d. standard normals 𝒩_t and x_k(0)=⟨ξ,e_k⟩_ϑ, where ξ is either deterministic or normal distributed. We therefore introduce the truncation method by approximating the Fourier series of X_t(y) using a cut-off frequency 𝒦:={1,…,K}^d, where K∈. In one space dimension, the effectiveness of this method is strongly influenced by the chosen cut-off rate K ∈. The authors <cit.> observed through empirical study that insufficiently large values of K lead to considerable biases in the simulations. Selecting an appropriate cut-off rate also appears to be dependent on the number of spatial and temporal observations. Even for moderate sample sizes, a cut-off rate of K=10^5 is recommended, but it comes with a significant computational cost. For instance, simulating a single realization of X on a grid with M=100 spatial points and N=10^4 temporal points, using a cut-off rate K=10^5, takes approximately 6 hours when utilizing 64 cores. These issues becomes even more pronounced when dealing with multiple space dimensions. When simulating multi-dimensional SPDEs, it is reasonable to choose a cut-off frequency of at least K=10^5 as well, leading to (K+1)^d loop iterations. For example, in a two-dimensional case, <cit.> performed simulations at 200× 200 equispaced coordinates with a temporal resolution of N=10^3. Using a cut-off rate of K=10^5, the simulation of one sample path took approximately 100 hours while using three personal computers. This highlights the computational challenge of simulating multi-dimensional SPDEs with a large cut-off frequency, as it requires a substantial amount of computing power and time. 
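To fix ideas, the following R sketch illustrates the exact simulation recursion above for a single Fourier mode; the function and all inputs are placeholders for illustration and do not reflect the package implementation discussed below.

```r
# Sketch of the exact recursion for one Fourier mode x_k on the grid
# t = 0, Delta_n, ..., N * Delta_n (building block of the truncation method).
simulate_mode <- function(lambda_k, sigma, alpha, Delta_n, N, x0 = 0) {
  x <- numeric(N + 1)
  x[1] <- x0                                   # x_k(0) = <xi, e_k>_vartheta
  decay <- exp(-lambda_k * Delta_n)
  sd_step <- sigma * sqrt((1 - exp(-2 * lambda_k * Delta_n)) / (2 * lambda_k^(1 + alpha)))
  for (i in 1:N) {
    x[i + 1] <- x[i] * decay + sd_step * rnorm(1)   # exact conditional distribution
  }
  x
}
# A truncated solution sums x_k(t) * e_k(y) over k in {1, ..., K}^d, which is
# why large cut-offs K become computationally expensive in higher dimensions.
```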
However, the use of a sufficiently high cut-off frequency is crucial to ensure accurate and unbiased simulations of the SPDEs.These issues motivated the second approach, known as the replacement method. The author <cit.> build on the work of <cit.>, by replacing the higher Fourier modes instead of cutting them off. For introducing this approach, we assume ξ≡ 0. The main idea of the replacement approach is to change the Hilbert space, leading to the infinite Fourier representation in (<ref>). Therefore, we assume the spatial coordinates to be equidistant y∈{0,1/M,…,(M-1)/M,1}^d along each space dimension, i.e., y_j=j/M=(j_1/M,…,j_d/M) and j∈{0,…,M}^d=:𝒥. We define the inner product by⟨ f,g,⟩_ϑ,M:=1/M^d∑_j∈𝒥f(y_j)g(y_j)e^κy_j_1,where f,g:[0,1]^d. It holds that (e_k)_1≤ k<M from equation (<ref>) form an orthonormal system with respect to the inner product ⟨·,·⟩_ϑ,M.Hence, we can express a solution X_t as:X_t(y_j)=∑_m∈ℳU_m(t)e_m(y_j), with U_m(t)=⟨ X_t,e_m⟩_ϑ,M,where ℳ={1,…,M-1}^d. Note, that e_m(y_j)=0, if m=(m_1,…,m_d) contains at least one entry m_l, which is either zero or M, i.e. m_l∈{0,M}, for a l∈{1,…,d}. Using the Fourier representation, as given in equation (<ref>), we have U_m(t)=∑_k∈^dx_k(t)⟨ e_k,e_m⟩_ϑ,M.Let k∈^d, then we can decompose the inner product by⟨ e_k,e_m⟩_ϑ,M=2^d/M^d∑_j∈𝒥∏_l=1^dsin(π k_l y_j_l)sin(π m_ly_j_l)=∏_l=1^d⟨ẽ_k_l,ẽ_m_l⟩_ϑ,M,1,where ẽ_k and ⟨·,·⟩_ϑ,M,1 denote the respective one-dimensional orthonormal basis and inner product as defined in <cit.>. Thereby, we also know that⟨ẽ_k,ẽ_m⟩_ϑ,M,1=1,if k=m+2lM or k=2M-m+2lm for l∈_0, k∈ and m∈{1,…,M-1}. Therefore, the index set ℐ_m is given by the following d-fold Cartesian product:ℐ_m:=_l=1^dℐ_m_l,1,where ℐ_k,1 denotes the one-dimensional index set introduced by <cit.>, given byℐ_k,1^+={k+2lM,l∈_0}, ℐ_k,1^-={2M-k+2lM,l∈_0}, ℐ_k,1:=ℐ_k,1^+∪ℐ_k,1^-,where k∈. Since x_ld=-x_l, for all l∈^d, we have U_m(t)=∑_k∈^dx_k(t)⟨ e_k,e_m⟩_ϑ,M=∑_l∈ℐ_mx_l(t), where x_l denotes the coordinate process from equation (<ref>).The covariances of the coordinate processes, given by(x_k(t_i),x_k(t_j)) =σ^2/2λ_k^1+αe^-λ_ki-jΔ_n(1-e^-2λ_kmin(i,j)Δ_n),are vanishing if λ_k∝k^2 is significantly larger than 1/Δ_N due to the presence of the exponential term. Therefore the coordinate processes (x_k(t_i))_1≤ i≤ N effectively behave like i.i.d. centred normal random variables, with a variance:(x_k(t_i))≈σ^2/2λ_k^1+α,for a sufficiently large k∈^d. Analogously to <cit.>, we choose a bound L∈ and replace all coordinate processes (x_k) with k∉(0, LM)^d by a vector of independent normal random variables (z_l)_l∈^d with variance σ^2/(2λ_l^1+α), i.e.:U_m(t)=∑_l∈ℐ_ml∈ (0,LM)^dx_l(t)+∑_l∈ℐ_ml∉ (0, LM)^dz_l(t). Since the normal distribution is stable with respect to summation, we can replace the sum of the normal random variables with centred normal random variables R_m∼𝒩(0,s_m^2), wheres_m^2=∑_l∈ℐ_m l∉ (0, LM)^dσ^2/2λ_l^1+α.By equation (<ref>), it is evident that the series in s_m^2 converges.In the one-dimensional case, <cit.> developed a formula to precisely compute the one-dimensional replacement variance. One key advantage of this formula is its closed form, which enables rapid computation with minimal computational time.However, in the multivariate case, the series becomes more intricate due to the additional exponent 1+α and the squaring of the summation indices. This complexity renders direct application of related series, such as the multiple zeta function or its extension, the multiple Lerch zeta function, impractical, cf. <cit.> or <cit.>. 
Consequently, we currently resort to numerical approximation methods to estimate the variance s_m^2, given by s_m^2≈∑_l∈ℐ_m, l∈ (0,KM)^d\ (0,LM)^dσ^2/(2λ_l^1+α)=:s̃_m, where K>L, K∈ℕ, denotes the cut-off of the approximation. The multi-dimensional replacement method is then given by X_t_i(y_j)=∑_m∈ℳU_m(t_i)e_m(y_j), where U_m(t_i)=∑_l∈ℐ_m, l∈ (0,LM)^dx_l(t_i)+R̃_m(i), where R̃_m(i)∼𝒩(0,s̃_m) denotes the respective replacement random variable with the cut-off variance s̃_m and t_i+1-t_i=1/N, where i=1,…,N. In this numerical approach, the quality of the simulation is highly dependent on the chosen variance cut-off K, as this cut-off affects the quality of the replacements R̃_m. If K is selected too small, it will result in a negative bias in the simulations. Therefore, it is essential to select an appropriate value for K carefully, to ensure accurate and reliable simulations without introducing any significant bias. In Figure <ref>, we present a simulation of a two-dimensional SPDE model on a grid with N=10^4 temporal points and M=10 spatial points on each axis. The top row displays a comparison between the theoretical values, as per Proposition <ref>, and the sample mean of the rescaled realized volatility for three different cut-off values: K=20, 100, 1500. The bottom row illustrates the corresponding deviations between the theoretical predictions and the empirical outcomes. Notably, for the case K=20 a significant negative bias is observed, while the bias diminishes as the cut-off frequency increases. An implementation of this method in the R programming language is available in the R package at <https://github.com/pabolang/SecondOrderSPDEMulti>. When performing a Monte Carlo study, the variance s_m needs to be calculated only once. Since the runtime for larger values of K can be enormous, we have implemented an option in the simulation function of this R package that allows reusing a precomputed variance s_m, which dramatically reduces the runtime of a Monte Carlo study.
§.§ Monte Carlo simulation study
To illustrate the central limit theorem described in Proposition <ref>, we conducted a Monte Carlo study. In this study, we simulated a 2-dimensional SPDE model based on equation (<ref>). Each simulation was performed on an equidistant grid in both time and space, with N=10^4 time steps and M=10 spatial steps, resulting in a total of 121 spatial points. The simulation employed the following parameter values: ϑ_0=0, ν=(6,0), η=1, σ=1, and α' taking values from the set {4/10,1/2,6/10}, corresponding to three distinct damping scenarios. In each scenario, 1000 Monte Carlo iterations were executed. We utilized the replacement method detailed in Section <ref>, with L=10, and for α'=4/10 and α'=1/2 we set a cut-off frequency of K=10^3, while for α'=6/10 we used K=1500. Figure <ref> presents a comparison between the empirical distribution of each scenario and the asymptotic normal distribution as stipulated in Proposition <ref>. To estimate the kernel density, we employed a Gaussian kernel with Silverman's 'rule of thumb'. As discussed in Section <ref>, the replacement method introduced a notable negative bias due to the cut-off frequency K. To address this bias, we centred the data by utilizing the sample mean of the volatility estimations. This approach provides a clear basis for visually comparing the empirical and theoretical distributions.
All three scenarios exhibit a substantial fit, with the volatility estimator employing a spatial boundary of δ=0.05, resulting in 81 spatial points for the estimation. The sample means of the volatility estimations were found to be 0.986 for α'=4/10, 0.975 for α'=1/2, and 0.988 for α'=6/10. Figure <ref> depicts a comparison between the empirical distribution of each case and the asymptotic normal distribution as described in Corollary <ref>. The top row shows the simulation results for α'=4/10, and the bottom row presents the results for α'=6/10. Each row consists of three plots, which assess the goodness of fit between the kernel density estimation and the centred normal distribution, as outlined in Corollary <ref>. In these plots, grey represents the results for estimating the normalized volatility parameter σ_0^2, while the other panels in each row (yellow and brown) represent the results for the curvature parameters κ_1 and κ_2, respectively. To account for structural bias in the data, we centred the data by employing the sample mean of the corresponding estimates. In this simulation study, where N=10^4, we must adhere to the following restriction, as outlined in Assumption <ref>: M<N^((1-α')/(d+2))≈ 3.98 if α'=4/10, 3.16 if α'=1/2, and 2.51 if α'=6/10. As Assumption <ref> necessitates a minimum of three observations for the application of the estimator υ̂, we have chosen the following observation scheme: 𝒮_3:={(1/10,3/10),(4/10,2/10),(7/10,5/10)}. Indeed, this observation scheme satisfies Assumption <ref>, as is evident from the following calculation: |1 1/10 3/10; 1 4/10 2/10; 1 7/10 5/10| =0.12 ≠ 0, where |A| denotes the determinant of a matrix A∈^p× p, for p∈ℕ. For the cases α'∈{4/10,1/2}, we obtain that |𝒮_3|<N^((1-α')/(d+2)), whereas Assumption <ref> is (slightly) violated for α'=6/10, since |𝒮_3|>N^((1-α')/(d+2)). Nevertheless, we present the simulation results in Figure <ref> for the two cases α'=4/10 and α'=6/10 and observe that both scenarios exhibit a substantial fit. Since the results for the case α'=1/2 are comparable to the two presented cases, we omit this plot. The sample means of the respective estimations are summarized in Table <ref>. We close this section by providing density plots for the estimation of the parameter α'. Figure <ref> shows a comparison between the empirical distribution of each case and the asymptotic normal distribution as described in Proposition <ref>. The left panel shows the simulation results for α'=4/10, the middle panel displays the results for α'=1/2, and the right panel presents the results for α'=6/10. To account for structural bias in the data, we centred the data by employing the sample mean of the corresponding estimates. To estimate the damping parameter, we adopted a spatial threshold of δ=0.05, which led to the utilization of 81 spatial coordinates for the estimation. The parameter choices employed for the two-dimensional SPDE model are consistent with the simulation study presented earlier for the previous estimators. All three scenarios exhibit a substantial fit, where we observe a qualitative difference between lower and higher values of α'. This distinction can be attributed to the fact that α governs the Hölder regularity of the sample paths. Lower values of α result in rougher paths, thereby yielding a more accurate fit.
The sample means of the estimates are given by 0.393 for α'=4/10, 0.484 for α'=1/2 and 0.554 for α'=6/10.§ PROOFSWe begin by clarifying some notations used in this paper:x_1:=∑_l=1^d x_l, x:=(∑_l=1^dx_l^2)^1/2, x_∞:=max_l=1,…,dx_l.Note, that the introduced notations ·_2,·_∞ define a norm on ^d. However, the notations ·_0 and ·_1 do not define a norm, as they do not even map to the non-negative real numbers. Nevertheless, we use a norm notation to indicate an operation across all the spatial dimensions. For a measurable function f:^d we define the ℒ^p-norm byf_ℒ^p(D):=(∫_Df(x)^px)^1/p,where D⊆^d.Finally, we define the point-wise product by : ^d×^d ^d xy ↦ (x_1y_1,…,x_dy_d)^⊤.We say for k,j∈^d that they are not alike, i.e. k≠j, if there exists at least one index l_0∈{1,…,d} with k_l_0≠ j_l_0. In the following we use the decomposition of the increments x_k for k∈^d, given byΔ_ix_k =⟨ξ,e_k⟩_ϑ(e^-λ_kiΔ_n-e^-λ_k(i-1)Δ_n)+σλ_k^-α/2∫_0^(i-1)Δ_ne^-λ_k(iΔ_n-s)-e^-λ_k((i-1)Δ_n-s) W_s^k +σλ_k^-α/2∫_(i-1)Δ_n^iΔ_ne^-λ_k(iΔ_n-s) W_s^k=A_i,k+B_i,k+C_i,k,where A_i,k :=⟨ξ,e_k⟩_ϑ(e^-λ_kiΔ_n-e^-λ_k(i-1)Δ_n),B_i,k := σλ_k^-α/2∫_0^(i-1)Δ_ne^-λ_k((i-1)Δ_n-s)(e^-λ_kΔ_n-1) W_s^k,C_i,k :=σλ_k^-α/2∫_(i-1)Δ_n^iΔ_ne^-λ_k(iΔ_n-s) W_s^k.§.§ Proofs of Section 3This section is structured in two parts. The first parts provides the proofs for calculating the expected value of the rescaled realized volatilities and the decay of the autocovariance, as stated in Propositions <ref> and <ref>. The second part proofs the central limit theorem for the estimator σ̂^2. §.§.§ Proofs of Propositions <ref> and <ref>For proving Proposition <ref>, we need some auxiliary lemmas. Let f:[0,∞) be twice continuously differentiable with x^d-1f(x^2)_ℒ^1([0,∞)), x^df'(x^2)_ℒ^1([1,∞)) and x^d+1f”(x^2)_ℒ^1([1,∞))≤ C for some C>0, then it holds: (i) Δ_n^d/2∑_k∈^df(λ_kΔ_n)=1/2^d(πη)^d/2Γ(d/2)∫_0^∞ x^d/2-1f(x) x-∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γf(π^2ηz^2)z +(∫_0^√(Δ)_nr^d-1f(r^2) r∨Δ_n∫_√(Δ)_n^1 r^d-1f'(r^2) r∨Δ_n∫_√(Δ)_n^1 r^d+1f”(r^2) r),where B_γ defined in equation (<ref>).(ii) For {j_1,…,j_l}⊂{1,…,d}, γ_j,l∈{0,1}^d, where (γ_j,l)_i=1_i∈{j_1,…,j_l}, with i=1,…,d and l=1,…,(d-1), we have Δ_n^d/2∑_k∈^df(λ_kΔ_n)cos(2π k_j_1y_j_1)·…·cos(2π k_j_ly_j_l)=(-1)^l∫_B_γ_j,lf(π^2ηz^2)z +(max_k=0,…,lΔ_n^k/2+1/δ^l+1∫_√(Δ)_n^1r^d-k+1|f”(r^2)| r∨max_k=0,…,lΔ_n^k/2+1/δ^l+1∫_√(Δ)_n^1 r^d-k-1|f'(r^2)| r) + ( Δ_n^(l+1)/2/δ^l∫_√(Δ)_n^1 r^d-l|f'(r^2)| r). (iii) For {j_1,…,j_l}={1,…,d}, i.e. l=d, we have Δ_n^d/2∑_k∈^df(λ_kΔ_n)cos(2π k_1y_1)·…·cos(2π k_dy_d)=(Δ_n^d/2f(Δ_n))+(Δ_n^d/2/δ^d∫_√(Δ)_n^1 rf'(r^2) r) +(max_k=0,…,d-1Δ_n^k/2+1/δ^d+1∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨max_k=0,…,d-1Δ_n^k/2+1/δ^d+1∫_√(Δ)_n^1 r^d-k-1f'(r^2) r). In particular, it holds for a γ̃∈{0,1}^d with γ̃_1=l and 1≤ l≤ d-1 that∫_B_γ̃f(π^2ηz^2)z=(Δ_n^l/2∫_√(Δ)_n^1r^d-1-lf(r^2) r),and ∑_γ_1=1γ∈{0,1}^d^d∫_B_γf(π^2ηz^2)z=(max_l=1,…,d-1Δ_n^l/2∫_√(Δ)_n^1r^d-1-lf(r^2) r∨∫_0^√(Δ)_nr^d-1f(r^2) r). We begin this proof by making the substitution z_l^2=k_l^2Δ_n, such thatλ_kΔ_n=π^2η∑_l=1^dz_l^2+Δ_n(∑_l=1^d(ν_l^2/4η)-ϑ_0).Subsequently, employing the Taylor expansion with the Lagrange remainder, we obtain thatf(λ_kΔ_n)=f(π^2η∑_l=1^dz_l^2)+f'(ξ)(λ_kΔ_n-π^2η∑_l=1^dz_l^2)=f(π^2η∑_l=1^dz_l^2)+(Δ_n).For k∈^d we define:a_k:=(a_k_1,…,a_k_d)∈^d_+, with a_k_l:=√(Δ)_n(k_l+1/2),where l=1,…,d and [a_k-1,a_k]:=[a_k_1-1,a_k_1]×…×[a_k_d-1,a_k_d]⊂ (0,∞)^d.Note, that a_k_l-a_k_l-1=√(Δ)_n for l=1,…,d and a_0:=√(Δ)_n/2. 
Moreover, by defining f̃(x):=f(π^2η x^2), we observe that Δ_n^d/2∑_k∈^df(λ_kΔ_n)-∫_[√(Δ)_n/2,∞)^df(π^2ηz^2)z= Δ_n^d/2∑_k_1=1^∞⋯∑_k_d=1^∞ f(π^2ηΔ_n∑_l=1^dk_l^2 )-∑_k_1=1^∞⋯∑_k_d=1^∞∫_a_k_1-1^a_k_1⋯∫_a_k_d-1^a_k_d f(π^2η∑_l=1^dz_l^2) z_1⋯ z_d +𝒪(Δ_n) =∑_k∈^d∫_a_k-1^a_kf(π^2ηΔ_nk^2)-f(π^2ηz^2)z+(Δ_n)=∑_k∈^d∫_a_k-1^a_kf̃(√(Δ)_nk)-f̃(z)z+(Δ_n)=:T_1+(Δ_n),where · denotes the euclidean norm. Define the function g:_+^d_+, with g(x)=f̃(x). Since √(Δ)_nk represents the mid-point of the interval [a_k-1,a_k] for a k∈^d, we can apply a Taylor expansion at the point √(Δ)_nk, leading to the following expression:g(√(Δ)_nk)-g(z) =g(√(Δ)_nk)-(g(√(Δ)_nk)+∇ g(√(Δ)_nk)^⊤(z-√(Δ)_nk) +1/2(z-√(Δ)_nk)^⊤ H_g(ξ_k)(z-√(Δ)_nk)),where ∇ g denotes the gradient of g, H_g the Hessian-matrix of g and ξ_k∈[a_k-1,a_k].Let us introduce the shorthand notation g'_l(z):=∂ g(z)/(∂ z_l), which represents the partial derivative of g(z) with respect to z_l. Then, we have:∫_a_k-1^a_k∇ g(√(Δ)_nk)^⊤(z-√(Δ)_nk)z =Δ_n^(d-1)/2∑_l=1^d g_l'(√(Δ)_nk)∫_a_k_l-1^a_k_l(z_l-√(Δ)_nk_l) z_l=0.Since every term in the Taylor expansion from equation (<ref>) disappears, we proceed by redefining the term T_1 as follows:T_1 :=-∑_k∈^d∫_a_k-1^a_k1/2(z-√(Δ)_nk)^⊤ H_g(ξ_k)(z-√(Δ)_nk))z.Additionally, the order of the term T_1 will be analysed in display (<ref>). For now, our primary focus is on the main term, which can expressed by:Δ_n^d/2∑_k_1=1^∞⋯∑_k_d=1^∞ f(λ_(k_1,…,k_d)Δ_n)=∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞ f(π^2η∑_l=1^dz_l^2) z_1⋯ z_d+(T_1∨Δ_n)=∫_0^∞⋯∫_0^∞ f(π^2η∑_l=1^dz_l^2) z_1⋯ z_d-∫_^d_+\[√(Δ)_n/2,∞)^df(π^2ηz^2)z+(T_1∨Δ_n).Before delving into the analysis of the compensation integral, defined by:ℐ:=∫_^d_+\[√(Δ)_n/2,∞)^df(π^2ηz^2)z,and the error term T_1, let us first examine a transformation of the main integral.To facilitate our analysis, we employ d-dimensional spherical coordinates and we have:∫_0^∞⋯∫_0^∞ f(ηπ^2∑_l=1^dz_l^2) z_1⋯ z_d=∫_0^∞ r^d-1f(π^2η r^2) r∫_0^π/2sin^d-2(φ_1)φ_1⋯∫_0^π/2sin(φ_d-2)φ_d-2∫_0^π/2φ_d-1.For l∈, it holds that∫_0^π/2sin^l(x) x=√(π)Γ(1+l/2)/2Γ(1+l/2),where Γ(x) denotes the Gamma function. Furthermore, we obtain that∏_l=1^d-2∫_0^π/2sin^l(x) x=π^d/2-1/2^d-2Γ(d/2).Thus, we have:∫_0^∞⋯∫_0^∞ f(ηπ^2∑_l=1^dz_l^2) z_1⋯ z_d=π^d/2/2^d-1Γ(d/2)∫_0^∞ r^d-1f(π^2η r^2) rand therefore obtain:Δ_n^d/2∑_k∈^df(λ_kΔ_n) =1/2^d(πη)^d/2Γ(d/2)∫_0^∞ x^d/2-1f(x) x-ℐ+(T_1∨Δ_n). To analyse the compensation term ℐ, we initiate the process by decomposing the set ^d_+\[√(Δ)_n/2,∞)^d. Let γ∈{0,1}^d\{0}^d, where γ=(γ_1,…,γ_d) and let ψ(x)=1_[0,√(Δ)_n/2)(x). With these definitions, we can introduce the following set:B_γ:={x∈[0,∞)^d : x_1∈ψ^-1(γ_1),…,x_d∈ψ^-1(γ_d)}⊂ [0,∞)^d.Hence, we can decompose the set ^d_+\[√(Δ)_n/2,∞)^d using the following disjoint union:^d_+\[√(Δ)_n/2,∞)^d=⋃_γ_1=1γ∈{0,1}^d^dB_γ.which enables the decomposition of the integral ℐ as follows:ℐ =∫_^d_+\[√(Δ)_n/2,∞)^df(π^2ηz^2)z=∑_γ_1=1γ∈{0,1}^d^d∫_B_γf(π^2ηz^2)z. Let us now focus on two cases. Firstly, the scenario where γ_1<d, and secondly, the case where γ_1=d. In the first case, we assume that γ_1=l, where l ∈{1, …, d-1}. This implies that there exist indices {i_1,…,i_l}⊂{1,…,d} and {1,…,d}\{i_1,…,i_l}={j_1,…,j_d-l} with γ_i_k = 1 for k = 1, …, l and γ_j_k = 0 for k = 1, …, d-l. Moreover, we assume that i_1 < … < i_l and j_1 < … < j_d-l.Although we are integrating over an area corresponding to an infinite hyperrectangle, transforming into d-dimensional spherical coordinates provides a convenient representation, facilitating the analysis of the integral's order. 
During the transformation into d-dimensional spherical coordinates, we can always ensure that the angles φ_1, …, φ_d-1 are bounded by (0, π/2), and consequently, we have:∫_B_γf(π^2ηz^2)z=(∫_√(Δ)_n/2^∞ r^d-1f(r^2) r), where we used the fact that the radius r is always greater or equal than √(Δ)_n/2. However, given that l dimensions vanish when integrating and as n tends to infinity, we can determine the order more precisely. Therefore, we can always consider the transformation:x_i_1 =rcos(φ_1), x_i_2=rsin(φ_1)cos(φ_2), ⋯ x_i_l=rsin(φ_1)⋯sin(φ_l-1)cos(φ_l), … x_j_1 =r∏_k=1^lsin(φ_k)cos(φ_l+1), ⋯ x_j_d-l-1=r∏_k=1^d-2sin(φ_k)cos(φ_d-1), x_j_d-l=r∏_k=1^d-1sin(φ_k),which allows without loss of generality to set i_1=1,…,i_l=l and j_1=l+1,… j_d-l=d. We can bound the angles φ_1, …, φ_l as follows:0≤ x_k=rcos(φ_k)∏_l=1^k-1sin(φ_l)≤√(Δ)_n/2 ⇔arccos(√(Δ)_n/2r∏_l=1^k-1sin(φ_l))≤φ_k≤π/2,where k=1,…,l and 1≤ l≤ (d-1). By rearranging the integration order, we have ∫_B_γf(π^2ηz^2)z= ∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^√(Δ)_n/2⋯∫_0^√(Δ)_n/2 f(π^2η∑_l=1^dz_l^2) z_i_l⋯ z_i_1 z_j_1⋯ z_j_d-l≤∫_√(Δ)_n/2^∞ f(π^2η r^2)∫_0^π/2⋯∫_0^π/2∫_arccos(a_1)^π/2⋯×∫_arccos(a_l)^π/2J_dφ_l⋯φ_1φ_l+1⋯φ_d-1 r,where b_1:=√(Δ)_n/2r, ⋯ ,b_l:=√(Δ)_n/2r∏_k=1^l-1sin(φ_k).Note, that we can use the following inequality for the determinant J_d:J_d≤ r^d-1sin(φ_1)^l-1sin(φ_2)^l-2⋯sin(φ_l-1). By utilizing the identity π/2 - arccos(x) = arcsin(x) and the inequality arcsin(x) ≤ xπ/2, for x ∈ [0,1], we deduce that∫_arccos(b_1)^π/2⋯∫_arccos(b_l)^π/2J_dφ_l⋯φ_1≤ r^d-1∫_arccos(b_1)^π/2⋯∫_arccos(b_l-1)^π/2∫_arccos(b_l)^π/2φ_lsin(φ_1)^l-1⋯sin(φ_l-1)φ_l-1⋯φ_1≤r^d-1π/2∫_arccos(b_1)^π/2⋯∫_arccos(b_l-1)^π/2sin(φ_1)^l-1⋯sin(φ_l-1)√(Δ)_n/2rsin(φ_1)⋯sin(φ_l-1)φ_l-1⋯φ_1≤ CΔ_n^l/2r^d-1-l.Therefore, we have ∫_B_γf(π^2ηz^2)z=(Δ_n^l/2∫_√(Δ)_n^1r^d-1-lf(r^2) r).Note, that this order applies to the derivatives as well, i.e.:∫_B_γh(π^2ηz^2)z =(Δ_n^l/2∫_√(Δ)_n^1r^d-1-lh(r^2) r),where h=f,f',f”.Now, let us consider the last case, where γ_1=d. Since the radius is bounded by {√(Δ)_n/2}^d_2=√(dΔ)_n/2, we can perform a transformation into d-dimensional spherical coordinates using the following inequality:∫_B_γf(π^2ηz^2)z ≤∫_z_2≤√(dΔ)_n/2 z∈ [0,∞)^df(π^2ηz_2^2) z=(∫_0^√(Δ)_nr^d-1f(r^2) r).Consequently, we obtain the following order for the compensation integral ℐ:ℐ =∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γf(π^2ηz^2)z+(∫_0^√(Δ)_nr^d-1f(r^2) r)=(max_l=1,…,d-1Δ_n^l/2∫_√(Δ)_n^1r^d-1-lf(r^2) r ∨∫_0^√(Δ)_nr^d-1f(r^2) r). Regarding the error term T_1 from equation (<ref>), we obtain the following expression for z∈[a_k-1,a_k] and k∈^d:(z-√(Δ)_nk)^⊤ H_g(z)(z-√(Δ)_nk)≤Δ_n/4∑_l_1=1^d∑_l_2=1^d ∂^2/∂ z_l_1∂ z_l_2f(π^2ηz_2^2) =C(Δ_n∑_l_1=1^d∑_l_2=1^d z_l_1z_l_2 f”(π^2ηz_2^2)+dΔ_n/2f'(π^2ηz_2^2))≤ C'dΔ_n (z_2^2 f”(π^2ηz_2^2)+f'(π^2ηz_2^2)),where C,C'>0 are suitable constants. Hence, we have T_1=(Δ_n∫_[√(Δ)_n/2,∞)^dz_2^2 f”(z_2^2)z∨Δ_n∫_[√(Δ)_n/2,∞)^df'(z_2^2)z). Once more, through the transformation into d-dimensional spherical coordinates, we can deduce the order of the Lagrange remainder T_1 as follows:T_1 =(Δ_n∫_√(Δ)_n^1 r^d+1f”(r^2) r∨Δ_n∫_√(Δ)_n^1 r^d-1f'(r^2) r),which completes the proof of the first assertion.We begin the proof of (ii) by establishing the following identity:∏_l=1^ncos(x_l)=1/2^n-1∑_u∈ C_ncos(u^⊤x), where x=(x_1,…,x_n)^⊤ and C_n:={1}×{-1,1}^n-1, with C_n=2^n-1 and n≥ 1. We demonstrate that this identity can be derived using induction. For n∈{1,2}, the identity is readily observed by utilizing the elementary trigonometric identity cos(x ± y) = cos(x)cos(y) ∓sin(x)sin(y). 
Now, let us assume that the advanced identity holds for an arbitrary n∈ℕ. For n+1, we consider x=(y,z)∈^n+1, where y∈^n and z ∈. Then we have:1/2^n∑_u∈ C_n+1cos(u^⊤x) =1/2^n∑_u∈ C_n(cos(u^⊤y+z)+(cos(u^⊤y-z))=1/2^n-1∑_u∈ C_ncos(u^⊤y)cos(z)=∏_l=1^n+1cos(x_l).By utilizing equation (<ref>), we arrive at the following structure:Δ_n^d/2∑_k∈^df(λ_kΔ_n)cos(2π k_j_1y_j_1)·…·cos(2π k_j_ly_j_l) =Δ_n^d/2/2^l-1∑_k∈^df(λ_kΔ_n)∑_u∈ C_lcos(2πu^⊤ (yk)_j,l) =(Δ_n^d/2/2^l-1∑_k∈^dg(√(Δ)_nk)∑_u∈ C_le^2πu^⊤ (yk)_j,l)+(Δ_n),where (yk)_j,l:=(k_j_1y_j_1,…,k_j_ly_j_l) and {j_1,…,j_l}⊂{1,…,d} and l=1,…,(d-1). Furthermore, it holds with u_i∈{-1,1}, i∈, that∫_a_k-1^a_ke^ 2π∑_i=1^lu_iy_j_iz_j_iΔ_n^-1/2z =Δ_n^d/2/(2π)^l∏_i=1^lu_iy_j_i∏_i=1^l(e^2π a_k_j_iu_iy_j_iΔ_n^-1/2-e^2π a_k_j_i-1u_iy_j_iΔ_n^-1/2)=Δ_n^d/2∏_i=1^lsin(π y_j_i)/π^l∏_i=1^ly_j_ie^ 2π∑_i=1^l u_ik_j_iy_j_i.Defining y_j,l:=(y_j_1,…,y_j_l) and χ:=χ_j,l:^l^d, where the i-th component (χ_j,l(x))_i of χ_j,l(x) is zero if i∈{1,…,d}\{j_1,…,j_l} or else the coordinate x_j_i, lead to:(Δ_n^d/2/2^l-1∑_k∈^dg(√(Δ)_nk)∑_u∈ C_le^2πu^⊤ (yk)_j,l)=∑_u∈ C_l(π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)ℱ[∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]](-2πχ(uy_j,l)Δ_n^-1/2))=:T_2+T_3,where ℱ denotes the Fourier transformation for a f∈ℒ^1(^d). Since we analyse functions f:[0,∞)^d the Fourier transformation is given by integrating over [0,∞)^d. Hence, we define T_2:=∑_u∈ C_lT_2,u, T_3:=∑_u∈ C_lT_3,u, where the components are given by:T_2,u :=(π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)ℱ[∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-(-1)^lg1_B_γ_j,l](-2πχ(uy_j,l)Δ_n^-1/2)),T_3,u :=(-1)^l(π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)ℱ[g1_B_γ_j,l](-2πχ(uy_j,l)Δ_n^-1/2)),with B_γ is defined in equation (<ref>) and γ_j,l∈{0,1}^d, where (γ_j,l)_i=1 if i∈{j_1,…,j_l} or zero otherwise.Beginning with the analysis of the term T_3, we have for 1≤ l≤ (d-1) that(-1)^lT_3= ∑_u∈ C_l(π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)ℱ[g1_B_γ_j,l](-2πχ(uy_j,l)Δ_n^-1/2))=π^l∏_i=1^ly_j_i/∏_i=1^lsin(π y_j_i)∫_B_γ_j,lg(z)∑_u∈ C_l1/2^l-1cos(2πχ(uy_j,l)^⊤zΔ_n^-1/2)z=π^l∏_i=1^ly_j_i/∏_i=1^lsin(π y_j_i)∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^√(Δ)_n/2cos(2π y_j_lz_j_lΔ_n^-1/2)⋯ ×∫_0^√(Δ)_n/2 g(z)cos(2π y_j_1z_j_1Δ_n^-1/2) z_j_1⋯ z_j_l z_i_1⋯ z_i_d-l. To simplify the notation, we introduce g(z_1, …, z_d) = g̃(z_j_1, …, z_j_l, z_i_1, …, z_i_d-l). 
Moreover, we can apply integration by parts to obtain:π^l∏_i=1^ly_j_i/∏_i=1^lsin(π y_j_i)∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^√(Δ)_n/2cos(2π y_j_lz_j_lΔ_n^-1/2)⋯ ×∫_0^√(Δ)_n/2 g(z)cos(2π y_j_1z_j_1Δ_n^-1/2) z_j_1⋯ z_j_l z_i_1⋯ z_i_d-l=Δ_n^1/2π^l-1∏_i=2^ly_j_i/2∏_i=2^lsin(π y_j_i)∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^√(Δ)_n/2cos(2π y_j_lz_j_lΔ_n^-1/2)⋯ ×∫_0^√(Δ)_n/2g̃(√(Δ)_n/2,z_j_2,…,z_j_l,z_i_1,…,z_i_d-l)cos(2π y_j_2z_j_2Δ_n^-1/2) z_j_2⋯ z_j_l z_i_1⋯ z_i_d-l -π^l∏_i=1^ly_j_i/∏_i=1^lsin(π y_j_i)∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^√(Δ)_n/2cos(2π y_j_lz_j_lΔ_n^-1/2)⋯ ×∫_0^√(Δ)_n/2 g'_z_j_1(z)sin(2π y_j_1z_j_1Δ_n^-1/2)/2π y_j_1Δ_n^-1/2 z_j_1⋯ z_j_l z_i_1⋯ z_i_d-l.By induction, we have ∑_u∈ C_l(π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)ℱ[g1_B_γ_j,l](-2πχ(uy_j,l)Δ_n^-1/2))=(Δ_n^1/2/2)^l∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞g̃(√(Δ)_n/2,…,√(Δ)_n/2,z_i_1,…,z_i_d-l) z_i_1⋯ z_i_d-l-∑_k=1^l I_k,where we infer by a simple transformation, thatI_k :=Δ_n^(k-1)/2π^l-k+1∏_i=k^ly_j_i/2^k-1∏_i=k^lsin(π y_j_i)∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^√(Δ)_n/2cos(2π y_j_lz_j_lΔ_n^-1/2)⋯ ×∫_0^√(Δ)_n/2(g̃'_z_j_k(√(Δ)_n/2,…,√(Δ)_n/2,z_j_k,…,z_j_l,z_i_1,…,z_i_d-l) ×sin(2π y_j_kz_j_kΔ_n^-1/2)/2πy_j_kΔ_n^-1/2) z_j_k⋯ z_j_l z_i_1⋯ z_i_d-l=Δ_n^(l+1)/2π^l-k∏_i=k+1^ly_j_i/2^k∏_i=k^lsin(π y_j_i)∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^1/2cos(2π y_j_lz_j_l)⋯ ×∫_0^1/2(g̃'_z_j_k(√(Δ)_n/2,…,√(Δ)_n/2,z_j_kΔ_n^1/2,…,z_j_lΔ_n^1/2,z_i_1,…,z_i_d-l) ×sin(2π y_j_kz_j_k)) z_j_k⋯ z_j_l z_i_1⋯ z_i_d-l.In order to determine the order of the terms I_k we proceed by re-transforming the integral as follows:I_k =(Δ_n^(l+1)/2/δ^l-k+1∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^1/2⋯ ×∫_0^1/2g̃'_z_j_k(√(Δ)_n/2,…,√(Δ)_n/2,z_j_kΔ_n^1/2,…,z_j_lΔ_n^1/2,z_i_1,…,z_i_d-l) z_j_k⋯ z_j_l z_i_1⋯ z_i_d-l)=(Δ_n^k/2/δ^l-k+1∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^√(Δ)_n/2⋯∫_0^√(Δ)_n/2z_j_kf'(∑_i=k^l z_j_i^2+∑_j=1^d-lz_i_j^2) z_j_k⋯ z_j_l z_i_1⋯ z_i_d-l).Analogously to the determination of the error term ℐ, we transform into (d-k+1)-dimensional spherical coordinates and obtain with 1≤ k≤ l≤ (d-1) thatI_k=(Δ_n^(l+1)/2/δ^l-k+1∫_√(Δ)_n^1 r^d-lf'(r^2) r),which implies:∑_k=1^l I_k=(Δ_n^(l+1)/2/δ^l∫_√(Δ)_n^1 r^d-lf'(r^2) r).Next, we have (Δ_n^1/2/2)^l∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞g̃(√(Δ)_n/2,…,√(Δ)_n/2,z_i_1,…,z_i_d-l) z_i_1⋯ z_i_d-l=∫_√(Δ)_n/2^∞⋯∫_√(Δ)_n/2^∞∫_0^√(Δ)_n/2⋯∫_0^√(Δ)_n/2g̃(√(Δ)_n/2,…,√(Δ)_n/2,z_i_1,…,z_i_d-l) z_j_1… z_j_l z_i_1⋯ z_i_d-l=:J_1.Utilizing Taylor expansion, we can decompose g as follows:g(z_1,…,z_d) =g̃(z_j_1,…,z_j_l,z_i_1,…,z_i_d-l)=g̃(√(Δ)_n/2,…,√(Δ)_n/2,z_i_1,…,z_i_d-l)+∑_k=1^lg̃'_z_j_k(ξ_1,…,ξ_l,z_i_1,…,z_i_d-l)(z_j_k-√(Δ)_n/2),where∇_l:=[ ∂/∂ z_j_1; ⋮; ∂/∂ z_j_l;id; ⋮;id ], a:=[ √(Δ)_n/2;⋮; √(Δ)_n/2;z_i_1;⋮;z_i_d-l ], z̃:=[ z_j_1; ⋮; z_j_l; z_i_1; ⋮; z_i_d-l ],and ξ_1,…,ξ_l∈[0,√(Δ)_n/2]. Thus, it holds thatJ_1-∫_B_γ_j,lg(z)z ≤∫_B_γ_j,lg̃(√(Δ)_n/2,…,√(Δ)_n/2,z_i_1,…,z_i_d-l)-g(z)z=(√(Δ)_n∑_k=1^l∫_B_γ_j,lz_j_kf'(z_2^2) z)=(Δ_n^(l+1)/2∫_√(Δ)_n^1 r^d-lf'(r^2) r).Hence, we have:J_1 = ∫_B_γ_j,lg(z)z +(Δ_n^(l+1)/2∫_√(Δ)_n^1 r^d-lf'(r^2) r),and therefore, we derive the following:T_3 =(-1)^l∫_B_γ_j,lg(z)z +( Δ_n^(l+1)/2/δ^l∫_√(Δ)_n^1 r^d-lf'(r^2) r). To analyse the order of the term T_2, we begin by distinguishing between two cases: when l is an odd natural number and when l is an even natural number. Considering that the term T_2,u corresponds to the Fourier transform of the function∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-(-1)^lg1_B_γ_j,l,we can analyse the order of this term by adding the following terms:∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-(-1)^lg1_B_γ_j,l=∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-(-1)^lg·(1_B_γ_j,l+1_(√(Δ)_n/2,∞)^d-1_(√(Δ)_n/2,∞)^d). 
If l is odd, we have ∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-(-1)^lg1_B_γ_j,l=∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-g1_(√(Δ)_n/2,∞)^d +g1_(√(Δ)_n/2,∞)^d∪ B_γ_j,l,since we have disjoint sets. For the case where l is even, we find that∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-(-1)^lg1_B_γ_j,l =∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-g1_(√(Δ)_n/2,∞)^d + g·(1_(√(Δ)_n/2,∞)^d -1_B_γ_j,l)≤∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-g1_(√(Δ)_n/2,∞)^d +g1_(√(Δ)_n/2,∞)^d∪ B_γ_j,l.Therefore, we can decompose T_2 for general l=1,…,d-1 into the following parts:T_2,u ≤(π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)ℱ[∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-g1_(√(Δ)_n/2,∞)^d](-2πχ(uy_j,l)Δ_n^-1/2)) +(π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)ℱ[g1_(√(Δ)_n/2,∞)^d∪ B_γ_j,l](-2πχ(uy_j,l)Δ_n^-1/2))=:S_1,u+S_2,u.Furthermore, we define S_i:=∑_u∈ C_lS_i,u, for i=1,2. Starting with S_2, it holds for q∈{1,2} thatx_j^qℱ[g](x)=ℱ[∂^q/∂ x_j^qg](x)≤∂^q/∂ x_j^q g(x)_ℒ_1 and ℱ[g](x)≤x_j^-q∂^q/∂ x_j^q g(x)_ℒ_1,where we use x_j≠ 0 in the last inequality, for j=1,…,d and x∈^d. Hence, we have:S_2,u =( Δ_nπ^l-2∏_i=2^ly_j_i/2^l+1y_j_1∏_i=1^lsin(π y_j_i)∂^2/∂ x_j_1^2 g1_[√(Δ)_n/2,∞)^d∪ B_γ_j,l_ℒ_1). To compute the ℒ_1 norm, we first obtain the following:||∂^2/∂ z_j_1^2 g1_[√(Δ)_n/2,∞)^d∪ B_γ_j,l||_ℒ_1 =∫_[√(Δ)_n/2,∞)^d∪ B_γ_j,l∂^2/∂ z_j_1^2g(z)z=∫_([√(Δ)_n/2,∞)∪̇(0,√(Δ)_n/2) )^l× [√(Δ)_n/2,∞)^d-l∂^2/∂ z_j_1^2g(z)z̃,where z̃=(z_j_1,…,z_j_l,z_i_1,…,z_i_d-l). At this point, it is possible that none of the integration variables z_j_1,…,z_j_l fall within the range (0,√(Δ)_n/2), or one to all of them. Assume we have 0≤ k≤ l of these integration variable within the range (0,√(Δ)_n/2), then there are lk possible combinations to choose k variables from z_j_1,…,z_j_l. As each choice results in the same order of the integral, which is evident by the argumentation followed by display (<ref>), it is sufficient to analyse the order of the integral, where we set the first k integration variables z_j_1,…,z_j_k∈ (0,√(Δ)_n/2). Hence, we get:||∂^2/∂ z_j_1^2 g1_[√(Δ)_n/2,∞)^d∪ B_γ_j,l||_ℒ_1 =∫_([√(Δ)_n/2,∞)∪̇(0,√(Δ)_n/2) )^l× [√(Δ)_n/2,∞)^d-l∂^2/∂ z_j_1^2g(z)z̃=(max_k=1,…,lΔ_n^k/2∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨∫_√(Δ)_n^∞ r^d+1f”(r^2) r ∨max_k=1,…,lΔ_n^k/2∫_√(Δ)_n^1r^d-k-1f'(r^2) r∨∫_√(Δ)_n^∞ r^d-1f'(r^2) r).Thus, we infer the following:S_2 =∑_u∈ C_lS_2,u=(max_k=0,…,lΔ_n^k/2+1/δ^l+1∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨max_k=0,…,lΔ_n^k/2+1/δ^l+1∫_√(Δ)_n^1 r^d-k-1f'(r^2) r),where we have used that y∈[δ,1-δ]^d. We commence the analysis of the term S_1. Here, we find thatS_1,u=π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)(∑_k∈^d∫_a_k-1^a_k(g(√(Δ)_nk)-g(z))exp[2πχ(uy_j,l)^⊤zΔ_n^-1/2]z). By considering display (<ref>), we can deduce:S_1,u ≤π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)(-∑_k∈^d∫_a_k-1^a_k∇ g(√(Δ)_nk)^⊤(z-√(Δ)_nk)exp[2πχ(uy_j,l)^⊤zΔ_n^-1/2]z) +π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)(∑_k∈^d∫_a_k-1^a_k|1/2(z-√(Δ)_nk)^⊤ H_g(ξ) × (z-√(Δ)_nk) exp[2πχ(uy_j,l)^⊤zΔ_n^-1/2] |z)≤π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)(-∑_k∈^d∫_a_k-1^a_k∇ g(√(Δ)_nk)^⊤(z-√(Δ)_nk)exp[2πχ(uy_j,l)^⊤zΔ_n^-1/2]z) +π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)∑_k∈^d∫_a_k-1^a_k1/2(z-√(Δ)_nk)^⊤ H_g(ξ)(z-√(Δ)_nk)z. 
We employ a similar approach as for the term T_1, given in the equations (<ref>) and (<ref>), for the second integral, leading to the term:S_1,u ≤π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(πy_j_i)∑_k∈^d∫_a_k-1^a_k∇ g(√(Δ)_nk)^⊤(z-√(Δ)_nk)cos[2πχ(uy_j,l)^⊤zΔ_n^-1/2]z +(Δ_n/δ^l∫_√(Δ)_n^1 r^d+1f”(r^2) r∨Δ_n/δ^l∫_√(Δ)_n^1 r^d-1f'(r^2)diff r).Employing equation (<ref>), we obtain:S_1 ≤π^l∏_i=1^ly_j_i/∏_i=1^lsin(πy_j_i)|∑_k∈^d∫_a_k-1^a_k∇ g(√(Δ)_nk)^⊤(z-√(Δ)_nk)cos(2π y_j_1z_j_1Δ_n^-1/2)⋯cos(2π y_j_lz_j_lΔ_n^-1/2)z| +(Δ_n/δ^l∫_√(Δ)_n^1 r^d+1f”(r^2) r∨Δ_n/δ^l∫_√(Δ)_n^1 r^d-1f'(r^2) r).Let k∈^d, then it holds that∫_a_k-1^a_k∇ g(√(Δ)_nk)^⊤(z-√(Δ)_nk)cos(2π y_j_1z_j_1Δ_n^-1/2)⋯cos(2π y_j_lz_j_lΔ_n^-1/2)z=∑_l=1^d ∫_a_k-1^a_k g'_z_l(√(Δ)_nk)(z_l-√(Δ)_n k_l)cos(2π y_j_1z_j_1Δ_n^-1/2)⋯cos(2π y_j_lz_j_lΔ_n^-1/2)z. Firstly, for l̃∉{j_1, …, j_l}, we have ∫_a_k_j_1-1 ^a_k_j_1cos(2π y_j_1z_j_1Δ_n^-1/2) z_j_1⋯∫_a_k_j_l-1 ^a_k_j_lcos(2π y_j_lz_j_lΔ_n^-1/2) z_j_l∫_a_k_j_l̃-1 ^a_k_j_l̃(z_j_l̃-√(Δ)_nk_j_l̃) z_j_l̃=0,since it holds that∫_√(Δ)_n(k̃-1/2)^√(Δ)_n(k̃+1/2)(x-√(Δ)_nk̃) x =∫_-√(Δ)_n/2^√(Δ)_n/2x=0,for a k̃∈. Suppose l̃∈{j_1, …, j_l}, then we obtain:∫_a_k_j_1-1 ^a_k_j_1cos(2π y_j_1z_j_1Δ_n^-1/2) z_j_1⋯∫_a_k_j_l̃-1 ^a_k_j_l̃(z_j_l̃-√(Δ)_nk_j_l̃)cos(2π y_j_l̃z_j_l̃Δ_n^-1/2) z_j_l̃ ⋯∫_a_k_j_l-1 ^a_k_j_lcos(2π y_j_lz_j_lΔ_n^-1/2) z_j_l.For k̃∈ and ỹ∈[δ,1-δ] we have ∫_√(Δ)_n(k̃-1/2)^√(Δ)_n(k̃+1/2)cos(2πỹxΔ_n^-1/2) x=√(Δ)_ncos(2πỹk̃)sin(πỹ)/πỹ=(Δ_n^1/2/δ),and ∫_√(Δ)_n(k̃-1/2)^√(Δ)_n(k̃+1/2)(x-√(Δ)_nk̃)cos(2πỹxΔ_n^-1/2) x =∫_-√(Δ)_n/2^√(Δ)_n/2xcos(2πỹ(x+√(Δ)_nk̃)Δ_n^-1/2) x=Δ_n(πỹcos(πỹ)-sin(πỹ))/2π^2 ỹ^2sin(2πk̃ỹ). Hence, we get for 1≤ l ≤ d-1 thatS_1 =(Δ_n^(d+1)/2/δ∑_i=1^l∑_k∈^dg'_z_j_i(√(Δ)_nk)cos(2π y_j_1k_j_1)⋯cos(2π y_j_i-1k_j_i-1)sin(2π k_j_iy_j_i) ×cos(2π y_j_i+1k_j_i+1)⋯cos(2π y_j_lk_j_l))+(Δ_n/δ^l∫_√(Δ)_n^1 r^d+1f”(r^2) r∨Δ_n/δ^l∫_√(Δ)_n^1 r^d-1f'(r^2) r),where we set y_j_0=y_j_l+1=0. It remains to determine the order of the series.Therefore, we use the following identity:sin(x_1)cos(x_2)⋯cos(x_n)=1/2^n-1∑_u∈ C_nsin(u^⊤x),where x=(x_1,…,x_n)∈^n and C_n={1}×{-1,1}^n-1. This identity can be proven similarly to identity in display (<ref>). Without loss of generality, we set the coordinates of the sine term to be j_1, leading to the expression:∑_k∈^dg'_z_j_1(√(Δ)_nk)sin(2π k_j_1y_j_1)cos(2π y_j_2k_j_2)⋯cos(2π y_j_lk_j_l) =1/2^l-1∑_u∈ C_l∑_k∈^dg'_z_j_1(√(Δ)_nk)sin(2π(u^⊤(yk)_j,l),where (yk)_j,l:=(k_j_1y_j_1,…,k_j_ly_j_l).By following similar steps as in display (<ref>), we find thatΔ_n^(d+1)/2∑_k∈^dg'_z_j_1(√(Δ)_nk)sin(2π k_j_1y_j_1)cos(2π y_j_2k_j_2)⋯cos(2π y_j_lk_j_l)=∑_u∈ C_l(Δ_n^1/2π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)ℱ[∑_k∈^dg'_z_j_1(√(Δ)_nk)1_(a_k-1,a_k]](-2πχ(uy_j,l)Δ_n^-1/2))=:U_1+U_2-U_3,where U_i:=∑_u∈ C_lU_i,u for i=1,2,3 andU_1,u :=(Δ_n^1/2π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)ℱ[∑_k∈^dg'_z_j_1(√(Δ)_nk)1_(a_k-1,a_k]-g'_z_j_11_(√(Δ)_n/2,∞)^d](-2πχ(uy_j,l)Δ_n^-1/2)),U_2,u :=(Δ_n^1/2π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)ℱ[g'_z_j_11_(√(Δ)_n/2,∞)^d∪ B_γ_j,l](-2πχ(uy_j,l)Δ_n^-1/2)), U_3,u :=(Δ_n^1/2π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)ℱ[g'_z_j_11_ B_γ_j,l](-2πχ(uy_j,l)Δ_n^-1/2)). 
By employing the inequality ℱ[f]_∞≤f_ℒ_1, we obtain, for a u∈ C_l, thatU_1,u ≤Δ_n^1/2π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)||∑_k∈^dg'_z_j_1(√(Δ)_nk)1_(a_k-1,a_k]-g'_z_j_11_(√(Δ)_n/2,∞)^d||_ℒ_1(^d)≤Δ_n^1/2π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)∑_k∈^d∫_^d| g'_z_j_1(√(Δ)_nk)-g'_z_j_1(z)|1_(a_k-1,a_k](z)z.Applying Taylor's expansion, we find thatU_1,u ≤Δ_n^1/2π^l∏_i=1^ly_j_i/2^l-1∏_i=1^lsin(π y_j_i)∑_k∈^d∫_a_k-1^a_k|∇ g'_z_j_1(ξ_k)^⊤(z-√(Δ)_nk)|z.Following analogous steps as for the term T_1, we have for k∈[a_k-1,a_k] that∇ g'_z_j_1(z)^⊤(z-√(Δ)_nk) =∑_l=1^d ∂^2/∂ z_j_1∂ z_lg(z)(z_l-√(Δ)_nk_l)≤ C√(Δ)_n(f”(π^2ηz_2^2)(dz_j_1^2+z_2^2)+f'(π^2ηz_2^2)),and therefore it holds thatU_1,u =(Δ_n/δ^l∫_[√(Δ)_n/2,∞)^dz_2^2f”(π^2ηz_2^2)z+Δ_n/δ^l∫_[√(Δ)_n/2,∞)^df'(π^2ηz_2^2)z)=(Δ_n/δ^l∫_√(Δ)_n^1 r^d+1f”(r^2) r∨Δ_n/δ^l∫_√(Δ)_n^1 r^d-1f'(r^2) r).Note, that U_1 is of the same order as U_1,u. Using display (<ref>) with q=1 we have for U_2,u thatU_2,u =(Δ_nπ^l-1∏_i=2^ly_j_i/2^l∏_i=1^lsin(π y_j_i)||∂^2/∂ z_j_1^2g1_(√(Δ)_n/2,∞)^d∪ B_γ_j,l||_ℒ_1).Utilizing the order of the term S_2 yields the following:U_2 =(max_k=0,…,lΔ_n^k/2+1/δ^l∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨max_k=0,…,lΔ_n^k/2+1/δ^l∫_√(Δ)_n^1 r^d-k-1f'(r^2) r).For the last term U_3 we have with the equations (<ref>) and (<ref>) thatU_3,u =(Δ_nπ^l-1∏_i=2^ly_j_i/2^l∏_i=1^lsin(π y_j_i)||∂^2/∂ y_j_1^2g1_ B_γ_j,l||_ℒ_1)=(Δ_n^l/2+1/δ^l∫_√(Δ)_n^1 r^d-l+1f”(r) r∨Δ_n^l/2+1/δ^l∫_√(Δ)_n^1 r^d-l-1f'(r) r).Hence, we find:S_1 =(max_k=0,…,lΔ_n^k/2+1/δ^l+1∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨max_k=0,…,lΔ_n^k/2+1/δ^l+1∫_√(Δ)_n^1 r^d-k-1f'(r^2) r)=S_2.and Δ_n^d/2∑_k∈^df(λ_kΔ_n)cos(2π k_j_1y_j_1)·…·cos(2π k_j_ly_j_l)=T_2+T_3+(Δ_n)=(-1)^l∫_B_γ_j,lg(z)z +(Δ_n^(l+1)/2∫_√(Δ)_n^1 r^d+1-lf'(r^2) r ∨Δ_n^(l+1)/2/δ^l∫_√(Δ)_n^1 r^d-lf'(r^2) r) +(max_k=0,…,lΔ_n^k/2+1/δ^l+1∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨max_k=0,…,lΔ_n^k/2+1/δ^l+1∫_√(Δ)_n^1 r^d-k-1f'(r^2) r),which completes the proof of (ii).For the proof of (iii), we proceed in a manner similar to the proof of (ii). 
Firstly, for a γ∈{0,1}^d, with γ_1 = d-1, we find thatΔ_n^d/2∑_k∈^df(λ_kΔ_n)cos(2π k_1y_1)·…·cos(2π k_dy_d)=:T_2+T_3-T_4+(Δ_n),where we redefine T_i:=∑_u∈ C_dT_i,u, with i=2,3,4, by the following:T_2,u :=(π^d∏_i=1^dy_i/2^d-1∏_i=1^dsin(πy_i)ℱ[∑_k∈^dg(√(Δ)_nk)1_(a_k-1,a_k]-g1_[√(Δ)_n/2,∞)^d](-2π (uy)Δ_n^-1/2))T_3,u :=(π^d∏_i=1^dy_i/2^d-1∏_i=1^dsin(π y_i)ℱ[g1_[√(Δ)_n/2,∞)^d∪ B_γ](-2π (uy)Δ_n^-1/2))T_4,u :=(π^d∏_i=1^dy_i/2^d-1∏_i=1^dsin(π y_i)ℱ[g1_ B_γ](-2π (uy)Δ_n^-1/2)), where y=(y_1,…,y_d)∈[δ,1-δ]^d.For T_2, we apply the same procedure as for S_1 in part (ii) to obtain:T_2 =(Δ_n^(d+1)/2/δ∑_i=1^d∑_k∈^dg'_z_i(√(Δ)_nk)cos(2π y_1k_1)⋯cos(2π y_i-1k_i-1)sin(2π k_iy_i) ×cos(2π y_i+1k_i+1)⋯cos(2π y_dk_d))+(Δ_n/δ^d∫_√(Δ)_n^1 r^d+1f”(r^2) r∨Δ_n/δ^d∫_√(Δ)_n^1 r^d-1f'(r^2) r)=(Δ_n^(d+1)/2/δ∑_k∈^dg'_z_1(√(Δ)_nk)sin(2π y_1k_1)cos(2π y_2k_2)⋯cos(2π y_dk_d)) +(Δ_n/δ^d∫_√(Δ)_n^1 r^d+1f”(r^2) r∨Δ_n/δ^d∫_√(Δ)_n^1 r^d-1f'(r^2) r).Furthermore, it holds thatΔ_n^(d+1)/2∑_k∈^dg'_z_1(√(Δ)_nk)sin(2π k_1y_1)cos(2π y_2k_2)⋯cos(2π y_dk_d)=∑_u∈ C_l(Δ_n^1/2π^d∏_i=1^dy_i/2^d-1∏_i=1^dsin(π y_i)ℱ[∑_k∈^dg'_z_1(√(Δ)_nk)1_(a_k-1,a_k]](-2π(uy)Δ_n^-1/2))=:U_1+U_2-U_3,where we redefine U_i:=∑_u∈ C_lU_i,u, for i=1,2,3, by the following terms:U_1,u :=(Δ_n^1/2π^d∏_i=1^dy_i/2^d-1∏_i=1^dsin(π y_i)ℱ[∑_k∈^dg'_z_j_1(√(Δ)_nk)1_(a_k-1,a_k]-g'_z_j_11_(√(Δ)_n/2,∞)^d](-2π(uy)Δ_n^-1/2)),U_2,u :=(Δ_n^1/2π^d∏_i=1^dy_i/2^d-1∏_i=1^dsin(π y_i)ℱ[g'_z_j_11_(√(Δ)_n/2,∞)^d∪ B_γ](-2π(uy)Δ_n^-1/2)), U_3,u :=-(Δ_n^1/2π^d∏_i=1^dy_i/2^d-1∏_i=1^dsin(π y_i)ℱ[g'_z_j_11_ B_γ](-2π(uy)Δ_n^-1/2)).For the term U_1, U_2 and U_3 we obtain the same order as in part (ii), resulting in:U_1 =(Δ_n/δ^d∫_√(Δ)_n^1 r^d+1f”(r^2) r∨Δ_n/δ^d∫_√(Δ)_n^1 r^d-1f'(r^2) r), U_2 =(max_k=0,…,d-1Δ_n^k/2+1/δ^d∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨max_k=0,…,d-1Δ_n^k/2+1/δ^d∫_√(Δ)_n^1 r^d-k-1f'(r^2) r), U_3 =(Δ_n^(d+1)/2/δ^d∫_√(Δ)_n^1 r^2f”(r) r∨Δ_n^(d+1)/2/δ^d∫_√(Δ)_n^1 f'(r) r).Hence, we have:T_2 =(max_k=0,…,d-1Δ_n^k/2+1/δ^d+1∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨max_k=0,…,d-1Δ_n^k/2+1/δ^d+1∫_√(Δ)_n^1 r^d-k-1f'(r^2) r).For T_3 we infer the same order as for S_2 in equation (<ref>) and have T_3=(T_2). 
For T_4 we set without loss of generality that γ={1,…,1,0}∈{0,1}^d and have T_4 =π^d∏_i=1^dy_i/∏_i=1^dsin(π y_i)∫_√(Δ)_n/2^∞cos(2π y_dz_dΔ_n^-1/2)∫_0^√(Δ)_n/2cos(2π y_d-1z_d-1Δ_n^-1/2)⋯ ×∫_0^√(Δ)_n/2 g(z)cos(2π y_1z_1Δ_n^-1/2) z_1⋯ z_d-1 z_d.Using analogous steps as in equations (<ref>) and (<ref>), we have T_4 =(Δ_n^1/2/2)^d-1π y_d/sin(π y_d)∫_√(Δ)_n/2^∞g̃(√(Δ)_n/2,…,√(Δ)_n/2,z_d)cos(2π y_dz_dΔ_n^-1/2) z_d +(Δ_n^d/2/δ^d∫_√(Δ)_n^1 rf'(r^2) r).Integration by parts yields:π y_d/sin(π y_d)∫_√(Δ)_n/2^∞g̃(√(Δ)_n/2,…,√(Δ)_n/2,z_d)cos(2π y_dz_dΔ_n^-1/2) z_d=Δ_n^1/2/2sin(π y_d)[g̃(√(Δ)_n/2,…,√(Δ)_n/2,z_d)sin(2π y_dz_dΔ_n^-1/2)]_√(Δ)_n/2^∞ -Δ_n^1/2/2sin(π y_d)∫_√(Δ)_n/2^∞∂/∂ z_dg̃(√(Δ)_n/2,…,√(Δ)_n/2,z_d)sin(2π y_dz_dΔ_n^-1/2) z_d=(Δ_n^1/2g(√(Δ)_n/2,…,√(Δ)_n/2))-I_d,where I_d:=Δ_n^1/2/2sin(π y_d)∫_√(Δ)_n/2^∞∂/∂ z_dg̃(√(Δ)_n/2,…,√(Δ)_n/2,z_d)sin(2π y_dz_dΔ_n^-1/2) z_d.Furthermore, we have:I_d =(Δ_n^1/2/δ∫_√(Δ)_n/2^∞ z_df'((d-1)Δ_n/4+z_d^2)z_d)=(Δ_n^1/2/δ∫_√(Δ)_n/2^1 rf'(r^2)r)and therefore we find thatT_4 =(Δ_n^d/2f(Δ_n))+(Δ_n^d/2/δ^d∫_√(Δ)_n^1 rf'(r^2) r).Finally, we obtain thatΔ_n^d/2∑_k∈^df(λ_kΔ_n)cos(2π k_1y_1)·…·cos(2π k_dy_d)=(Δ_n^d/2f(Δ_n))+(Δ_n^d/2/δ^d∫_√(Δ)_n^1 rf'(r^2) r) +(max_k=0,…,d-1Δ_n^k/2+1/δ^d+1∫_√(Δ)_n^1r^d-k+1f”(r^2) r∨max_k=0,…,d-1Δ_n^k/2+1/δ^d+1∫_√(Δ)_n^1 r^d-k-1f'(r^2) r),which completes the proof.Consider the class following of functions, given by𝒬_β :={f:[0,∞)| fis twice differentiable, x^d-1f(x^2)_ℒ^1([0,∞)), x^df^(1)(x^2)_ℒ^1([1,∞)), x^d+1f^(2)(x^2)_ℒ^1([1,∞)) and lim sup_x→ 0f^(j)(x^2)/x^-β_j≤ C<∞,for j=0,1,2},where f^(j) denotes the j-th derivative and β=(β_0,β_1,β_2)∈(0,∞).Then, Lemma <ref> can be expressed by the following corollary.Let f∈𝒬_β for β=(β_0,β_1,β_2)∈(0,∞), then it holds that (i) Δ_n^d/2∑_k∈^df(λ_kΔ_n) =1/2^d(πη)^d/2Γ(d/2)∫_0^∞ x^d/2-1f(x) x-∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γf(π^2ηz^2)z +(Δ_n∨Δ_n^(d-β_0)/2∨Δ_n^(d+2-β_1)/2∨Δ_n^(d+4-β_2)/2),where B_γ is defined in equation (<ref>).(ii) For {j_1,…,j_l}⊂{1,…,d}, γ_j,l∈{0,1}^d, where (γ_j,l)_i=1_i∈{j_1,…,j_l}, with i=1,…,d and l=1,…,(d-1), we have Δ_n^d/2∑_k∈^df(λ_kΔ_n)cos(2π k_j_1y_j_1)·…·cos(2π k_j_ly_j_l)=(-1)^l∫_B_γ_j,lf(π^2ηz^2)z + (δ^-(l+1)Δ_n∨δ^-lΔ_n^(l+1)/2∨δ^-(l+1)Δ_n^(d+2-β_1)/2∨δ^-(l+1)Δ_n^(d+4-β_2)/2). (iii) For {j_1,…,j_l}={1,…,d}, i.e. l=d, we have Δ_n^d/2∑_k∈^df(λ_kΔ_n)cos(2π k_1y_1)·…·cos(2π k_dy_d)=(δ^-(d+1)Δ_n∨Δ_n^(d-β_0)/2 ∨δ^-(d+1)Δ_n^(d+2-β_1)/2∨δ^-(d+1)Δ_n^(d+4-β_2)/2). In particular, it holds for a γ̃∈{0,1}^d with γ̃_1=l and 1≤ l≤ d-1 that∫_B_γ̃f(π^2ηz^2)z=(Δ_n^l/2∨Δ_n^(d-β_0)/2)and ∑_γ_1=1γ∈{0,1}^d^d∫_B_γf(π^2ηz^2)z=(Δ_n^1/2∨Δ_n^(d-β_0)/2). As the proof of the latter corollary is straightforward, we omit it.The following two functions:f_α(x):=1-e^-x/x^1+α and g_α,τ(x)=(1-e^-x)^2/2x^1+αe^-τ x,for α,τ>0 play a crucial role in the forthcoming analysis, particularly in calculating the realized volatility. In order to utilize Corollary <ref> for these functions, we need to verify their belonging to the class 𝒬_β and determine the corresponding parameter β. The following lemma serves this purpose.It holds: f_α∈𝒬_β_1 and g_α,τ∈𝒬_β_2, where β_1=(2α,2(1+α),2(2+α)) and β_2=(2α,2(1+α),2(1+α)).We provide the proof for the function f_α since the proof for g_α,τ follows in an analogous manner. First, it holds with integration by parts that∫_0^∞e^- cx/x^m x =c^m-1/1-mΓ(2-m)=c^m-1Γ(1-m),for m<1 and c>0, where Γ(z)=∫_0^∞ t^z-1e^-t t denotes the Gamma function for z∈ℂ and (z)∉{0,-1,-2,…}. Note, that Γ(1+z)=zΓ(z). 
By utilizing equation (<ref>), we find:∫_0^∞e^-cx^2/x^m x =c^(m-1)/2/2Γ(1/2-m/2),where m<1 and c>0 and∫_0^∞1-e^-cx^2/x^m x =-c^(m-1)/2/2Γ(1/2-m/2)<C,for 1<m<3, c>0 and a constant 0<C<∞.We begin by examining the conditions of the class 𝒬_β for the functions f_α and g_α,τ. First and foremost, both functions f_α and g_α,τ are evidently twice continuously differentiable. Here, we find:f'_α(x) =e^-x/x^1+α-(1+α)1-e^-x/x^2+α,f”_α(x) =-e^-x/x^1+α-(1+α)2e^-x/x^2+α+(1+α)(2+α)1-e^-x/x^3+α,g_α,τ'(x) = e^-x(τ+1)f_α(x)-1+α/xg_α,τ(x)-τ g_α,τ(x), g_α,τ”(x) =-(τ+1)e^-x(τ+1)f_α(x)+e^-x(τ+1)f_α'(x)+1+α/x^2g_α,τ(x)-1+α/xg'_α,τ(x)-τ g'_α,τ(x).Furthermore, we obtain:∫_1^∞ x^me^-x^2 x=1/2∫_1^∞ x^(m-1)/2e^-x=1/2Γ((m+1)/2,1)≤ C,and∫_1^∞ x^m(1-e^-x^2) x =[x^m+1/m+1(1-e^-x^2)]_1^∞-2/m+1∫_1^∞ x^m+2e^-x^2 x=[x^m+1/m+1(1-e^-x^2)]_1^∞-1/m+1Γ((m+3)/2,1)≤ C,if m<-1. Here, Γ(z,s)=∫_s^∞ t^z-1e^-z z denotes the upper incomplete Gamma function.For the left limit, we obtain in general:lim_x 01-e^-x^2/x^m-β=lim_x 02e^-x^2/(m-β)x^m-β-2<∞⇔ m-2≤β,and lim_x 0e^-x^2/x^m-β<∞⇔ m≤β. Concerning the integration criteria for f_α, we have by equation (<ref>) that x^d-1f_α(x^2)_ℒ^1([0,∞)) since 1<1+2α'<3. The integration criteria for the first and second derivative, f' and f”, are established based on the equations (<ref>) and (<ref>), as d-4-2α=-2-α'<-1 and (d+1)-6-2α=-3-2α'<-1. Therefore, it remains to determine the parameters β_0,β_1,β_2 which are associated to f,f' and f”, respectively.Using the displays (<ref>) and (<ref>), we have f∈𝒬_β, withβ_0=2α, β_1=2(1+α), and β_2=2(2+α).On Assumptions <ref> and <ref> it holds thatΔ_n^d/2∑_k∈^df_α(λ_kΔ_n) =Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)-∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γf_α(π^2ηz^2)z+(Δ_n^1-α'),where Γ denotes the Gamma function. Furthermore, it holds thatΔ_n^d/2∑_k∈^dg_α,τ(λ_kΔ_n) = 1/2(-τ ^α'+2 (τ +1)^α'-(τ +2)^α') Γ(1-α')/2^d(πη)^d/2α'Γ(d/2) -∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γg_α,τ(π^2ηz^2)z+(Δ_n^1-α').Given that Lemma <ref> establishes f_α∈𝒬_β_1 and g_α,τ∈𝒬_β_2, with β_1=(2α,2(1+α),2(2+α)) and β_2=(2α,2(1+α),2(1+α)), we can employ Corollary <ref> on these functions.In addition, by utilizing analogous steps as in equation (<ref>), we find:∫_0^∞1-e^-cx/x^m=c^m-1/m-1Γ(2-m),for 1<m<2 and c>0. Considering α=d/2-1+α', where α'∈(0,1), and equation (<ref>), we obtain the following:Δ_n^d/2∑_k∈^df_α(λ_kΔ_n) =1/2^d(πη)^d/2Γ(d/2)∫_0^∞ x^d/2-11-e^-x/x^1+α x+R_n,1=1/2^d(πη)^d/2Γ(d/2)∫_0^∞1-e^-x/x^1+α' x+R_n,1=1/2^d(πη)^d/2Γ(d/2)·Γ(1-α')/α'+R_n,1,whereR_n,1 :=-∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γf(π^2ηz^2)z+(Δ_n∨Δ_n^(d-2α)/2∨Δ_n^(d+2-2(1+α))/2∨Δ_n^(d+4-2(2+α))/2)=-∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γf(π^2ηz^2)z+(Δ_n^1-α').For statement (ii), we have Δ_n^d/2∑_k∈^dg_α,τ(λ_kΔ_n) =1/2^d(πη)^d/2Γ(d/2)∫_0^∞(1-e^-x)^2/2x^1+α'e^-τ x x+R_n,2=1/2^d+1(πη)^d/2Γ(d/2)(∫_0^∞e^-τ x/x^1+α'-2∫_0^∞e^-x(1+τ)/x^1+α'+∫_0^∞e^-x(2+τ)/x^1+α)+R_n,2. By using equation (<ref>), we find thatΔ_n^d/2∑_k∈^dg_α,τ(λ_kΔ_n)=Γ(1-α')/2^d+1(πη)^d/2α'Γ(d/2)(-τ^α'+2(1+τ)^α'-(2+τ)^α')+R_n,2,where R_n,2 := -∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γf(π^2ηz^2)z+(Δ_n∨Δ_n^(d-2α)/2∨Δ_n^(d+2-2(1+α))/2∨Δ_n^(d+4-2(1+α))/2)=-∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γf(π^2ηz^2)z+(Δ_n^1-α').The proof follows by utilizing the following identity for half-integer arguments:Γ(n/2)=(n-2)!!√(π)/2^(n-1)/2. On Assumptions <ref> and <ref>, we have [(Δ_iX)^2(y)] =σ^2 2^de^-κy_1∑_k∈^d𝒟_i,ksin^2(π k_1y_1)·…·sin^2(π k_dy_d)+r_n,i,where r_n,i is a sequence satisfying ∑_i=1^nr_n,i=(Δ_n^α') and 𝒟_i,k =Δ_n^d/2+α'(1-e^-λ_kΔ_n/(λ_kΔ_n)^1+α-(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+αe^-2λ_k),with α'=1+α-d/2∈(0,1). 
First, A_i,k, B_i,k, and C_i,k are independent of each other, where i=1,…,n and k∈^d. Exploiting the fact that (W^k)_k∈^d are independent Brownian motions, the -integrals B_i,k and C_i,k are also independent and centred. Thus, we have[(Δ_iX)^2(y)] =∑_k_1∈^d∑_k_2∈^de_k_1(y)e_k_2(y)[Δ_ix_k_1Δ_ix_k_2]=∑_k∈^de_k^2(y)([B_i,k^2]+[C_i,k^2])+r_n,i,where r_n,i:=∑_k_1,k_2∈^de_k_1(y)e_k_2(y)[A_i,k_1A_i,k_2]. -isometry yields the following:[B_i,k^2] =[(σλ_k^-α/2∫_0^ e^-λ_k(-s)(e^-λ_kΔ_n-1) W_s^k)^2]=σ^2(1-e^-λ_kΔ_n)^21-e^-2λ_k/2λ_k^1+α, [C_i,k^2] =σ^2λ_k^-α∫_(i-1)Δ_n^iΔ_ne^-2λ_k(iΔ_n-s) s = σ^2 1-e^-2λ_kΔ_n/2λ_k^1+α.Additionally, we possess the following expression for the remainder r_n,i:[A_i,k_1A_i,k_2]=(e^-λ_k_1-λ_k_1Δ_n-e^-λ_k_1) (e^-λ_k_2-λ_k_2Δ_n-e^-λ_k_2) ×[⟨ξ,e_k_1⟩_ϑ⟨ξ,e_k_2⟩_ϑ]=(1-e^-λ_k_1Δ_n) (1-e^-λ_k_2Δ_n)e^-(λ_k_1+λ_k_2)[⟨ξ,e_k_1⟩_ϑ⟨ξ,e_k_2⟩_ϑ].Hence, we obtain the representation:[(Δ_iX)^2(y)]=σ^2 2^de^-∑_l=1^dκ_ly_l∑_k∈^d((1-e^-λ_kΔ_n)^21-e^-2λ_k/2λ_k^1+α+1-e^-2λ_kΔ_n/2λ_k^1+α) ×sin^2(π k_1y_1)·…·sin^2(π k_dy_d)+r_n,i.In addition, let us define:𝒟_i,k :=Δ_n^1+α((1-e^-λ_kΔ_n)^21-e^-2λ_k/2(λ_kΔ_n)^1+α+1-e^-2λ_kΔ_n/2(λ_kΔ_n)^1+α)=Δ_n^d/2+α'(1-e^-λ_kΔ_n/(λ_kΔ_n)^1+α-(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+αe^-2λ_k),where α'∈(0,1). Then, we have [(Δ_iX)^2(y)] =σ^2 2^de^-κ y_1∑_k∈^d𝒟_i,ksin^2(π k_1y_1)·…·sin^2(π k_dy_d)+r_n,i.The analysis of the remainder r_n,i remains to be conducted. Here, we haver_n,i=∑_k_1,k_2∈^de_k_1(y)e_k_2(y)(1-e^-λ_k_1Δ_n) (1-e^-λ_k_2Δ_n)e^-(λ_k_1+λ_k_2)[⟨ξ,e_k_1⟩_ϑ⟨ξ,e_k_2⟩_ϑ]. To demonstrate that ∑_i=1^n r_n,i = 𝒪(Δ_n^α'), we use Assumption <ref>. Under the conditions [⟨ξ,e_k⟩_ϑ]=0 and sup_k∈^dλ_k^1+α[⟨ξ,e_k⟩_ϑ^2] < ∞, we can find a constant C > 0 such that [⟨ξ,e_k⟩_ϑ^2] ≤ C/λ_k^1+α for all k∈^d. Consequently, given that (⟨ξ,e_k⟩_ϑ)_k∈^d are independent, we have r_n,i=∑_k∈^d(1-e^-λ_kΔ_n)^2 e^-2λ_ke_k^2(y)[⟨ξ,e_k⟩^2_ϑ]≤ C∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+α e^-2λ_ke_k^2(y).Assuming the second alternative in Assumption <ref>, where [A_ϑ^(1+α)/2ξ_ϑ^2]<∞, we can proceed with the following steps. Exploiting the self-adjointness of A_ϑ on H_ϑ and employing the Cauchy-Schwarz inequality, we obtain:r_n,i =[(∑_k∈^d(1-e^-λ_kΔ_n)e^-λ_k⟨ξ,e_k⟩_ϑ e_k(y))^2]≤∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+αe^-2λ_ke_k^2(y)[∑_k∈^∞⟨ A_ϑ^(1+α)/2ξ,e_k⟩_ϑ^2 ].Applying Parseval's identity on the expected value gives us:r_n,i≤[A_ϑ^(1+α)/2ξ_ϑ^2]∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+αe^-2λ_ke_k^2(y).Since we can uniformly bound the eigenfunctions (e_k)_k∈^d, it is sufficient to bound the following expression∑_i=1^n r_n,i ≤ C∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+α∑_i=1^n e^-2λ_k≤C∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+α(1-e^-2λ_kΔ_n)≤C∑_k∈^d1-e^-λ_kΔ_n/λ_k^1+α=CΔ_n^d/2+α'∑_k∈^d1-e^-λ_kΔ_n/(λ_kΔ_n)^1+α,for both cases in Assumption <ref>, where we have used the closed form formula of the geometric series and a suitable constant C>0. Utilizing Lemma <ref>, we obtain: Δ_n^d/2∑_k∈^d1-e^-λ_kΔ_n/(λ_kΔ_n)^1+α=Δ_n^d/2∑_k∈^d f_α'(λ_kΔ_n)=C+(1),with a suitable constant C>0. Hence, we have∑_i=1^nr_n,i=(Δ_n^α'),which completes the proof. It follows the proof of Proposition <ref>.We begin by recalling Lemma <ref>:[(Δ_iX)^2(y)] =σ^2 e^-κy_1𝒯_i+r_n,i,where𝒯_i:= 2^d∑_k∈^dsin^2(π k_1y_1)·…·sin^2(π k_dy_d)𝒟_i,k=2^d∑_k∈^d1-cos(2π k_1y_1)/2·…·1-cos(2π k_dy_d)/2𝒟_i,k=∑_k∈^d𝒟_i,k+∑_l=1^d-1∑_1≤ j_1<…<j_l≤ d(-1)^l ∑_k∈^d𝒟_i,kcos(2π k_j_1y_j_1)·…·cos(2π k_j_ly_j_l) +(-1)^d∑_k∈^d𝒟_i,kcos(2π k_1y_1)⋯cos(2π k_dy_d),where 𝒟_i,k is defined as in display (<ref>). 
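To make the alternating-sign structure in the expansion of $\mathcal{T}_i$ explicit, it may help to spell out the smallest nontrivial case $d=2$ (purely illustrative, not needed for the argument):
\[
4\sin^{2}(\pi k_1y_1)\sin^{2}(\pi k_2y_2)
=\bigl(1-\cos(2\pi k_1y_1)\bigr)\bigl(1-\cos(2\pi k_2y_2)\bigr)
=1-\cos(2\pi k_1y_1)-\cos(2\pi k_2y_2)+\cos(2\pi k_1y_1)\cos(2\pi k_2y_2),
\]
so the single cosines enter with the sign $(-1)^{1}$ and the full product with the sign $(-1)^{2}$, in line with the general formula.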
Furthermore, we define:h_α,τ(x):=(1-e^-x/x^1+α-(1-e^-x)^2/2x^1+αe^-xτ).Note, that h_α,τ(x)=f_α(x)-g_α,τ(x), where f_α and g_α,τ are defined as in equation (<ref>). By Lemma <ref> we have h_α,τ∈𝒬_β, where β=(2α,2(1+α),2(2+α)). Then, we obtain:Δ_n^-α'∑_k∈^d𝒟_i,k =Δ_n^d/2∑_k∈^dh_α,2(i-1)(λ_kΔ_n)=1/2^d(πη)^d/2Γ(d/2)∫_0^∞ x^d/2-1h_α,2(i-1)(x) x-∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γh_α,2(i-1)(π^2ηz^2)z+(Δ_n^1-α'),and Δ_n^d/2∑_l=1^d-1∑_1≤ j_1<…<j_l≤ d(-1)^l ∑_k∈^d h_α,2(i-1)(λ_kΔ_n)cos(2π k_j_1y_j_1)·…·cos(2π k_j_ly_j_l)=∑_γ_1=1γ∈{0,1}^d^d-1∫_B_γ̃_lh_α,2(i-1)(π^2ηz_2^2)z+(Δ_n^1-α').Thus, by using Corollary <ref> we have∑_k∈^d𝒟_i,k+∑_l=1^d-1∑_1≤ j_1<…<j_l≤ d(-1)^l ∑_k∈^d𝒟_i,kcos(2π k_j_1y_j_1)·…·cos(2π k_j_ly_j_l) +(-1)^d∑_k∈^d𝒟_i,kcos(2π k_1y_1)⋯cos(2π k_dy_d)=Δ_n^α'/2^d(πη)^d/2Γ(d/2)∫_0^∞ x^d/2-1h_α,2(i-1)(x) x+(Δ_n).Utilizing Lemma <ref> yields1/2^d(πη)^d/2Γ(d/2)(∫_0^∞ f_α'(x) x-∫_0^∞ g_α',2(i-1)(x) x)=Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)(1+1/2(2(i-1))^α'-(1+2(i-1))^α'+1/2(2+2(i-1))^α').Therefore, we have with Lemma <ref> that[(Δ_i X)^2(y)] =σ^2e^-κy_1Δ_n^α'Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)(1+1/2(2(i-1))^α'-(1+2(i-1))^α'+1/2(2+2(i-1))^α') +r_n,i+(Δ_n)=Δ_n^α'σ^2e^-κy_1Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)+r̃_n,i+(Δ_n),where r̃_n,i includes r_n,i and the i dependent term from the last display. For this i-dependent term we define the following function: t(x):=x^α'/2(-1+2(1+1/x)^α'-(1+2/x)^α'),and have with x=2(i-1) that∑_i=1^nt(2(i-1))=(∑_i=0^∞1/i^2-α')=(1).Hence, we have by Lemma <ref> that ∑_i=1^nr̃_n,i=(Δ_n^α'), which completes the proof. Next, we proof Proposition <ref>.We begin with the following expression:(Δ_iX(y),Δ_jX(y)) =∑_k_1,k_2∈^d (Δ_ix_k_1,Δ_jx_k_2)e_k_1(y)e_k_2(y)=∑_k∈^d (A_i,k+B_i,k+C_i,k,A_j,k+B_j,k+C_j,k)e_k^2(y). Since (⟨ξ,e_k⟩_ϑ)k∈^d are independent by Assumption <ref>, we can use the independence of A_i,k and B_i,k and analyse the covariance of the remaining terms. Here, we have, by the Itô-isometry and i<j:Σ_i,j^B,k :=(B_i,k,B_j,k)=σ^2λ_k^-α(1-e^-λ_kΔ_n)^2 e^-λ_k(i+j-2)Δ_n(∫_0^(i-1)Δ_ne^λ_ks W_s^k,∫_0^(i-1)Δ_ne^λ_ks W_s^k)=σ^2 (e^-λ_kΔ_n(j-i)-e^-λ_k(i+j-2)Δ_n)(1-e^-λ_kΔ_n)^2 /2λ_k^1+α.Therefore, it follows for 1≤ i,j≤ n thatΣ_i,j^B,k=σ^2 (e^-λ_kΔ_ni-j-e^-λ_k(i+j-2)Δ_n)(1-e^-λ_kΔ_n)^2 /2λ_k^1+α.Next, we have Σ_i,j^C,k=(C_i,k,C_j,k)=0, for i≠ j, and we derive the following:Σ_i,j^C,k = 1_{j=i}(C_i,k,C_i,k)=1_{j=i}σ^2 1-e^-2λ_kΔ_n/2λ_k^1+α.It remains to analyse the covariance of B_i,k and C_j,k. 
Since Σ_i,j^BC,k:=(B_i,k,C_j,k)=0 for i≤ j, we analyse the following:Σ_i,j^BC,k =1_{i>j}(B_i,k,C_j,k)=1_{i>j}σ^2λ_k^-α(e^-λ_kΔ_n-1)e^-λ_k(i-1)Δ_ne^-λ_kjΔ_n(∫_(j-1)Δ_n^jΔ_ne^λ_ks W_s^k,∫_(j-1)Δ_n^jΔ_ne^λ_ks W_s^k)=1_{i>j}σ^2e^-λ_kΔ_n(i-j)(e^λ_kΔ_n-e^-λ_kΔ_n)e^-λ_kΔ_n-1/2λ_k^1+α.Similarly, we have Σ_j,i^BC,k:=Σ_i,j^CB,k:=(C_i,k,B_j,k)=1_{i<j}σ^2e^-λ_kΔ_n(j-i)(e^λ_kΔ_n-e^-λ_kΔ_n)e^-λ_kΔ_n-1/2λ_k^1+α.For i<j we obtain:(Δ_iX(y),Δ_jX(y)) =∑_k∈^d (A_i,k+B_i,k+C_i,k,A_j,k+B_j,k+C_j,k)e_k^2(y) = ∑_k∈^d(Σ_i,j^B,k+Σ_i,j^CB,k)e_k^2(y)+r_i,j,wherer_i,j :=∑_k∈^d(A_i,k,A_j,k)e_k^2(y)=∑_k∈^d(e^-λ_kiΔ_n-e^-λ_k(i-1)Δ_n)(e^-λ_kjΔ_n-e^-λ_k(j-1)Δ_n)(⟨ξ,e_k⟩_ϑ)e_k^2(y)=∑_k∈^d e^-λ_kΔ_n(i+j-2)(e^-λ_kΔ_n-1)^2(⟨ξ,e_k⟩_ϑ)e_k^2(y).We use that the operator A_ϑ is self-adjoint on H_ϑ, such that -λ_k^(1+α)/2⟨ξ,e_k⟩=⟨ A_ϑ^(1+α)/2ξ,e_k⟩ and derive the following inequality for the remainder:r_i,j ≤∑_k∈^d e^-λ_kΔ_n(i+j-2)(e^-λ_kΔ_n-1)^2/λ_k^1+α[(λ_k^(1+α)/2⟨ξ,e_k⟩_ϑ)^2]e_k^2(y)≤ Csup_k∈[⟨ A_ϑ^(1+α )/2ξ,e_k⟩_ϑ^2]∑_k∈^d e^-λ_kΔ_n(i+j-2)(1-e^-λ_kΔ_n)^2/λ_k^1+α.Furthermore, for i<j we have(Δ_iX(y),Δ_jX(y)) =∑_k∈^d(Σ_i,j^B,k+Σ_i,j^CB,k)e_k^2(y)+r_i,j=σ^2∑_k∈^de_k^2(y) e^-λ_kΔ_n(j-i)(1-e^-λ_kΔ_n)^2 +(e^λ_kΔ_n-e^-λ_kΔ_n)(e^-λ_kΔ_n-1)/2λ_k^1+α -σ^2∑_k∈^de_k^2(y) e^-λ_k(i+j-2)Δ_n(1-e^-λ_kΔ_n)^2 /2λ_k^1+α+r_i,j.We define the second remainder as:s_i,j:=-σ^2∑_k∈^de_k^2(y) e^-λ_k(i+j-2)Δ_n(1-e^-λ_kΔ_n)^2 /2λ_k^1+α. Using the identity sin^2(x)=(1-cos(2x))/2, we arrive at:(Δ_iX(y),Δ_jX(y))= σ^2∑_k∈^de_k^2(y) e^-λ_kΔ_n(j-i)2-e^-λ_kΔ_n-e^λ_kΔ_n/2λ_k^1+α+ s_i,j+r_i,j=- σ^2Δ_n^1+α∑_k∈^d^∞e_k^2(y) e^-λ_kΔ_n(j-i-1)(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+α+ s_i,j+r_i,j=-σ^2 e^-κy_1Δ_n^1+α∑_k∈^d e^-λ_kΔ_n(j-i-1)(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+α∏_l=1^d(1-cos(2π k_ly_l)) + s_i,j+r_i,j.By defining the following expression:𝒮_i,k:=e^-λ_kΔ_n(j-i-1)(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+α=g_α,(j-i-1)(λ_kΔ_n),we obtain that-σ^2 e^-κy_1Δ_n^1+α∑_k∈^d e^-λ_kΔ_n(j-i-1)(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+α∏_l=1^d(1-cos(2π k_ly_l))=-σ^2 e^-κy_1Δ_n^1+α∑_k∈^d(𝒮_i,k+𝒮_i,k∑_l=1^d(-1)^l∑_1≤ j_1<…<j_l≤ ncos(2π k_j_1y_j_1)⋯cos(2π k_j_ly_j_l)).Since we know by Lemma <ref> that g_α,τ∈𝒬_β, with β=(2α,2(1+α),2(1+α)), we have with Lemma <ref> and Corollary <ref> thatΔ_n^d/2∑_k∈^dg_α,τ(λ_kΔ_n) =Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)(-1/2τ^α'+(1+τ)^α'-1/2(2+τ)^α') -∑_γ_1=1 γ∈{0,1}^d ^d-1∫_B_γg_α,τ(π^2ηz_2^2)z+(Δ_n^1-α'),and Δ_n^d/2∑_l=1^d(-1)^l∑_1≤ j_1<…<j_l≤ n∑_k∈^dg_α,τ(λ_kΔ_n)cos(2π k_j_1y_j_1)⋯cos(2π k_j_ly_j_l)=∑_γ_1=1 γ∈{0,1}^d ^d-1∫_B_γg_α,τ(π^2ηz_2^2)z+(Δ_n^1-α'). In line with Proposition <ref> and with τ=(j-i-1), we have (Δ_iX(y),Δ_jX(y))=-σ^2 e^-κy_1Δ_n^α'Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)(-1/2(j-i-1)^α'+(j-i)^α'-1/2(j-i+1)^α') + s_i,j+r_i,j+(Δ_n).It remains to show that ∑_i,j=1^n(s_i,j+r_i,j)=(1). Therefore, we use display (<ref>) and obtain:∑_i,j=1^n(s_i,j+r_i,j)≤ C(σ^2+sup_k∈[⟨ A_ϑ^(1+α )/2ξ,e_k⟩_ϑ^2])∑_i,j=1^n∑_k∈^d e^-λ_kΔ_n(i+j-2)(1-e^-λ_kΔ_n)^2/λ_k^1+α= C(σ^2+sup_k∈[⟨ A_ϑ^(1+α )/2ξ,e_k⟩_ϑ^2])∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+α(1-e^-λ_knΔ_n/1-e^-λ_kΔ_n)^2≤ C(σ^2+sup_k∈[⟨ A_ϑ^(1+α )/2ξ,e_k⟩_ϑ^2])∑_k∈^d1/λ_k^1+α=(1).Analogous computations for i>j complete the proof.§.§.§ Proof of the central limit theorem from Proposition <ref>We begin this section by decomposing a temporal increment of a mild solution X̃_t with a stationary initial condition ⟨ξ,e_k⟩_ϑ∼𝒩(0,σ^2/(2λ_k^1+α)). 
Analogously to <cit.>, we decompose the coordinate processes as follows: Δ_ix̃_k = Ã_i,k+B_i,k+C_i,k=σλ_k^-α/2∫_-∞^0e^-λ_k(-s)(e^-λ_kΔ_n-1)W_s^k +σλ_k^-α/2∫_0^ e^-λ_k(-s)(e^-λ_kΔ_n-1)W_s^k +C_i,k= B̃_i,k+C_i,k,where B̃_i,k :=σλ_k^-α/2∫_-∞^e^-λ_k(-s)(e^-λ_kΔ_n-1)W_s^k,C_i,k =σλ_k^-α/2∫_(i-1)Δ_n^iΔ_ne^-λ_k(iΔ_n-s) W_s^k.Thus, (Δ_iX̃)(y) is centred, Gaussian and stationary.To prove a central limit theorem for the volatility estimators, utilize the following theorem.Let (Z_k_n,i)_1≤ i≤ k_n a centred triangular array, with a sequence (k_n)_n∈. Then it holds:∑_i=1^k_n Z_k_n,id⟶𝒩(0,υ^2),with υ^2=lim_n(∑_i=1^k_nZ_k_n,i)<∞ if the following conditions hold: (I) (∑_i=a^bZ_k_n,i)≤ C∑_i=a^b(Z_k_n,i), for all 1≤ a≤ b≤ k_n,(II) lim sup_n∑_i=1^k_n[Z_k_n,i^2]<∞,(III) ∑_i=1^k_n[Z_k_n,i^21_{|Z_k_n,i|>ε}]n⟶0, for all ε>0,(IV) (e^ t∑_i=a^bZ_k_n,i,e^ t∑_i=b+u^c Z_k_n,i)≤ρ_t(u)∑_i=a^c(Z_k_n,i), for all 1≤ a≤ b< b+u≤ c≤ k_n and t∈,where C>0 is a universal constant and ρ_t(u)≥ 0 is a function with ∑_j=1^∞ρ_t(2^j)<∞. The associated preliminary triangular arrays for the volatility estimator from (<ref>) is defined as follows:ξ_n,i:=2^d(πη)^d/2α'Γ(d/2)/√(nm)Δ_n^α'Γ(1-α')∑_j=1^m(Δ_iX)^2(y_j)e^κy_j_1.In the following lemma, we proof that working with triangular arrays based on a SPDEs with a stationary initial condition, i.e.:ξ̃_n,i:=2^d(πη)^d/2α'Γ(d/2)/√(nm)Δ_n^α'Γ(1-α')∑_j=1^m(Δ_iX̃)^2(y_j)e^κy_j_1,is sufficient. On Assumptions <ref> und <ref>, it holds that√(m)_n∑_i=1^n((Δ_iX̃)^2(y)-(Δ_iX)^2(y))0,for n. We initiate the proof with the following:(Δ_iX̃)^2(y)-(Δ_iX)^2(y)=∑_k_1,k_2∈(Δ_ix̃_k_1Δ_ix̃_k_2-Δ_ix_k_1Δ_ix_k_2)e_k_1(y)e_k_2(y)= T̃_i-T_i,where we define:T̃_i:=∑_k_1,k_2∈^d(Ã_i,k_1Ã_i,k_2+Ã_i,k_1(B_i,k_2+C_i,k_2)+Ã_i,k_2(B_i,k_1+C_i,k_1))e_k_1(y)e_k_2(y), T_i :=∑_k_1,k_2∈^d(A_i,k_1A_i,k_2+A_i,k_1(B_i,k_2+C_i,k_2)+A_i,k_2(B_i,k_1+C_i,k_1))e_k_1(y)e_k_2(y).It remains to show that √(m)_n∑_i=1^nT_i0, since this implies √(m)_n∑_i=1^nT̃_i0. Here, we have the following:∑_i=1^nT_i=∑_i=1^n(∑_k∈^d A_i,ke_k(y))^2+2∑_i=1^n(∑_k∈^d A_i,ke_k(y))(∑_k∈^d(B_i,k+C_i,k)e_k(y)).Using Hölder's inequality we obtain:[∑_i=1^n(∑_k∈^d A_i,ke_k(y))^2] =[∑_i=1^n∑_k∈^d A_i,k^2e_k^2(y)]+[∑_i=1^n∑_k_1,k_2∈^d k_1≠k_2 A_i,k_1A_i,k_2e_k_1(y)e_k_2(y)] ≤ C [∑_i=1^n∑_k∈^d A_i,k^2]+[|∑_i=1^n∑_k_1,k_2∈^d k_1≠k_2 A_i,k_1A_i,k_2e_k_1(y)e_k_2(y)|^2]^1/2,where C>0 is a suitable constant. Let C_ξ:=sup_k∈^dλ_k^1+α[⟨ξ,e_k⟩_ϑ^2]. With analogous steps as in Lemma <ref>, we find:∑_i=1^n∑_k∈^d[ A_i,k^2] = ∑_i=1^n∑_k∈^d(e^-λ_k-e^-λ_k)^2[⟨ξ,e_k⟩^2_ϑ]≤ C_ξ∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+α∑_i=1^n e^-2λ_k≤ C_ξ∑_k∈^d1-e^-λ_kΔ_n/λ_k^1+α=𝒪(Δ_n^α').Furthermore, we have[|∑_i=1^n∑_k_1,k_2∈^d k_1≠k_2 A_i,k_1A_i,k_2e_k_1(y)e_k_2(y)|^2]= ∑_i,j=1^n∑_k_1,k_2∈^d k_1≠k_2∑_k_3,k_4∈^d k_3≠k_4 e_k_1(y)e_k_2(y)e_k_3(y)e_k_4(y)[A_i,k_1A_i,k_2A_j,k_3A_j,k_4].Let us assume that [⟨ξ,e_k⟩_ϑ]=0 from Assumption <ref>. Then, for k_1=k_3 and k_2=k_4, we have ∑_i,j=1^n∑_k_1,k_2∈^d k_1≠k_2[A_i,k_1A_i,k_2A_j,k_1 A_j,k_2] =∑_i,j=1^n∑_k_1,k_2∈^d k_1≠k_2(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2 × e^-λ_k_1(i+j-2)Δ_n-λ_k_2(i+j-2)Δ_n[⟨ξ,e_k_1⟩^2_ϑ] [⟨ξ,e_k_2⟩^2_ϑ] ≤ C_ξ^2∑_k_1,k_2∈^d k_1≠k_2(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/λ_k_1^1+αλ_k_2^1+α∑_i,j=1^ne^-(λ_k_1+λ_k_2)(i+j-2)Δ_n,where the case k_1=k_4 and k_2=k_3 works analogously. 
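The geometric-series step invoked in the next display rests on the following elementary factorization, stated here with a generic rate $c>0$ since this is all that is used:
\[
\sum_{i,j=1}^{n} e^{-c(i+j-2)\Delta_n}
=\Bigl(\sum_{i=1}^{n} e^{-c(i-1)\Delta_n}\Bigr)^{2}
=\Bigl(\frac{1-e^{-cn\Delta_n}}{1-e^{-c\Delta_n}}\Bigr)^{2}
\le\frac{1}{\bigl(1-e^{-c\Delta_n}\bigr)^{2}},
\]
applied below with $c=\lambda_{k_1}+\lambda_{k_2}$.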
By using the geometric series, we obtain:∑_i,j=1^ne^-(λ_k_1+λ_k_2)(i+j-2)Δ_n ≤1/(1-e^-(λ_k_1+λ _k_2)Δ_n)^2,and therefore, we have ∑_i,j=1^n∑_k_1,k_2∈^d k_1≠k_2[A_i,k_1A_i,k_2A_j,k_1A_j,k_2]≤ C_ξ^2∑_k_1,k_2∈^d(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/λ_k_1^1+αλ_k_2^1+α(1-e^-(λ_k_1+λ_k_2)Δ_n)^2≤ C_ξ^2∑_k_1,k_2∈^d(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)/λ_k_1^1+αλ_k_2^1+α=𝒪(Δ_n^2α'),where we have used (1-p)(1-q)/(1-pq)≤ 1-p, for 0≤ p,q<1. For the second option in Assumption <ref>, we use an analogous procedure as in Lemma <ref>. Here, we have with C_ξ':=∑_k∈λ_k^1+α[⟨ξ,e_k⟩_ϑ^2]<∞ and Parseval's identity that∑_i,j=1^n∑_k_1,k_2∈^d k_1≠k_2∑_k_3,k_4∈^d k_3≠k_4 [A_i,k_1A_i,k_2A_j,k_3A_j,k_4]≤(∑_i=1^n(∑_k∈^d[A_i,k] )^2)^2 ≤(∑_i=1^n(∑_k∈^d(e^-λ_kΔ_n-1)e^-λ_k/λ_k^(1+α)/2λ_k^(1+α)/2[⟨ξ,e_k⟩_ϑ^2] ^1/2)^2)^2≤(∑_i=1^n(∑_k∈^d(1-e^-λ_kΔ_n)^2e^-2λ_k/λ_k^1+α)(∑_k∈^dλ_k^1+α[⟨ξ,e_k⟩_ϑ^2]))^2 ≤ C_ξ'^2(∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+α(1-e^-2λ_kΔ_n))^2=𝒪(Δ_n^2α').By using Markov's inequality, we conclude with∑_i=1^n(∑_k∈^d A_i,ke_k(y))^2 = 𝒪_(Δ_n^α').Continuing, we proceed to bound the following term:2∑_i=1^n(∑_k∈^d A_i,ke_k(y))(∑_k∈^d(B_i,k+C_i,k)e_k(y)).We can make use of the independence of A_i,k,B_i,k and C_i,k to show:[|∑_i=1^n(∑_k∈^d A_i,ke_k(y))(∑_k∈^d(B_i,k+C_i,k)e_k(y))|^2]=∑_i,j=1^n(∑_k_1,k_2∈^d[A_i,k_1 A_j,k_2]e_k_1(y)e_k_2(y)) (∑_k∈^d[(B_i,k+C_i,k)(B_j,k+C_j,k)]e_k^2(y)) =:∑_i,j=1^nR_i,jS_i,j,where R_i,j :=∑_k_1,k_2∈^d^∞[A_i,k_1 A_j,k_2]e_k_1(y)e_k_2(y),S_i,j :=∑_k∈^d[(B_i,k+C_i,k)(B_j,k+C_j,k)]e_k^2(y). Assuming the first option in Assumption <ref> holds, we can analogously obtain, as in equation (<ref>), thatR_i,j = ∑_k∈^d[A_i,k A_j,k]e_k^2(y)≤ CC_ξ∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+αe^-λ_k(i+j-2)Δ_n=𝒪(Δ_n^α'),and therefore it holds that ∑_i,j=1^nR_i,j=𝒪(Δ_n^α') and sup_i,j=1,…,nR_i,j=𝒪(Δ_n^α') as well assup_j=1,…,n∑_i=1^n R_i,j=𝒪(Δ_n^α'). For the second option in Assumption <ref>, we find:R_i,j ≤ C∑_k_1,k_2∈^d(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)/λ_k_1^(1+α)/2λ_k_2^(1+α)/2e^-(λ_k_1(i-1)+λ_k_2(j-1))Δ_n ×λ_k_1^(1+α)/2[⟨ξ,e_k_1⟩_ϑ] λ_k_2^(1+α)/2[⟨ξ,e_k_2⟩_ϑ] ≤ C∑_k_1,k_2∈^d(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)/λ_k_1^(1+α)/2λ_k_2^(1+α)/2e^-(λ_k_1(i-1)+λ_k_2(j-1))Δ_n ×λ_k_1^(1+α)/2[⟨ξ,e_k_1⟩_ϑ^2 ]^1/2λ_k_2^(1+α)/2[⟨ξ,e_k_2⟩_ϑ^2 ]^1/2=C∑_k_1∈^d(1-e^-λ_k_1Δ_n)/λ_k_1^(1+α)/2e^-λ_k_1(i-1)Δ_nλ_k_1^(1+α)/2[⟨ξ,e_k_1⟩_ϑ ^2]^1/2∑_k_2∈^d(1-e^-λ_k_2Δ_n)/λ_k_2^(1+α)/2 × e^-λ_k_2(j-1)Δ_nλ_k_2^(1+α)/2[⟨ξ,e_k_2⟩_ϑ^2 ]^1/2≤ CC_ξ'(∑_k_1∈^d(1-e^-λ_k_1Δ_n)^2/λ_k_1^1+αe^-2λ_k_1(i-1)Δ_n∑_k_2∈^d(1-e^-λ_k_2Δ_n)^2/λ_k_2^1+αe^-2λ_k_2(j-1)Δ_n)^1/2,and therefore, we have∑_i,j=1^nR_i,j ≤ CC_ξ'∑_k∈^d1-e^-λ_kΔ_n/λ_k^1+α=(Δ_n^α').Thus, we infer for both options in Assumption <ref>, that sup_i,jR_i,j=𝒪(Δ_n^α') and sup_j∑_i=1^n R_i,j=𝒪(Δ_n^α'). For the term S_i,j, we obtain:S_i,j =∑_k∈^d[(B_i,k+C_i,k)(B_j,k+C_j,k)]e_k^2(y)=∑_k∈^d(Σ_i,j^B,k+Σ_i,j^BC,k+Σ_j,i^BC,k+Σ_i,j^C,k)e_k^2(y),where we used the notation of the proof of Proposition <ref>, where Σ_i,j^B,k:=(B_i,k,B_j,k), Σ_i,j^BC,k:=(B_i,k,C_j,k), Σ_i,j^C,k:=(C_i,k,C_j,k).Upon inserting the calculations of Proposition <ref>, we infer for i<j thatS_i,j =∑_k∈^d(Σ_i,j^B,k+Σ_j,i^BC,k)e_k^2(y) ≤ -σ^2 e^-κy_1Δ_n^α'Γ(1-α')/2^d(πη)^d/2α'Γ(d/2)(-1/2(j-i-1)^α'+(j-i)^α'-1/2(j-i+1)^α') +Cσ^2∑_k∈^d e^-λ_k(i+j-2)Δ_n(1-e^-λ_kΔ_n)^2 /λ_k^1+α+𝒪(Δ_n).For i=j we obtain:S_i,i =∑_k∈^d(Σ_ii^B,k+Σ_ii^C,k)e_k^2(y)≤ Cσ^2∑_k∈^d( (1-e^-λ_kΔ_n)^2 /2λ_k^1+α+1-e^-2λ_kΔ_n/2λ_k^1+α)=Cσ^2∑_k∈^d1-e^-λ_kΔ_n/λ_k^1+α . 
Utilizing equation (<ref>) we find that∑_i,j=1^nR_i,jS_i,j ≤ C∑_i,j=1^n(∑_k∈^d(1-e^-λ_kΔ_n)^2/λ_k^1+αe^-λ_k(i+j-2)Δ_n)(1_{i≠ j}Δ_n^α'i-j^α'-2 +∑_k∈^d( (1-e^-λ_kΔ_n)^2 /λ_k^1+αe^-λ_k(i+j-2)Δ_n+1_{i=j}1-e^-λ_kΔ_n/λ_k^1+α)+(Δ_n))=𝒪(Δ_n^2α'∑_j=1^∞ j^α'-2+Δ_n^2α')=𝒪(Δ_n^2α'), where C>0 is a suitable constant. From the analysis above, we find that both terms in display (<ref>) are of order _(Δ_n^α'). Therefore, we conclude that √(m)_n∑_i=1^nT_i0, which completes the proof.Thanks to the previous lemma, we have ∑_i=1^n(ξ̃_n,i-ξ_n,i)0,as n, which allows us to investigate a mild solution under a stationary condition from now on.We follow up by investigating the variance-covariance structure of the following term:V_p,Δ_n(y):=1/pΔ_n^α'∑_i=1^p(Δ_iX̃)^2(y)e^κy_1,for y∈[δ,1-δ]^d. We refer to this expression as rescaled realized volatility. On the Assumptions <ref> and <ref>, we have for the rescaled realized volatility in two spacial coordinates y_1,y_2∈[δ,1-δ]^d that(V_p,Δ_n(y_1),V_p,Δ_n(y_2)) =1_{y_1=y_2}Υ_α'/p(Γ(1-α')σ^2/2^d(πη)^d/2α'Γ(d/2))^2(1+𝒪(Δ_n^1/2∨Δ_n^1-α'/δ^d+1∨1/p)) +(Δ_n^1-α'/p(1_{y_1≠y_2}y_1-y_2_0^-(d+1)+δ^-(d+1))),where Υ_α' is a numerical constant depending on α'∈(0,1), defined in equation (<ref>).In particular we have (V_n,Δ_n(y))=Υ_α'/n(Γ(1-α')σ^2/2^d(πη)^d/2α'Γ(d/2))^2(1+(Δ_n^1/2∨Δ_n^1-α')).It holds that(V_p,Δ_n(y_1),V_p,Δ_n(y_2))=2e^κ(y_1+y_2)_1/p^2Δ_n^2α'∑_i,j=1^p(∑_k_1,k_2∈^de_k_1(y_1)e_k_1(y_2)e_k_2(y_1)e_k_2(y_2)(Δ_ix̃_k_1Δ_ix̃_k_2, Δ_jx̃_k_1Δ_jx̃_k_2))=2e^κ(y_1+y_2)_1/pΔ_n^2α'∑_k_1,k_2∈^de_k_1(y_1)e_k_1(y_2)e_k_2(y_1)e_k_2(y_2)D_k_1,k_2,where D_k_1,k_2:=1/p∑_i,j=1^p((B̃_i,k_1+C_i,k_1)(B̃_i,k_2+C_i,k_2), (B̃_j,k_1+C_j,k_1)(B̃_j,k_2+C_j,k_2)).Consider (Z_k)_k∈^d as independent standard normal distributed random variables, which are independent to B_i,k. We can express B̃_i,k as:B̃_i,k=B_i,k+1/(2λ_k^1+α)^1/2σ( e^-λ_kΔ_n-1)e^-λ_k(i-1)Δ_nZ_k.Hence, we derive the following covariance structures:(B̃_i,k,C_j,k) =(B_i,k,C_j,k)=Σ_i,j^BC,k, (B̃_i,k,B̃_j,k) =(B_i,k,B_j,k)+σ^2/2λ_k^1+α(e^-λ_kΔ_n-1)^2e^-λ_k(i+j-2)Δ_n (Z_k)=σ^2/2λ_k^1+α(e^-λ_kΔ_n-1)^2e^-λ_kΔ_ni-j=:Σ̃_i,j^B,k,where we have applied equation (<ref>). As B̃_i,k+C_i,k is centred normally distributed, we can use Isserlis' theorem to deduce thatD_k_1,k_2 =1/p∑_i,j=1^p ([(B̃_i,k_1+C_i,k_1)(B̃_j,k_1+C_j,k_1)] [(B̃_i,k_2+C_i,k_2)(B̃_j,k_2+C_j,k_2)] +[(B̃_i,k_1+C_i,k_1) (B̃_j,k_2+C_j,k_2)][(B̃_i,k_2+C_i,k_2)(B̃_j,k_1+C_j,k_1)]).For further reading on the Isserlis theorem, we recommend referring to <cit.>. Assume k_1≠k_2, then we haveD_k_1,k_2 =1/p∑_i,j=1^p [(B̃_i,k_1+C_i,k_1)(B̃_j,k_1+C_j,k_1)] [(B̃_i,k_2+C_i,k_2)(B̃_j,k_2+C_j,k_2))]=1/p∑_i,j=1^p (Σ̃_i,j^B,k_1+Σ_i,j^BC,k_1+Σ_j,i^BC,k_1+Σ_i,j^C,k_1)(Σ̃_i,j^B,k_2+Σ_i,j^BC,k_2+Σ_j,i^BC,k_2+Σ_i,j^C,k_2). We calculate each combination separately. To do this, we use the following identity:∑_i,j=1^pq^|i-j|=2q^p+1-q/(1-q)^2+p1+q/1-q,for q≠ 1. Then, we have1/p∑_i,j=1^p Σ̃_i,j^B,k_1Σ̃_i,j^B,k_2 =σ^4(e^-λ_k_1Δ_n-1)^2(e^-λ_k_2Δ_n-1)^2/4pλ_k_1^1+αλ_k_2^1+α∑_i,j=1^p e^-(λ_k_1+λ_k_2)Δ_ni-j=σ^4(e^-λ_k_1Δ_n-1)^2(e^-λ_k_2Δ_n-1)^2/4λ_k_1^1+αλ_k_2^1+α·1+e^-(λ_k_1+λ_k_2)Δ_n/1-e^-(λ_k_1+λ_k_2)Δ_n ×(1+𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n)). 
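As a quick sanity check of the identity $\sum_{i,j=1}^{p}q^{|i-j|}=2(q^{p+1}-q)/(1-q)^{2}+p(1+q)/(1-q)$ used in the preceding display, both sides can be evaluated for small $p$:
\[
p=1:\qquad \sum_{i,j=1}^{1}q^{|i-j|}=1,
\qquad
\frac{2(q^{2}-q)}{(1-q)^{2}}+\frac{1+q}{1-q}
=\frac{-2q+1+q}{1-q}=1,
\]
\[
p=2:\qquad \sum_{i,j=1}^{2}q^{|i-j|}=2+2q,
\qquad
\frac{2(q^{3}-q)}{(1-q)^{2}}+\frac{2(1+q)}{1-q}
=\frac{(1+q)(2-2q)}{1-q}=2+2q.
\]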
By utilizing equation (<ref>), we obtain:1/p∑_i,j=1^p Σ_i,j^C,k_1Σ_i,j^C,k_2 =σ^4 (1-e^-2λ_k_1Δ_n)(1-e^-2λ_k_2Δ_n)/4λ_k_1^1+αλ_k_2^1+α.Using equation (<ref>) and the identity∑_i,j=1^p1_{i>j}q^i-j=pq/1-q+q-q^p+1/(1-q)^2,yields that1/p∑_i,j=1^p Σ_i,j^BC,k_1Σ_i,j^BC,k_2 =1/p∑_i,j=1^p1_{i>j}σ^4(e^-λ_k_1Δ_n-1)(e^-λ_k_2Δ_n-1)/4λ_k_1^1+αλ_k_2^1+α × e^-(λ_k_1+λ_k_2)Δ_n(i-j)(e^λ_k_1Δ_n-e^-λ_k_1Δ_n) (e^λ_k_2Δ_n-e^-λ_k_2Δ_n)=σ^4(e^-λ_k_1Δ_n-1)(e^-λ_k_2Δ_n-1)/4λ_k_1^1+αλ_k_2^1+α·(1-e^-2λ_k_1Δ_n)(1-e^-2λ_k_2Δ_n)/1-e^-(λ_k_1+λ_k_2)Δ_n ×(1+𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n)).The same calculations apply to Σ_j,i^BC,k_1Σ_j,i^BC,k_2. As for the cross-terms, we obtain:1/p∑_i,j=1^p Σ̃_i,j^B,k_1(Σ_i,j^BC,k_2+Σ_j,i^BC,k_2) =σ^4(e^-λ_k_1Δ_n-1)^2(e^-λ_k_2Δ_n-1)/4λ_k_1^1+αλ_k_2^1+α(e^λ_k_2Δ_n-e^-λ_k_2Δ_n) 1/p∑_i,j=1^p1_{i>j} e^-(λ_k_1+λ_k_2)Δ_n(i-j) + σ^4(e^-λ_k_1Δ_n-1)^2(e^-λ_k_2Δ_n-1)/4λ_k_1^1+αλ_k_2^1+α(e^λ_k_2Δ_n-e^-λ_k_2Δ_n) 1/p∑_i,j=1^p1_{j>i} e^-(λ_k_1+λ_k_2)Δ_n(j-i)=σ^4(e^-λ_k_1Δ_n-1)^2(e^-λ_k_2Δ_n-1)/2λ_k_1^1+αλ_k_2^1+α e^-λ_k_1Δ_n1-e^-2λ_k_2Δ_n/1-e^-(λ_k_1+λ_k_2)Δ_n(1+𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n)),and1/p∑_i,j=1^p Σ̃_i,j^B,k_1Σ_i,j^C,k_2 = 1/p∑_i,j=1^pσ^2/2λ_k_1^1+α(e^-λ_k_1Δ_n-1)^2e^-λ_k_1Δ_ni-j1_{i=j}σ^2 1-e^-2λ_k_2Δ_n/2λ_k_2^1+α=σ^4(e^-λ_k_1Δ_n-1)^2(1-e^-2λ_k_2Δ_n)/4λ_k_1^1+αλ_k_2^1+α.Furthermore, the following cross-terms vanish:1/p∑_i,j=1^p Σ_i,j^BC,k_1Σ_i,j^C,k_2=1/p∑_i,j=1^p Σ_j,i^BC,k_1Σ_i,j^C,k_2=1/p∑_i,j=1^p Σ_i,j^BC,k_1Σ_j,i^BC,k_2=0.Inserting the auxiliary calculations into equation (<ref>) results in:D_k_1,k_2 =1/p∑_i,j=1^p (Σ̃_i,j^B,k_1+Σ_i,j^BC,k_1+Σ_j,i^BC,k_1+Σ_i,j^C,k_1) (Σ̃_i,j^B,k_2+Σ_i,j^BC,k_2+Σ_j,i^BC,k_2+Σ_i,j^C,k_2) =1/p∑_i,j=1^p(Σ̃_i,j^B,k_1Σ̃_i,j^B,k_2+Σ̃_i,j^B,k_1(Σ_i,j^BC,k_2+Σ_j,i^BC,k_2)+Σ̃_i,j^B,k_1Σ_i,j^C,k_2+(Σ_i,j^BC,k_1+Σ_j,i^BC,k_1)Σ̃_i,j^B,k_2 +Σ_i,j^BC,k_1Σ_i,j^BC,k_2+Σ_j,i^BC,k_1Σ_j,i^BC,k_2+Σ_i,j^C,k_1Σ̃_i,j^B,k_2+Σ_i,j^C,k_1Σ_i,j^C,k_2)=σ^4( (e^-λ_k_1Δ_n-1)^2(e^-λ_k_2Δ_n-1)^2/4λ_k_1^1+αλ_k_2^1+α(1+e^-(λ_k_1+λ_k_2)Δ_n/1-e^-(λ_k_1+λ_k_2)Δ_n +e^-λ_k_1Δ_n(1-e^-2λ_k_2Δ_n)/1-e^-(λ_k_1+λ_k_2)Δ_n·2/e^-λ_k_2Δ_n-1+e^-λ_k_2Δ_n(1-e^-2λ_k_1Δ_n)/1-e^-(λ_k_1+λ_k_2)Δ_n·2/e^-λ_k_1Δ_n-1 +(1-e^-2λ_k_1Δ_n)(1-e^-2λ_k_2Δ_n)/1-e^-(λ_k_1+λ_k_2)Δ_n·2/(e^-λ_k_1Δ_n-1)(e^-λ_k_2Δ_n-1)) +(e^-λ_k_1Δ_n-1)^2(1-e^-2λ_k_2Δ_n)/4λ_k_1^1+αλ_k_2^1+α+(e^-λ_k_2Δ_n-1)^2(1-e^-2λ_k_1Δ_n)/4λ_k_1^1+αλ_k_2^1+α +(1-e^-2λ_k_1Δ_n)(1-e^-2λ_k_2Δ_n)/4λ_k_1^1+αλ_k_2^1+α)(1+𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n)).Using the identity (e^2x-1)/(e^x-1)=e^x+1, we haveD_k_1,k_2 =σ^4((e^-λ_k_1Δ_n-1)^2(e^-λ_k_2Δ_n-1)^2/4λ_k_1^1+αλ_k_2^1+α·3-e^-(λ_k_1+λ_k_2)Δ_n/1-e^-(λ_k_1+λ_k_2)Δ_n +(e^-λ_k_1Δ_n-1)^2(1-e^-2λ_k_2Δ_n)/4λ_k_1^1+αλ_k_2^1+α+(e^-λ_k_2Δ_n-1)^2(1-e^-2λ_k_1Δ_n)/4λ_k_1^1+αλ_k_2^1+α +(1-e^-2λ_k_1Δ_n)(1-e^-2λ_k_2Δ_n)/4λ_k_1^1+αλ_k_2^1+α)×(1+𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n))=σ^4((e^-λ_k_1Δ_n-1)^2(e^-λ_k_2Δ_n-1)^2/4λ_k_1^1+αλ_k_2^1+α·4-2e^-(λ_k_1+λ_k_2)Δ_n/1-e^-(λ_k_1+λ_k_2)Δ_n+(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)/4λ_k_1^1+αλ_k_2^1+α ×2(2-(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)))(1+𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n))=σ^4((1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/2λ_k_1^1+αλ_k_2^1+α·1/1-e^-(λ_k_1+λ_k_2)Δ_n+(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)/λ_k_1^1+αλ_k_2^1+α) ×(1+𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n)).Recalling the calculations of the covariance yields:(V_p,Δ_n(y_1),V_p,Δ_n(y_2))=2e^κ(y_1+y_2)_1σ^4/pΔ_n^2α'∑_k_1,k_2∈^dk_1 ≠k_2e_k_1(y_1)e_k_1(y_2)e_k_2(y_1)e_k_2(y_2)D̅_k_1,k_2(1+𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n)) +2e^κ(y_1+y_2)_1/pΔ_n^2α'∑_k∈^de^2_k(y_1)e^2_k(y_2)D_k,k ,where we 
define:D̅_k_1,k_2:=(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/2λ_k_1^1+αλ_k_2^1+α·1/1-e^-(λ_k_1+λ_k_2)Δ_n+(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)/λ_k_1^1+αλ_k_2^1+α. Regarding the remainder, we utilize the inequality (1-e^-(x+y))^-1≤ (1-e^-x)^-1/2(1-e^-y)^-1/2. For a sufficiently large p, we deduce that1/p^2Δ_n^2α'∑_k_1,k_2∈^d k_1≠k_2D_k_1,k_2/1-e^-(λ_k_1+λ_k_2)Δ_n ≤3/p^2Δ_n^2α'(Δ_n^1+α∑_k∈^d(1-e^-λ_kΔ_n)^1/2/2(λ_kΔ_n)^1+α)^2=3/p^2(Δ_n^d/2∑_k∈^d(1-e^-λ_kΔ_n)^1/2/2(λ_kΔ_n)^1+α)^2. Thanks to Lemma <ref>, we obtain the convergence of the series, such that1/p^2Δ_n^2α'∑_k_1,k_2∈^d k_1≠k_2D_k_1,k_2/1-e^-(λ_k_1+λ_k_2)Δ_n=(1/p^2(∫_0^∞√(1-e^-x)/x^1+α' x)^2)=(p^-2). For small p we always obtain a bound of order (p^-1), and obtain:2e^κ(y_1+y_2)_1σ^4/pΔ_n^2α'∑_k_1,k_2∈^dk_1≠k_2e_k_1(y_1)e_k_1(y_2)e_k_2(y_1)e_k_2(y_2)D̅_k_1,k_2·𝒪(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_n)=(1/p(1∧1/p)).Thus, we find:(V_p,Δ_n(y_1),V_p,Δ_n(y_2)) =2e^κ(y_1+y_2)_1σ^4/pΔ_n^2α'∑_k_1,k_2∈^dk_1 ≠k_2e_k_1(y_1)e_k_1(y_2)e_k_2(y_1)e_k_2(y_2)D̅_k_1,k_2 +2e^κ(y_1+y_2)_1/pΔ_n^2α'∑_k∈^de^2_k(y_1)e^2_k(y_2)D_k,k+(1/p(1∧1/p)).For k_1=k_2=k we haveD_k,k =1/p(∑_i,j=1^p[(B̃_i,k+C_i,k) (B̃_i,k+C_i,k)(B̃_j,k+C_j,k)(B̃_j,k+C_j,k))] -[(B̃_i,k+C_i,k) (B̃_i,k+C_i,k)][(B̃_j,k+C_j,k)(B̃_j,k+C_j,k))])≤4/p∑_i,j=1^p (Σ̃_i,j^B,k)^2+2(Σ_i,j^BC,k)^2+(Σ_i,j^C,k)^2. Calculating the covariance terms results in:1/p∑_i,j=1^p Σ̃_i,j^B,kΣ̃_i,j^B,k =σ^4(1-e^-λ_kΔ_n)^4/4λ_k^2(1+α)1+e^-2λ_kΔ_n/1-e^-2λ_kΔ_n(1+𝒪(1∧p^-1/1-e^-2λ_kΔ_n)),1/p∑_i,j=1^p Σ_i,j^BC,kΣ_i,j^BC,k =σ^4(1-e^-λ_kΔ_n)^2/4λ_k^2(1+α)·(1-e^-2λ_kΔ_n)^2/1-e^-2λ_kΔ_n(1+𝒪(1∧p^-1/1-e^-2λ_kΔ_n)), 1/p∑_i,j=1^p Σ_i,j^C,kΣ_i,j^C,k =σ^4 (1-e^-2λ_kΔ_n)^2/4λ_k^2(1+α),where we used analogous steps as for k_1≠k_2. For k_1=k_2=k we derive thatD_k,k ≤σ^4((1-e^-λ_kΔ_n)^4/λ_k^2(1+α)1+e^-2λ_kΔ_n/1-e^-2λ_kΔ_n +2(1-e^-λ_kΔ_n)^2/λ_k^2(1+α)(1-e^-2λ_kΔ_n)+ (1-e^-2λ_kΔ_n)^2/λ_k^2(1+α))(1+𝒪(1∧p^-1/1-e^-2λ_kΔ_n)),where we define:D_k,k:=(1-e^-λ_kΔ_n)^4/λ_k^2(1+α)1+e^-2λ_kΔ_n/1-e^-2λ_kΔ_n +2(1-e^-λ_kΔ_n)^2/λ_k^2(1+α)(1-e^-2λ_kΔ_n)+(1-e^-2λ_kΔ_n)^2/λ_k^2(1+α). We demonstrate that D_k,k is negligible, as can be seen by the following:1/pΔ_n^2α'∑_k∈^dD_k,k =1/pΔ_n^2α'∑_k∈^d(1-e^-2λ_kΔ_n)^2/λ_k^2(1+α)((1-e^-λ_kΔ_n)^4(1+e^-2λ_kΔ_n)/(1-e^-2λ_kΔ_n)^3+2(1-e^-λ_kΔ_n)^2/1-e^-2λ_kΔ_n+1)≤4/pΔ_n^2α'∑_k∈^d(1-e^-2λ_kΔ_n)^2/λ_k^2(1+α)=4Δ_n^d/2/pΔ_n^d/2∑_k∈^d(1-e^-2λ_kΔ_n/(λ_kΔ_n)^1+α)^2= (p^-1Δ_n^2(1-α')), where we can use analogous steps as in Lemma <ref> to show that(1-e^-2x/x^1+α)^2=f_α^2(x)∈𝒬_β, with β=(4α,1+4α,2+4α).Hence, we have(V_p,Δ_n(y_1),V_p,Δ_n(y_2))=2σ^4e^κ(y_1+y_2)_1/pΔ_n^2α'∑_k_1,k_2∈^d k_1≠k_2e_k_1(y_1)e_k_1(y_2)e_k_2(y_1)e_k_2(y_2)D̅_k_1,k_2 +𝒪(1/p(Δ_n^2(1-α')+1/p∧1)). We can represent the term D̅_k_1,k_2 from equation (<ref>) as:D̅_k_1,k_2=(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/2λ_k_1^1+αλ_k_2^1+α∑_r=0^∞ e^-r(λ_k_1+λ_k_2)Δ_n+(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)/λ_k_1^1+αλ_k_2^1+α,and decompose as follows:D̅_k_1,k_2^1 :=∑_r=0^∞(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/2λ_k_1^1+αλ_k_2^1+αe^-r(λ_k_1+λ_k_2)Δ_n, D̅_k_1,k_2^2 :=(1-e^-λ_k_1Δ_n)(1-e^-λ_k_2Δ_n)/λ_k_1^1+αλ_k_2^1+α.Assume y_1≠y_2, then we have e_k(y_1)e_k(y_2)=e^-κ (y_1+y_2)_1∏_l=1^d(cos(π k_l(y^(1)_l-y^(2)_l))-cos(π k_l(y^(1)_l+y^(2)_l))).Let x_l^(1),x_l^(2)∈{(y^(1)_l-y^(2)_l)/2,(y^(1)_l+y^(2)_l)/2}, then we find: 1/pΔ_n^2α'∑_k_1,k_2∈^dD̅_k_1,k_2^1∏_l=1^dcos(2π k^(1)_lx_l^(1))cos(2π k^(2)_lx_l^(2))=2/p∑_r=0^∞(Δ_n^d/2∑_k_1∈^dg_α,r(λ_k_1Δ_n)∏_l=1^dcos(2π k^(1)_lx_l^(1)))(Δ_n^d/2∑_k_2∈^dg_α,r(λ_k_2Δ_n)∏_l=1^dcos(2π k^(2)_lx_l^(2))).Note, that y_1≠y_2 only implies that one coordinate y_l^(1)≠ y_l^(2) differs. 
To analyse the order of one of the series in the last display, we can utilize Corollary <ref> (ii) and (iii) on the function g_α,τ∈𝒬_(2α,2(1+α),2(1+α)) from display (<ref>), which gives the following:Δ_n^d/2∑_k_2∈^dg_α,r(λ_k_2Δ_n)∏_l=1^dcos(2π k^(2)_lx_l^(2)) =(Δ_n^1-α'/y_1-y_2_0^d+1+Δ_n^1-α'/δ^d+1). Here, we considered the case when y_1≠y_2 differing in every component, i.e., we used the order from Lemma <ref> (iii) and took into account that x_l can exceed and fall below the limit of 1-δ and δ, respectively, by inserting the bounds y_1-y_2_0 and δ. Hence, we have 2/p∑_r=0^∞(Δ_n^d/2∑_k_1∈^dg_α,r(λ_k_1Δ_n)∏_l=1^dcos(2π k^(1)_lx_l^(1)))(Δ_n^d/2∑_k_2∈^dg_α,r(λ_k_2Δ_n)∏_l=1^dcos(2π k^(2)_lx_l^(2)))=(Δ_n^1-α'/p(y_1-y_2_0^-(d+1)+δ^-(d+1))∑_r=0^∞Δ_n^d/2∑_k∈^d|g_α,r(λ_kΔ_n)|)=(Δ_n^1-α'/p(y_1-y_2_0^-(d+1)+δ^-(d+1))(Δ_n^d/2∑_k∈^d1-e^-λ_kΔ_n/(λ_kΔ_n)^1+α))=(Δ_n^1-α'/p(y_1-y_2_0^-(d+1)+δ^-(d+1))). Analogously, we consider the second term D̅^2 with the function f_α from equation (<ref>), which gives us the following:(V_p,Δ_n(y_1),V_p,Δ_n(y_2)) =(Δ_n^1-α'/p(y_1-y_2_0^-(d+1)+δ^-(d+1)))+𝒪(1/p(Δ_n^2(1-α')+1/p∧1))=(Δ_n^1-α'/p(y_1-y_2_0^-(d+1)+δ^-(d+1))),for y_1≠y_2. Thus, it remains to compute the variance, where y_1=y_2=y∈[δ,1-δ]^d. Again, utilizinge_k(y)e_k(y) =e^-2κy_1∏_l=1^d(cos(0)-cos(2π k_ly_l)),and having x_l^(1),x_l^(2)∈{0,y_l}, we infer analogously to display (<ref>) that1/pΔ_n^2α'∑_k_1,k_2∈^dD̅_k_1,k_2^1∏_l=1^dcos(2π k^(1)_lx_l^(1))cos(2π k^(2)_lx_l^(2))=2/p∑_r=0^∞(Δ_n^d/2∑_k_1∈^dg_α,r(λ_k_1Δ_n)∏_l=1^dcos(2π k^(1)_lx_l^(1)))(Δ_n^d/2∑_k_2∈^dg_α,r(λ_k_2Δ_n)∏_l=1^dcos(2π k^(2)_lx_l^(2))).Now assume, without loss of generality, that ∑_j=1^d1_{x_j^(1)≠ 0}=l, for 1≤ l≤ d. Then, by Corollary <ref> (ii) and (iii), we haveΔ_n^d/2∑_k_2∈^dg_α,r(λ_k_2Δ_n)∏_l=1^dcos(2π k^(2)_lx_l^(2)) =(Δ_n^l/2∨Δ_n^1-α'/δ^d+1).Hence, within this setting, we conclude that1/pΔ_n^2α'∑_k_1,k_2∈^dD̅_k_1,k_2^1∏_l=1^dcos(2π k^(1)_lx_l^(1))cos(2π k^(2)_lx_l^(2))=(1/p(Δ_n^1/2∨Δ_n^1-α'/δ^d+1)),and it follows that(V_p,Δ_n(y)) =2σ^4e^2κy_1/pΔ_n^2α'∑_k_1,k_2∈^d k_1≠k_2e^2_k_1(y)e^2_k_2(y)D̅_k_1,k_2+𝒪(1/p(Δ_n^2(1-α')+1/p∧1))=2σ^4/pΔ_n^2α'∑_k_1,k_2∈^d k_1≠k_2D̅_k_1,k_2+𝒪(1/p(Δ_n^1/2∨Δ_n^1-α'/δ^d+1+1/p∧1)).For the leading term we obtain:2σ^4/pΔ_n^2α'∑_k_1,k_2∈^d k_1≠k_2D̅_k_1,k_2 =σ^4/p(∑_r=0^∞(2Δ_n^d/2∑_k∈^d(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+αe^-rλ_kΔ_n)^2+2(Δ_n^d/2∑_k∈^d1-e^-λ_kΔ_n/(λ_kΔ_n)^1+α)^2)=σ^4/p(∑_r=0^∞(2Δ_n^d/2∑_k∈^dg_α,r(λ_kΔ_n))^2+2(Δ_n^d/2∑_k∈^df_α(λ_kΔ_n))^2),and by Lemma <ref> we have (V_p,Δ_n(y)) =1/p(Γ(1-α')σ^2/2^d(πη)^d/2α'Γ(d/2))^2(∑_r=0^∞(-r ^α'+2 (r +1)^α'-(r +2)^α')^2 +2) +𝒪(1/p(Δ_n^1/2∨Δ_n^1-α'/δ^d+1+1/p∧1)).Defining the constantΥ_α':=(∑_r=0^∞(-r ^α'+2 (r +1)^α'-(r +2)^α')^2 +2)completes the proof.The following proposition and corollary prove the general mixing-type Condition (IV) from Proposition <ref>.Grant the Assumptions <ref> and <ref>. Let y∈ [δ,1-δ]^d for a δ>0, 1≤ r<r+u≤ v≤ n natural numbers and Q_1^r=∑_i=1^r(Δ_iX̃)^2(y), Q_r+u^v=∑_i=r+u^v(Δ_iX̃)^2(y),then there exists a constant C, where 0<C<∞, such that it holds for all t∈ that(e^ t(Q_1^r-[Q_1^r]), e^ t(Q_r+u^v-[Q_r+u^v]))≤Ct^2/u^1-α'/2√((Q_1^r)(Q_r+u^v)).Assume Q^v_r+u=A_1+A_2 with some A_2 which is independent to Q_1^r. Then we know by <cit.> that(e^ tQ̅_1^r,e^ tQ̅_r+u^v)≤ 2t^2[(Q̅_1^r)^2]^1/2[(A̅_1)^2] ^1/2,where X̅=X-[X]. 
For r≤ i-1 we obtain:Δ_iX̃(y) =∑_k∈^d(σλ_k^-α/2∫_-∞^rΔ_ne^-λ_k(-s)(e^-λ_kΔ_n-1)W_s^k)e_k(y) +∑_k∈^d(σλ_k^-α/2∫_rΔ_n^e^-λ_k(-s)(e^-λ_kΔ_n-1)W_s^k + σλ_k^-α/2∫_^e^-λ_k(-s) W_s ^k)e_k(y) =∑_k∈^d D_1^k,ie_k(y)+∑_k∈^d D_2^k,ie_k(y),whereD_1^k,i := σλ_k^-α/2∫_-∞^rΔ_ne^-λ_k(-s)(e^-λ_kΔ_n-1)W_s^k,D_2^k,i :=σλ_k^-α/2∫_rΔ_n^e^-λ_k(-s)(e^-λ_kΔ_n-1)W_s^k + σλ_k^-α/2∫_^e^-λ_k(-s) W_s ^k. We can establish that D_1^k,i and D_2^k,i are independent, thus yielding the following result:Q_r+u^v=∑_i=r+u^v (∑_k∈^d D_1^k,ie_k(y))^2+ 2∑_i=r+u^v (∑_k∈^d D_1^k,ie_k(y))(∑_k∈^d D_2^k,ie_k(y)) +∑_i=r+u^v (∑_k∈^d D_2^k,ie_k(y))^2,which implies the following decomposition:A_1 := ∑_i=r+u^v (∑_k∈^d D_1^k,ie_k(y))^2+ 2∑_i=r+u^v (∑_k∈^d D_1^k,ie_k(y))(∑_k∈^d^∞ D_2^k,ie_k(y)), A_2 :=∑_i=r+u^v (∑_k∈^d D_2^k,ie_k(y))^2,where A_2 is independent to Q_1^r. Hence, our focus shifts to bounding the term [A̅_1^2], which is equivalent to computing (A_1). We begin with the following considerations:[A̅_1^2] ≤[A_1^2]= [(∑_i=r+u^v (∑_k∈^d D_1^k,ie_k(y))^2+ 2∑_i=r+u^v (∑_k∈^d D_1^k,ie_k(y))(∑_k∈^d D_2^k,ie_k(y)))^2] = ∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y))^2(∑_k∈^d D_1^k,je_k(y))^2] + 4∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y))^2(∑_k∈^d D_1^k,je_k(y))(∑_k∈^d D_2^k,je_k(y))] +4∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y))(∑_k∈^d D_2^k,ie_k(y)) (∑_k∈^d D_1^k,je_k(y))(∑_k∈^d D_2^k,je_k(y))],where the cross-term between D_1^k,i,D_2^k,i vanishes as both terms are centred normally distributed. Therefore, we use [A̅_1^2]≤ T_1+4T_2, where we define:T_1 :=∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y))^2(∑_k∈^d D_1^k,je_k(y))^2],T_2 :=∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y))(∑_k∈^d D_2^k,ie_k(y)) (∑_k∈^d D_1^k,je_k(y))(∑_k∈^d D_2^k,je_k(y))]. To bound the term T_1, we can utilize the expression D_1^k,i=e^-λ_k(i-r-1)Δ_nB̃_r+1,k, where B̃_i,k is defined in equation (<ref>), leading to the following calculation:T_1=∑_i,j=r+u^v ∑_k_1,k_2,k_3,k_4∈^d[e^-λ_k_1(i-r-1)Δ_nB̃_r+1,k_1e_k_1(y)e^-λ_k_2(i-r-1)Δ_nB̃_r+1,k_2e_k_2(y) × e^-λ_k_3(j-r-1)Δ_nB̃_r+1,k_3e_k_3(y) e^-λ_k_4(j-r-1)Δ_nB̃_r+1,k_4e_k_4(y)]. Note, that any combination of indices results in a value of zero, unless, exactly two indices are the same, or all four indices are equal. 
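This observation is the fourth-moment structure of an independent family of centred Gaussian random variables (Isserlis' theorem); for the family $(\tilde B_{r+1,k})_{k\in\mathbb{N}^{d}}$ it amounts to the case distinction
\[
\mathbb{E}\bigl[\tilde B_{r+1,k_1}\tilde B_{r+1,k_2}\tilde B_{r+1,k_3}\tilde B_{r+1,k_4}\bigr]
=\begin{cases}
3\,\mathbb{E}\bigl[\tilde B_{r+1,k}^{2}\bigr]^{2}, & k_1=\dots=k_4=k,\\[3pt]
\mathbb{E}\bigl[\tilde B_{r+1,k_a}^{2}\bigr]\,\mathbb{E}\bigl[\tilde B_{r+1,k_b}^{2}\bigr], & \text{the indices form two distinct pairs } k_a\neq k_b,\\[3pt]
0, & \text{some index occurs exactly once},
\end{cases}
\]
since a factor whose index occurs exactly once is centred and independent of the remaining factors, and $\mathbb{E}[Z^{4}]=3\,\mathbb{E}[Z^{2}]^{2}$ for a centred Gaussian $Z$.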
Thus, we obtain for k_1=…=k_4=k that∑_i,j=r+u^v ∑_k∈^d e^-2λ_k(i+j-2r-2)Δ_n[B̃_r+1,k^4]e_k^4(y).For k_1=k_2 and k_3=k_4, with k_1≠k_3 we find that∑_i,j=r+u^v∑_k_1,k_2∈^d k_1≠k_2 e^-2λ_k_1(i-r-1)Δ_ne^-2λ_k_2(j-r-1)Δ_n[B̃_r+1,k_1^2B̃_r+1,k_2^2]e_k_1^2(y)e_k_2^2(y) =∑_i,j=r+u^v∑_k_1,k_2∈^d k_1≠k_2 e^-2λ_k_1(i-r-1)Δ_n-2λ_k_2(j-r-1)Δ_n[B̃_r+1,k_1^2][B̃_r+1,k_2^2]e_k_1^2(y)e_k_2^2(y).The remaining combinations yield the following:∑_i,j=r+u^v∑_k_1,k_2∈^d k_1≠k_2 e^-λ_k_1(i+j-2r-2)Δ_ne^-λ_k_2(i+j-2r-2)Δ_n[B̃_r+1,k_1^2B̃_r+1,k_2^2]e_k_1^2(y)e_k_2^2(y) =∑_i,j=r+u^v∑_k_1,k_2∈^d k_1≠k_2^∞ e^-(λ_k_1+λ_k_2)(i+j-2r-2)Δ_n[B̃_r+1,k_1^2][B̃_r+1,k_2^2]e_k_1^2(y)e_k_2^2(y),and we observe:T_1 = ∑_i,j=r+u^v∑_k_1,k_2∈^d k_1≠k_2 e^-2λ_k_1(i-r-1)Δ_n-2λ_k_2(j-r-1)Δ_n[B̃_r+1,k_1^2][B̃_r+1,k_2^2]e_k_1^2(y)e_k_2^2(y) +2∑_i,j=r+u^v∑_k_1,k_2∈^d k_1≠k_2 e^-(λ_k_1+λ_k_2)(i+j-2r-2)Δ_n[B̃_r+1,k_1^2][B̃_r+1,k_2^2]e_k_1^2(y)e_k_2^2(y) +∑_i,j=r+u^v ∑_k∈^d e^-2λ_k(i+j-2r-2)Δ_n[B̃_r+1,k^4]e_k^4(y) =σ^4∑_k_1,k_2∈^d k_1≠k_2(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/4λ_k_1^1+αλ_k_2^1+α(∑_i=r+u^v e^-2λ_k_1(i-r-1)Δ_n)(∑_j=r+u^v e^-2λ_k_2(j-r-1)Δ_n) × e_k_1^2(y)e_k_2^2(y) +σ^4∑_k_1,k_2∈^d k_1≠k_2(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/4λ_k_1^1+αλ_k_2^1+α2(∑_i=r+u^v e^-(λ_k_1+λ_k_2)(i-r-1)Δ_n)^2 e_k_1^2(y)e_k_2^2(y) +3σ^4∑_k∈^d^∞(1-e^-λ_kΔ_n)^4/4λ_k^2(1+α)(∑_i=r+u^v e^-2λ_k(i-r-1)Δ_n)^2e_k^4(y), where we used equation (<ref>), which implies:[B̃_r+1,k^2]=σ^2/2λ_k^1+α(1-e^-λ_kΔ_n)^2.Let p̅=v-r-u+1 and u≥ 2. We begin by bounding the eigenfunctions (e_k) with a suitable constant C>0. Additionally, we have the following:∑_i=r+u^v e^-2λ_k(i-r-1)Δ_n = e^-2λ_k(u-1)Δ_n1-e^-2λ_kΔ_np̅/1-e^-2λ_kΔ_n,∑_i=r+u^v e^-(λ_k_1+λ_k_2)(i-r-1)Δ_n =e^-(λ_k_1+λ_k_2)(u-1)Δ_n1-e^-(λ_k_1+λ_k_2)Δ_n p̅/1-e^-(λ_k_1+λ_k_2)Δ_n.Thus, we obtain:(∑_i=r+u^v e^-2λ_k_1(i-r-1)Δ_n)(∑_j=r+u^v e^-2λ_k_2(j-r-1)Δ_n)≤ e^-2(λ_k_1+λ_k_2)(u-1)Δ_n1-e^-2λ_k_2Δ_np̅/(1-e^-2λ_k_1Δ_n)(1-e^-2λ_k_2Δ_n)≤ e^-2(λ_k_1+λ_k_2)(u-1)Δ_np̅/(1-e^-2λ_k_1Δ_n),as well as (∑_i=r+u^v e^-(λ_k_1+λ_k_2)(i-r-1)Δ_n)^2≤ e^-2(λ_k_1+λ_k_2)(u-1)Δ_np̅/1-e^-(λ_k_1+λ_k_2)Δ_n,(∑_i=r+u^v e^-2λ_k(i-r-1)Δ_n)^2≤ e^-4λ_k(u-1)Δ_np̅/1-e^-2λ_kΔ_n.Finally, we conclude with the following calculations:T_1 ≤ C^4σ^4(∑_k_1,k_2∈^d k_1≠k_2(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/4λ_k_1^1+αλ_k_2^1+αe^-2(λ_k_1+λ_k_2)(u-1)Δ_np̅/(1-e^-2λ_k_1Δ_n) +∑_k_1,k_2∈^d k_1≠k_2(1-e^-λ_k_1Δ_n)^2(1-e^-λ_k_2Δ_n)^2/4λ_k_1^1+αλ_k_2^1+αe^-2(λ_k_1+λ_k_2)(u-1)Δ_n2p̅/1-e^-(λ_k_1+λ_k_2)Δ_n +3∑_k∈^d^∞(1-e^-λ_kΔ_n)^4/4λ_k^2(1+α)e^-4λ_k(u-1)Δ_np̅/1-e^-2λ_kΔ_n)≤ C^4σ^43p̅(∑_k_1∈^d(1-e^-λ_k_1Δ_n)/2λ_k_1^1+αe^-2λ_k_1(u-1)Δ_n)(∑_k_2∈^d(1-e^-λ_k_2Δ_n)^2/2λ_k_2^1+αe^-2λ_k_2(u-1)Δ_n)≤ C'σ^4p̅Δ_n^2α'(∫_0^∞ x^d/2-1(1-e^-x)/2x^1+αe^-2x(u-1) x)(∫_0^∞ x^d/2-1(1-e^-x)^2/2x^1+αe^-2x(u-1) x).Utilizing analogous steps as for Lemma <ref>, we obtain for both integrals that∫_0^∞ x^d/2-1(1-e^-x)^l/2x^1+αe^-2xτ x=(1/τ^l-α'),where l=1,2. 
Therefore, we concludeT_1≤ Cσ^4p̅Δ_n^2α'/(u-1)^3-2α',for a suitable C>0.For the term T_2, according to equation (<ref>), we have the following expression:T_2 =∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y))(∑_k∈^d D_2^k,ie_k(y)) (∑_k∈^d D_1^k,je_k(y))(∑_k∈^d D_2^k,je_k(y))]=∑_i,j=r+u^v(∑_k∈^d[D_1^k,iD_1^k,j]e_k^2(y))(∑_k∈^d[D_2^k,iD_2^k,j]e_k^2(y)).For the first expected value, we find that[D_1^k,iD_1^k,j] =σ^2λ_k^-α(1-e^-λ_kΔ_n)^2e^-λ_k(i+j-2)Δ_n∫_-∞^rΔ_ne^2λ_ks s=σ^2(1-e^-λ_kΔ_n)^2/2λ_k^1+αe^-λ_k(i+j-2r-2)Δ_n.The second expected value calculates for i≤ j as follows:[D_2^k,iD_2^k,j] =[(σλ_k^-α/2∫_rΔ_n^e^-λ_k(-s)(e^-λ_kΔ_n-1)W_s^k + C_i,k) ×(σλ_k^-α/2∫_rΔ_n^(j-1)Δ_ne^-λ_k((j-1)Δ_n-s)(e^-λ_kΔ_n-1)W_s^k +C_j,k)]=σ^2(1-e^-λ_kΔ_n)^2/2λ_k^1+α(e^-λ_k(j-i)Δ_n-e^-λ_k(i+j-2r-2)Δ_n)+Σ_j,i^BC,k+Σ_i,j^C,k. As discussed in Proposition <ref>, we find that Σ_j,i^BC,k=0, when i=j, and Σ_i,j^C,k=0, when i≠ j. In particular, for the case when i<j, we have the following expression:[D_2^k,iD_2^k,j] =σ^2(1-e^-λ_kΔ_n)^2/2λ_k^1+α(e^-λ_k(j-i)Δ_n-e^-λ_k(i+j-2r-2)Δ_n) +σ^2e^-λ_kΔ_n(j-i)(e^λ_kΔ_n-e^-λ_kΔ_n)e^-λ_kΔ_n-1/2λ_k^1+α≤σ^2e^-λ_k(j-i)Δ_n1-e^-λ_kΔ_n/2λ_k^1+α(1-e^λ_kΔ_n)≤ 0. Using this calculations along with equation (<ref>), we can derive the following:T_2 ≤ C^4σ^4∑_i=r+u^v(∑_k∈^d(1-e^-λ_kΔ_n)^2/2λ_k^1+αe^-2λ_k(i-r-1)Δ_n) ×(∑_k∈^d(1-e^-λ_kΔ_n)^2/2λ_k^1+α(1-e^-2λ_k(i-r-1)Δ_n)+σ^-2Σ_i,i^C,k) +2∑_i,j=r+u i<j^v(∑_k∈^d(1-e^-λ_kΔ_n)^2/2λ_k^1+αe^-λ_k(i+j-2r-2)Δ_ne_k^2(y))(∑_k∈^d[D_2^k,iD_2^k,j]e_k^2(y))≤C^4σ^4∑_i=r+u^v(∑_k∈^d(1-e^-λ_kΔ_n)^2/2λ_k^1+αe^-2λ_k(i-r-1)Δ_n)(∑_k∈^d(1-e^-λ_kΔ_n)^2+1-e^-2λ_kΔ_n/2λ_k^1+α)≤ C^4σ^4 Δ_n^2α'p̅(Δ_n^d/2∑_k∈^d(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+αe^-2λ_k(u-1)Δ_n)(Δ_n^d/2∑_k∈^d1-e^-λ_kΔ_n/(λ_kΔ_n)^1+α).By utilizing analogous steps as for the term T_1, we obtain the following expression for T_2:T_2≤ Cσ^4p̅Δ_n^2α'/(u-1)^2-α',with a suitable constant C>0. Thereby, we conclude for u≥ 2 that[A̅_1^2]≤ Cσ^4p̅Δ_n^2α'/(u-1)^2-α'.Finally, using Proposition <ref>, we find that((Q_r+u^v)^2)≥ C[(Q_r+u^v)^2]=C[(∑_i=r+u^v(Δ_iX̃)^2(y))^2]≥ C∑_i=r+u^v[(Δ_iX̃)^4(y)]≥ C”σ^4p̅Δ_n^2α'.This, and a simple bound for u=1 complete the proof.On the assumptions of Proposition <ref>, it holds for 1≤ r<r+u≤ v≤ n andQ̃_1^r=∑_i=1^r ξ̃_n,i, Q̃_r+u^v=∑_i=r+u^v ξ̃_n,i,that there is a constant C, with 0<C<∞ and ξ̃_n,i from equation (<ref>), such that for all t∈ it holds:(e^ t(Q̃_1^r-[Q̃_1^r]), e^ t(Q̃_r+u^v-[Q̃_r+u^v]))≤Ct^2/u^1-α'/2√((Q̃_1^r)(Q̃_r+u^v)). We present the proof analogously to Proposition <ref> and begin by decomposing the term Q̃_r+u^v as follows:Q̃_r+u^v=∑_i=r+u^v ξ̃_n,i=2^d(πη)^d/2α'Γ(d/2)/√(nm)Δ_n^α'Γ(1-α')∑_j=1^m(A_1(y_j)+A_2(y_j))e^κy_j_1,whereA_1(y) := ∑_i=r+u^v (∑_k∈^d D_1^k,ie_k(y))^2+ 2∑_i=r+u^v (∑_k∈^d D_1^k,ie_k(y))(∑_k∈^d^∞ D_2^k,ie_k(y)), A_2(y) :=∑_i=r+u^v (∑_k∈^d D_2^k,ie_k(y))^2,and an analogous definition of D_1^k,i and D_2^k,i as in the equations (<ref>) and (<ref>). Thereby, we need to bound the following expression:(K/√(nm)Δ_n^α'∑_j=1^m A_1(y_j)e^κy_j_1) =K^2/nmΔ_n^2α'∑_j=1^m(A_1(y_j)) e^2κy_j_1 +K^2/nmΔ_n^2α'∑_j_1,j_2=1 j_1≠ j_2^m(A_1(y_j_1),A_1(y_j_2))e^κ (y_j_1+y_j_2)_1 ,whereK:=2^d(πη)^d/2α'Γ(d/2)/Γ(1-α').Let p̅=v-r-u+1 and u≥ 2. Thanks to Proposition <ref>, we obtain the following:K^2/nmΔ_n^2α'∑_j=1^m(A_1(y_j)) e^2κy_j_1 ≤ Cσ^4p̅K^2Δ_n/(u-1)^2-α',where we used the bound for [A̅_1^2] from display (<ref>). For the covariance, we exploit the independence of D_1^k,i and D_2^k,i, along with both terms being centred normals. 
This allows us to derive the following:(A_1(y_1),A_1(y_2)) = ∑_i,j=r+u^v[ (∑_k∈^d D_1^k,ie_k(y_1))^2 (∑_k∈^d D_1^k,je_k(y_2))^2] +4∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y_1))(∑_k∈^d^∞ D_2^k,ie_k(y_1)) ×(∑_k∈^d D_1^k,je_k(y_2))(∑_k∈^d^∞ D_2^k,je_k(y_2))] -∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y_1))^2][(∑_k∈^d D_1^k,je_k(y_2))^2]. Since we can bound the eigenfunctions (e_k) by a suitable constant C>0 for all k∈^d, we observe that the covariance includes the terms T_1 and T_2 from the displays (<ref>) and (<ref>), respectively. Therefore, we can repeat the calculations from Proposition <ref> concerning the eigenfunctions, leading to the following:T̃_1≤σ^4p̅(∑_k∈^d(1-e^-λ_kΔ_n)/2λ_k^1+αe^-2λ_k(u-1)Δ_ne_k^2(y_1))(∑_k∈^d(1-e^-λ_kΔ_n)^2/2λ_k^1+αe^-2λ_k(u-1)Δ_ne_k^2(y_2)) +σ^42p̅(∑_k∈^d(1-e^-λ_kΔ_n)/2λ_k^1+αe^-2λ_k(u-1)Δ_ne_k(y_1)e_k(y_2)) ×(∑_k∈^d(1-e^-λ_kΔ_n)^2/2λ_k^1+αe^-2λ_k(u-1)Δ_ne_k(y_1)e_k(y_2)).Furthermore, we obtain that T̃_1-∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y_1))^2][(∑_k∈^d D_1^k,je_k(y_2))^2]≤σ^42p̅(∑_k∈^d(1-e^-λ_kΔ_n)/2λ_k^1+αe^-2λ_k(u-1)Δ_ne_k(y_1)e_k(y_2))(∑_k∈^d(1-e^-λ_kΔ_n)^2/2λ_k^1+αe^-2λ_k(u-1)Δ_ne_k(y_1)e_k(y_2)).Thus, we can bound the latter term by using display (<ref>) and Lemma <ref>. Similar to Proposition <ref>, we find that T̃_1-∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y_1))^2][(∑_k∈^d D_1^k,je_k(y_2))^2]=(σ^4p̅Δ_n^2α'/(u-1)^2-α'Δ_n ^1-α'y_1-y_2_0^-(d+1)),where we used analogous steps as in display (<ref>). For the last term in the covariance, we redefine:T̃_2 :=∑_i,j=r+u^v[(∑_k∈^d D_1^k,ie_k(y_1))(∑_k∈^d^∞ D_2^k,ie_k(y_1)) ×(∑_k∈^d D_1^k,je_k(y_2))(∑_k∈^d^∞ D_2^k,je_k(y_2))]=∑_i,j=r+u^v(∑_k∈^d[D_1^k,iD_1^k,j]e_k(y_1)e_k(y_2))(∑_k∈^d[D_2^k,iD_2^k,j]e_k(y_1)e_k(y_2)). With similar steps as in Proposition <ref>, we obtain:T̃_2≤σ^4 Δ_n^2α'p̅(Δ_n^d/2∑_k∈^d(1-e^-λ_kΔ_n)^2/2(λ_kΔ_n)^1+αe^-2λ_k(u-1)Δ_ne_k(y_1)e_k(y_2))(Δ_n^d/2∑_k∈^d1-e^-λ_kΔ_n/(λ_kΔ_n)^1+αe_k(y_1)e_k(y_2))=(σ^4p̅Δ_n^2α'Δ_n^1-α'y_1-y_2_0^-(d+1)(Δ_n^d/2∑_k∈^d(1-e^-λ_kΔ_n)^2/(λ_kΔ_n)^1+αe^-2λ_k(u-1)Δ_n))=(T̃_1).Hence, we haveK^2/nmΔ_n^2α'∑_j_1,j_2=1 j_1≠ j_2^m(A_1(y_j_1),A_1(y_j_2)) = (σ^4p̅Δ_n/(u-1)^2-α'·Δ_n^1-α'/m∑_j_1,j_2=1 j_1≠ j_2^m1/y_j_1-y_j_2_0^d+1). According to Assumption <ref>, the distance between any two arbitrary spatial coordinates is bounded from below, leading to the following order:∑_j_1,j_2=1 j_1≠ j_2^m(1/y_j_1-y_j_2_0)^d+1=(m^d+1∑_j_1,j_2=1 j_1≠ j_2^m(1/my_j_1-y_j_2_0)^d+1)=(m^d+3).Thus, we conclude thatK^2/nmΔ_n^2α'∑_j_1,j_2=1 j_1≠ j_2^m(A_1(y_j_1),A_1(y_j_2)) = (σ^4p̅Δ_n/(u-1)^2-α'Δ_n^1-α'm^d+2).The proof follows with:[(Q_r+u^v)^2]≥∑_i=r+u^v[ξ_n,i^2]≥ CK^2p̅/nmΔ_n^2α'∑_j=1^m[(Δ_iX̃)^4(y_j)]≥ C'σ^4Δ_np̅.Now we are able to prove the central limit theorem from Proposition <ref>. To prove this central limit theorem, we will employ Proposition <ref>. Hence, we define:Ξ_n,i:=ξ̃_n,i-[ξ̃_n,i],where ξ̃_n,i is defined in equation (<ref>). The asymptotic variance is given by:(∑_i=1^nΞ_n,i) =(∑_i=1^nξ̃_n,i)=K^2/nm_nΔ_n^2α'(∑_j=1^m_n∑_i=1^n(Δ_iX̃)^2(y_j)e^κy_j_1)=K^2n^2Δ_n^2α'/nm_nΔ_n^2α'(∑_j=1^m_n(V_n,Δ_n(y_j))+∑_j_1,j_2=1 j_1≠ j_2^m_n(V_n,Δ_n(y_j_1),V_n,Δ_n(y_j_2))=K^2n/m_n·m_nΥ_α'σ^4/n(Γ(1-α')/2^d(πη)^d/2α'Γ(d/2))^2(1+(Δ_n^1/2∨Δ_n^1-α')) +(K^2n/m_n·Δ_n^1-α'/n∑_j_1,j_2=1 j_1≠ j_2^m_n( y_1-y_2_0^-(d+1)+δ^-(d+1)) )=Υ_α'σ^4(1+(Δ_n^1/2∨Δ_n^1-α'))+(m_n^d+2Δ_n^1-α')n⟶Υ_α'σ^4,where we used Proposition <ref> and equation (<ref>) and K defined in (<ref>). It remains to prove the Conditions (I)-(III) from Proposition <ref>, since the last condition is proved by Corollary <ref>. 
(I) By Proposition <ref> we have∑_i=a^b(Ξ_n,i) =∑_i=a^b(ξ̃_n,i)=K^2Δ_n^2α'/nm_nΔ_n^2α'∑_i=a^b (∑_j=1^m_n1/Δ_n^α'(Δ_iX̃)(y_j)e^κy_j_1)=K^2/nm_n∑_i=a^b(∑_j=1^m_n(V_1,Δ_n(y_j))+∑_j_1,j_2=1j_1≠ j_2^m_n(V_1,Δ_n(y_j_1),V_1,Δ_n(y_j_2)))=(Δ_n(b-a+1)+Δ_n(b-a+1)Δ_n^1-α'm_n^d+2)=(Δ_n(b-a+1)). We utilize the calculations for the asymptotic variance as shown in this proof and thus conclude:(∑_i=a^bΞ_n,i) =(∑_i=a^bξ̃_n,i)=(K^2(b-a+1)^2/nm_n·m_n/(b-a+1)K^2+(b-a+1)^2/nm_n·Δ_n^1-α'm_n^d+3/(b-a+1))=(Δ_n(b-a+1)),which shows the first condition as well as the second condition.(III) We prove that a Lyapunov condition is satisfied. By using the Cauchy-Schwarz inequality, we have[ξ̃_n,i^4] =K^4/n^2m_n^2Δ_n^4α'∑_j_1,…,j_4=1^m_n e^κ (y_j_1+…+y_j_4)_1[(Δ_iX̃)^2(y_j_1)⋯(Δ_iX̃)^2(y_j_4)]≤K^4/n^2m_n^2Δ_n^4α'∑_j_1,…,j_4=1^m_n e^κ (y_j_1+…+y_j_4)_1[(Δ_iX̃)^8(y_j_1)]^1/4⋯[(Δ_iX̃)^8(y_j_4)]^1/4≤K^4/n^2Δ_n^4α'm_n^2 e^4κ_1max_y∈{y_1,…y_m_n}[(Δ_iX̃)^8(y)].Since (Δ_nX̃)(y) is a centred Gaussian random variable, we can infer that [(Δ_iX̃)^8(y)]=(Δ_n^4α) by using Proposition <ref>. Thus, we have∑_i=1^n[ξ̃_n,i^4]=(Δ_nm^2)=(1),which shows the third condition. §.§ Proofs of Section 4We proceed to tackle the methodology section for the estimator Ψ̂ by deriving the corresponding multidimensional triangular array for Ψ̂. Notably, demonstrating a central limit theorem for the estimator Ψ̂ alone suffices, as the estimator υ̂ is a transformation of Ψ̂. This enables us to deduce a central limit theorem for υ̂ using the multidimensional delta method. To construct the multidimensional triangular array, we leverage the Taylor expansion for log(a+x) and obtain thatlog((y)/nΔ_n^α') =log(σ_0^2K)-κy_1+∑_i=1^n(Δ_iX̃)^2(y)/nΔ_n^α'σ_0^2Ke^κy_1+(Δ_n^1-α')+_(((y)/nΔ_n^α')^2),where the constant K is defined in equation (<ref>). Utilizing Proposition <ref> we conclude that log((y)/nΔ_n^α') =log(σ_0^2K)-κy_1+∑_i=1^n(Δ_iX̃)^2(y)/nΔ_n^α'σ_0^2Ke^κy_1+(Δ_n^1-α')+_(Δ_n). The previous expression simplifies the analysis by allowing us to focus on the term:log(σ_0^2K)-κy_1+∑_i=1^n(Δ_iX̃)^2(y)/nΔ_n^α'σ_0^2Ke^κy_1, where the last component represents the model error.Our goal is to establish a central limit theorem in the form of √(nm)(Ψ̂-Ψ). To achieve this, we can develop the triangular array associated with the estimator Ψ̂ by employing equation (<ref>). This triangular array is defined as Ξ_n,i:=ξ_n,i-[ξ_n,i], where:ξ_n,i :=√(nm)·1-2δ/m(1-2δ/mX^⊤ X)^-1X^⊤[ (Δ_iX̃)^2(y_1)/nΔ_n^α'σ_0^2Ke^κy_1_1;⋮; (Δ_iX̃)^2(y_m)/nΔ_n^α'σ_0^2Ke^κy_m_1 ]=√(n)(1-2δ)/√(m)Kσ_0^2(1-2δ/mX^⊤ X)^-1X^⊤[ (Δ_iX̃)^2(y_1)/nΔ_n^α'e^κy_1_1;⋮; (Δ_iX̃)^2(y_m)/nΔ_n^α'e^κy_m_1 ]. With the triangular array Ξ_n,i in place, we can now proceed to the preparations for a CLT. For proving Proposition <ref>, we utilize a generalization of Proposition <ref>, which directly follows by the Cramér-Wold theorem.Let (Z_k_n,i)_1≤ i≤ k_n a centred triangular array, with a sequence k_n, where Z_n,k_n∈^d are random vectors. 
Then it holds that ∑_i=1^k_n Z_n,id⟶𝒩(0,Σ),for n and Σ denotes a variance-covariance matrix, which satisfies the equation:lim_n(∑_i=1^k_nβ^⊤ Z_n,i)=β^⊤Σβ<∞,for any β∈^d, if the following conditions hold for any β∈^d: (I) (∑_i=a^bβ^⊤ Z_n,i)≤ C∑_i=a^b(β^⊤ Z_n,i), for all 1≤ a≤ b≤ k_n ,(II) lim sup_n∑_i=1^k_n[β^⊤ Z_n,i^2]<∞,(III) ∑_i=1^k_n[β^⊤ Z_k_n,i^21_{|β^⊤ Z_k_n,i|>ε}]n⟶0, for all ε>0,(IV) (e^ t∑_i=a^bβ^⊤ Z_n,i,e^ t∑_i=b+u^c β^⊤ Z_n,i)≤ρ_t(u)∑_i=a^c(β^⊤ Z_n,i), for all 1≤ a≤ b≤ b+u≤ c≤ k_n and t∈,where C>0 is a universal constant and ρ_t(u)≥ 0 is a function with ∑_j=1^∞ρ_t(2^j)<∞.We start by calculating the asymptotic variance. On the Assumptions <ref>, <ref> and <ref>, we have lim_n(∑_i=1^nγ^⊤Ξ_n,i) = (1-2δ)Υ_α'γ^⊤Σ^-1γ,where Ξ_n,i is defined in equation (<ref>), Υ_α' defined in equation (<ref>), Σ^-1 from equation (<ref>) and γ∈^d+1 arbitrary but fixed.Consider an arbitrary but fixed vector γ∈ℝ^d+1. We initiate this proof by performing the following calculations:(∑_i=1^nγ^⊤Ξ_n,i) =γ^⊤(∑_i=1^n ξ_n,i)γ=γ^⊤n(1-2δ)^2/mK^2σ_0^4(1-2δ/mX^⊤ X)^-1X^⊤(Ỹ_n)X(1-2δ/mX^⊤ X)^-1γ,whereỸ_n:=[ ∑_i=1^n(Δ_iX̃)^2(y_1)/nΔ_n^α'e^κy_1_1; ⋮; ∑_i=1^n(Δ_iX̃)^2(y_m)/nΔ_n^α'e^κy_m_1 ]=[ V_n,Δ_n(y_1);⋮; V_n,Δ_n(y_m) ]∈^m. Using Proposition <ref>, we can determine the components of the variance-covariance matrix V_n,m:=(Ỹ_n) of Ỹ_n,i, yielding:(V_n,m)_j_1,j_2=Υ_α'/nK^2σ_0^4(1+Δ_n^1/2∨Δ_n^1-α'),if1≤ j_1=j_2≤ m(Δ_n^2-α'(y_j_1-y_j_2_0^-(d+1)+δ^-(d+1))), if1≤ j_1,j_2≤ mfor j_1≠ j_2 ,for 1≤ j_1,j_2≤ m. Hence, we have(∑_i=1^nγ^⊤Ξ_n,i) =γ^⊤(1-2δ)^2/m(1-2δ/mX^⊤ X)^-1X^⊤(n/K^2σ_0^4V_n,m)X(1-2δ/mX^⊤ X)^-1γ,where we define:n/K^2σ_0^4V_n,m=:V_n,m,1+V_n,m,2,withV_n,m,1:=Υ_α'(1+Δ_n^1/2∨Δ_n^1-α') E_m,where E_m denotes the m× m dimensional identity matrix and V_n,m,2:= 0,if1≤ j_1=j_2≤ m(Δ_n^1-α'(y_j_1-y_j_2)_0^-(d+1)+δ^-(d+1))), if1≤ j_1,j_2≤ mfor j_1≠ j_2 .We conclude that (∑_i=1^nγ^⊤Ξ_n,i) =γ^⊤((1-2δ)^2/m(1-2δ/mX^⊤ X)^-1X^⊤ V_n,m,1X(1-2δ/mX^⊤ X)^-1 +(1-2δ)^2/m(1-2δ/mX^⊤ X)^-1X^⊤ V_n,m,2X(1-2δ/mX^⊤ X)^-1)γ=γ^⊤((1-2δ)Υ_α'(1+Δ_n^1/2∨Δ_n^1-α')(1-2δ/mX^⊤ X)^-1 ×(1-2δ/mX^⊤ X)(1-2δ/mX^⊤ X)^-1 +(1-2δ)(1-2δ/mX^⊤ X)^-1(1-2δ/mX^⊤ V_n,m,2 X)(1-2δ/mX^⊤ X)^-1)γ=γ^⊤((1-2δ)Υ_α'(1+Δ_n^1/2∨Δ_n^1-α')(1-2δ/mX^⊤ X)^-1 +(1-2δ)(1-2δ/mX^⊤ X)^-1(1-2δ/mX^⊤ V_n,m,2 X)(1-2δ/mX^⊤ X)^-1)γ.Let m=m_n be in accordance with Assumption <ref>. As the convergence of (1-2δ)/m_n· X^⊤ X is established for n→∞, the focus shifts on demonstrating the convergence of m_n^-1 (X^⊤ V_n,m_n,2X) towards the zero matrix 0.Consider matrices A∈ℝ^a× b, B∈ℝ^b × b and C∈ℝ^b× a, where (B)_i_1,i_2≥ 0 and (A)_l,i,(C)_i,l∈[0,1] for all 1≤ i,i_1,i_2≤ b,1≤ l≤ a. In such a scenario we obtain:(ABC)_i,l≤ (1_a, bB1_b, a)_i,l,for each 1≤ i,l≤ a, where 1_a,b={1}^a× b denotes the matrix with each entry being one. Thus, we find:(1/mX^⊤ V_n,m,2X)_i,l ≤(1/m1_(d+1),m V_n,m,21_m,(d+1))_i,l,for each 1≤ i,l≤ (d+1). It holds for 1≤ i≤(d+1) and 1≤ l≤ m that(1/m1_(d+1),m V_n,m,2)_i,l =(Δ_m^1-α'/m(∑_j_1=1j_1≠ l^m y_j_1-y_l_0^-(d+1) +(m-1)δ^-(d+1))),and therefore we have for 1≤ i,l≤ (d+1) that (1/m1_(d+1),m V_n,m,21_m,(d+1))_i,l =(Δ_m^1-α'/m(∑_j_2=1^m∑_j_1=1j_1≠ j_2^m y_j_1-y_j_2_0^-(d+1) +m(m-1)δ^-(d+1)))==(Δ_n^1-α'm^d+2). Utilizing Assumption <ref>, we can establish that(1/m_nX^⊤ V_n,m_n,2X)_i,l=(Δ_n^1-α'm_n^d+2)n⟶0, for all 1≤ i,l≤ (d+1). This, in turn, implies:1/m_nX^⊤ V_n,m_n,2Xn⟶0.The conclusion follows accordingly. The preceding lemma demonstrated that the estimator Ψ̂ for the parameter Ψ from display (<ref>) possesses an asymptotic variance of (1-2δ)Υ_α'Σ^-1, as assumed. 
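In short, the limit identified in the preceding lemma is the familiar collapse of the least-squares sandwich form once the inner covariance matrix is, up to negligible terms, a multiple of the identity: writing $A:=\frac{1-2\delta}{m}X^{\top}X$ for the normalized Gram matrix, the leading contribution reads
\[
\frac{(1-2\delta)^{2}}{m}\,A^{-1}X^{\top}\bigl(\Upsilon_{\alpha'}E_m\bigr)X\,A^{-1}
=(1-2\delta)\,\Upsilon_{\alpha'}\,A^{-1}
\;\longrightarrow\;(1-2\delta)\,\Upsilon_{\alpha'}\,\Sigma^{-1},
\]
as $n\to\infty$, so that the substantive part of the proof is precisely the verification that the off-diagonal remainder $V_{n,m,2}$ does not contribute in the limit.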
We will now present a lemma, which helps proving the Conditions (I) and (II) from Corollary <ref>. On the Assumptions <ref>, <ref> and <ref>, we have (∑_i=a^bγ^⊤Ξ_n,i)≤ C ∑_i=a^b(γ^⊤Ξ_n,i),for all 1≤ a≤ b≤ n, Ξ_n,i defined in equation (<ref>), an universal constant C>0 and γ∈^d+1 arbitrary but fixed.For an arbitrary but fixed vector γ∈ℝ^d+1, we can establish, in a manner analogous to Lemma <ref>, that(∑_i=a^bγ^⊤Ξ_n,i) =γ^⊤(∑_i=a^b ξ_n,i)γ=γ^⊤(b-a+1)^2(1-2δ)^2/nmK^2σ_0^4(1-2δ/mX^⊤ X)^-1X^⊤(Ỹ_a,b)X(1-2δ/mX^⊤ X)^-1γ,whereỸ_a,b:=[ ∑_i=a^b(Δ_iX̃)^2(y_1)/(b-a+1)Δ_n^α'e^κy_1_1; ⋮; ∑_i=a^b(Δ_iX̃)^2(y_m)/(b-a+1)Δ_n^α'e^κy_m_1 ]∈^m.For the variance (Ỹ_a,b):=V_a,b,n,m:=V_a,b,n,m,1+V_a,b,n,m,2 we find:(V_a,b,n,m,1)_j_1,j_2 :=Υ_α'/(b-a+1)K^2σ_0^4(1+Δ_n^1/2∨Δ_n^1-α'),if1≤ j_1=j_2≤ m 0, if1≤ j_1,j_2≤ mfor j_1≠ j_2 (V_a,b,n,m,2)_j_1,j_2 := 0,if1≤ j_1=j_2≤ m(1/b-a+1Δ_n^1-α'(y_j_1-y_j_2_0^-(d+1)+δ^-(d+1))), if1≤ j_1,j_2≤ mfor j_1≠ j_2 ,and thus, we have (∑_i=a^bγ^⊤Ξ_n,i)=((b-a+1)^2(1-2δ)/nK^2σ_0^4·K^2σ_0^4Υ_α'/b-a+1γ^⊤(1-2δ/mX^⊤ X)^-1(1-2δ/mX^⊤ X)(1-2δ/mX^⊤ X)^-1γ) +(b-a+1)(1-2δ)^2/nK^2σ_0^4γ^⊤(1-2δ/mX^⊤ X)^-1(b-a+1/mX^⊤ V_a,b,n,m,2 X)(1-2δ/mX^⊤ X)^-1γ. Similar to the proof of Lemma <ref>, it can be deduced that(b-a+1/mX^⊤ V_a,b,n,m,2 X)_i,l=(Δ_n^1-α'm^d+2),where we used that ((1-2δ)/m_n· X^⊤ X)^-1→Σ^-1, as n, and γ^⊤Σ^-1γ=(γ_∞). Therefore, it holds:(∑_i=a^bγ^⊤Ξ_n,i) =(γ_∞Δ_n(b-a+1)+γ_∞Δ_n(b-a+1)Δ_n^1-α'm^d+2)=(γ_∞Δ_n(b-a+1)). Applying a similar approach to (γ^⊤Ξ_n,i) yields:(γ^⊤Ξ_n,i)=γ^⊤(1-2δ)^2/nmK^2σ_0^4(1-2δ/mX^⊤ X)^-1X^⊤(Ỹ_i)X(1-2δ/mX^⊤ X)^-1γ,whereỸ_i:=[ (Δ_iX̃)^2(y_1)/Δ_n^α'e^κy_1_1; ⋮; (Δ_iX̃)^2(y_m)/Δ_n^α'e^κy_m_1 ]∈^m.Defining (Ỹ_i):=V_i,n,m:=V_i,n,m,1+V_i,n,m,2, where(V_i,n,m,1)_j_1,j_2 :=Υ_α'K^2σ_0^4(1+Δ_n^1/2∨Δ_n^1-α'),if1≤ j_1=j_2≤ m 0, if1≤ j_1,j_2≤ mfor j_1≠ j_2 , (V_i,n,m,2)_j_1,j_2 := 0,if1≤ j_1=j_2≤ m(Δ_n^1-α'(y_j_1-y_j_2)_0^-(d+1)+δ^-(d+1)), if1≤ j_1,j_2≤ mfor j_1≠ j_2 ,yields:(γ^⊤Ξ_n,i)=(γ_∞Δ_n+γ_∞Δ_nΔ_n^1-α'm^d+2)=(γ_∞Δ_n). Consequently, we obtain ∑_i=a^b(γ^⊤Ξ_n,i)=(γ_∞Δ_n(b-a+1)), which concludes the proof. The subsequent lemma establishes the proof for the third condition of Corollary <ref>. On the Assumptions <ref>, <ref> and <ref>, it holds that∑_i=1^n[(γ^⊤Ξ_n,i)^4]=(γ_∞^4Δ_nm^2),where Ξ_n,i defined in equation (<ref>) and γ∈^d+1 is arbitrary but fixed. We initiate the proof by examining:[(γ^⊤Ξ_n,i)^4] =([(γ^⊤ξ_n,i)^4]). Thus, we proceed with analysing [(γ^⊤ξ_n,i)^4]. By utilizing the Cauchy-Schwarz inequality, we obtain:[(γ^⊤ξ_n,i)^4] =[∑_l_1,…,l_4=1^d+1γ_l_1(ξ_n,i)_l_1⋯γ_l_4(ξ_n,i)_l_4]≤∑_l_1,…,l_4=1^d+1γ_l_1⋯γ_l_4[(ξ_n,i)_l_1^4]^1/4⋯[(ξ_n,i)_l_4^4]^1/4≤γ_∞^4 (d+1)^4 max_l=1,…,d+1[(ξ_n,i)_l^4]. We exploit the fact that X≤1_m,d+1, where 1_a,b∈^a× b represents the matrix of ones, which leads to:ξ_n,i ≤(1-2δ)e^κ_1/√(nm)Δ_n^α'Kσ_0^2(1-2δ/mX^⊤ X)^-11_(d+1),m[ (Δ_iX̃)^2(y_1);⋮; (Δ_iX̃)^2(y_m) ]=(1-2δ)e^κ_1/√(nm)Δ_n^α'Kσ_0^2(1-2δ/mX^⊤ X)^-1[∑_j=1^m(Δ_iX̃)^2(y_j);⋮; ∑_j=1^m(Δ_iX̃)^2(y_j)) ].Thus, we find that[(γ^⊤ξ_n,i)^4] ≤γ_∞^4(1-2δ)^4e^4κ_1(d+1)^4/n^2m^2Δ_n^4α'K^4σ_0^8max_l=1,…,d+1[(∑_j=1^m(Δ_iX̃)^2(y_j))^4((1-2δ/mX^⊤ X)^-11_d+1,1)_l^4]= γ_∞^4(1-2δ)^4e^4κ_1(d+1)^4/n^2m^2Δ_n^4α'K^4σ_0^8[(∑_j=1^m(Δ_iX̃)^2(y_j))^4] max_l=1,…,d+1((1-2δ/mX^⊤ X)^-11_d+1,1)_l^4. Given that the matrix ((1-2δ)m_n^-1(X^⊤ X))^-1 is converging to Σ^-1, as n, we can constrain:max_l=1,…,d+1((1-2δ/mX^⊤ X)^-11_d+1,1)_l^4≤((d+1)||(1-2δ/m_nX^⊤ X)^-1||_∞)^4<∞,for all n∈ and especially for n. 
As a result, employing the Cauchy-Schwarz inequality yields:[(γ^⊤ξ_n,i)^4] ≤ Cγ_∞^4(1-2δ)^4e^4κ_1(d+1)^4/n^2m^2Δ_n^4α'K^4σ_0^8[(∑_j=1^m(Δ_iX̃)^2(y_j))^4]≤ Cγ_∞^4m^2(1-2δ)^4e^4κ_1(d+1)^4/n^2Δ_n^4α'K^4σ_0^8max_j=1,…,m[(Δ_iX̃)^8(y_j)]. Similarly to the demonstration of Condition (III) in Proposition <ref>, we have [(Δ_iX̃)^8(y)]=(Δ_n^4α). This leads us to:∑_i=1^n[(γ^⊤ξ_n,i)^4] =(γ_∞^4Δ_nm^2),which completes the proof. The following corollary establishes that the temporal dependencies within the triangular array, as outlined in Condition (IV) of Corollary <ref>, can be bounded.On the Assumptions <ref>, <ref> and <ref>, it holds for 1≤ r<r+u≤ v≤ n andQ̃_1^r=∑_i=1^r γ^⊤ξ_n,i, Q̃_r+u^v=∑_i=r+u^v γ^⊤ξ_n,i,where ξ_n,i is defined in equation (<ref>), that there is a constant C, with 0<C<∞, such that for all t∈ it holds:(e^ t(Q̃_1^r-[Q̃_1^r]), e^ t(Q̃_r+u^v-[Q̃_r+u^v]))≤Ct^2/u^3/4√((Q̃_1^r)(Q̃_r+u^v)).We follow a similar approach as in display (<ref>), resulting in:γ^⊤ξ_n,i ≤√(n)(1-2δ)/√(m)Kσ_0^2γ^⊤(1-2δ/mX^⊤ X)^-11_(d+1),m[ (Δ_iX̃)^2(y_1)/nΔ_n^α'e^κy_1_1;⋮; (Δ_iX̃)^2(y_m)/nΔ_n^α'e^κy_m_1 ]= 1-2δ/√(nm)Δ_n^α'Kσ_0^2∑_j=1^m(Δ_iX̃)^2(y_j)e^κy_1_1γ^⊤(1-2δ/mX^⊤ X)^-11_(d+1),1≤ Cγ_∞σ^2η^d/2/√(nm)Δ_n^α'K∑_j=1^m(Δ_iX̃)^2(y_j)e^κy_1_1. With reference to Corollary <ref>, it is evident that the statement holds for:η^d/2/√(nm)Δ_n^α'K∑_j=1^m(Δ_iX̃)^2(y_j)e^κy_j_1,which completes the proof.To prove Proposition <ref>, we leverage Corollary <ref>. The asymptotic variance is provided by Lemma <ref>. Condition (I) is fulfilled as demonstrated in Lemma <ref>. In order to establish Condition (II), it suffices to consider the (Ξ_n,i), as Ξ is centred. Revisiting Lemma <ref> confirms Condition (II). The fulfillment of Conditions (III) and (IV) is validated by Lemma <ref> and Corollary <ref>, respectively, which concludes the proof. We close this section by providing the prove for Corollary <ref>.Utilizing the multivariate delta method on the central limit theorem presented in Proposition <ref> and employing the function h^-1(x)=(e^x_1/K,-x_2,…,-x_d+1), as defined in equation (<ref>), yields:√(nm_n)(υ̂-υ)=√(nm_n)(h^-1(Ψ̂)-h^-1(Ψ))d⟶𝒩(0,Υ_α'(1-2δ)J_h^-1(Ψ)Σ^-1J_h^-1(Ψ)^⊤),where J_h^-1 denotes the Jacobian matrix of h^-1, given by:J_h^-1(x)=[ e^x_1/K 0 0 0; 0-1 0 0; 0 0-1 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0-1 ].Defining the following matrix:J_σ_0^2:=J_h^-1(Ψ)=[ σ_0^2 0 0 0; 0-1 0 0; 0 0-1 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0-1 ],completes the proof.We close this section by providing the covariance structure between quadratic increments and consecutive temporal increments.Therefore, we introduce the following definitions:V_p_1,Δ_n(y):=1/p_1Δ_n^α'∑_i=1^p_1(Δ_n,iX̃)^2(y)e^||κy||_1 and V_p_2,Δ_2n(y):=1/p_2Δ_2n^α'∑_i=1^p_2(Δ_2n,iX̃)^2(y)e^||κy||_1,with 1≤ p_1≤ n and 1≤ p_2≤ 2n. Furthermore, utilizing (<ref>) yields thatV_n,Δ_n(y) =e^||κy||/nΔ_n^α'RV_n(y)=2^1-α'V_2n,Δ_2n(y)+4nΔ_2n^α'/nΔ_n^α'W_2n,Δ_2n,where we define for 1≤ p≤ 2n:W_p,Δ_2n(y):=1/pΔ_2n^α'∑_i=1^p1_2(i)(Δ_2n,iX̃)(y)(Δ_2n,i-1X̃)(y)e^||κy||.Note that we have V_p,Δ_n(y)=2^1-α'V_2p,Δ_2n(y)+2^2-α'W_2p,Δ_2n(y),for a 1≤ p≤ n. 
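The two-scale identity above simply records that every coarse increment splits into its two constituent fine increments; assuming $\Delta_{2n}=\Delta_n/2$, which is how the double-resolution sampling is set up here, a short check reads
\[
(\Delta_{n,i}\tilde X)^{2}(y)
=\bigl(\Delta_{2n,2i}\tilde X(y)+\Delta_{2n,2i-1}\tilde X(y)\bigr)^{2}
=(\Delta_{2n,2i}\tilde X)^{2}(y)+(\Delta_{2n,2i-1}\tilde X)^{2}(y)
+2\,\Delta_{2n,2i}\tilde X(y)\,\Delta_{2n,2i-1}\tilde X(y).
\]
Summing over $i=1,\dots,p$, multiplying by $e^{\|\kappa y\|_1}/(p\Delta_n^{\alpha'})$ and using $\Delta_{2n}^{\alpha'}/\Delta_n^{\alpha'}=2^{-\alpha'}$ produces the factor $2\cdot 2^{-\alpha'}=2^{1-\alpha'}$ in front of $V_{2p,\Delta_{2n}}(y)$ and the factor $4\cdot 2^{-\alpha'}=2^{2-\alpha'}$ in front of $W_{2p,\Delta_{2n}}(y)$.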
Hence, the estimator α̂' from equation (<ref>) can be decomposed as follows:α̂_2n,m' =α'+1/log(2)mσ_0^2 K∑_j=1^m((2^1-α'-1)V_2n,Δ_2n(y_j)+2^2-α'(W_2n,Δ_2n(y_j))+(Δ_n)+_(Δ_n)The corresponding triangular array is given by Ξ_2n,i:=ξ_2n,i-[ξ_2n,i], whereξ_2n,i :=∑_j=1^me^κy_j_1/log(2)√(2nm)Δ_2n^α'σ_0^2 K((2^1-α'-1)(Δ_2n,iX̃)^2(y_j)+2^2-α'1_2(i)(Δ_2n,iX̃)(y_j)(Δ_2n,i-1X̃)(y_j))=ξ_2n,i^1+ξ_2n,i^2,with ξ_2n,i^1 := 2^1-α'-1/log(2)√(2nm)Δ_2n^α'σ_0^2 K∑_j=1^m(Δ_2n,iX̃)^2(y_j)e^κy_j_1,ξ_2n,i^2 :=1_2(i)2^2-α'/log(2)√(2nm)Δ_2n^α'σ_0^2 K∑_j=1^m(Δ_2n,iX̃)(y_j)(Δ_2n,i-1X̃)(y_j)e^κy_j_1.We now provide a necessary proposition for deriving the asymptotic variance of the triangular array from equation (<ref>). Combining these result with Proposition <ref> and using analogous techniques as used for proofing Corollary <ref> and Proposition <ref>, we can conclude the CLT from Proposition <ref>. On the Assumptions <ref> and <ref>, we have for the covariance structure of the two temporal resolutions Δ_n and Δ_2n that(V_p,Δ_2n(y_1),W_p,Δ_2n(y_2)) = Λ_α'/2p(Γ(1-α')σ^4/2^d(πη)^d/2α'Γ(d/2))^2(1+𝒪(Δ_2n^1/2∨Δ_2n^1-α'/δ^d+1∨1/p)) +(Δ_2n^1-α'/p(1_{y_1≠y_2}y_1-y_2_0^-(d+1)+δ^-(d+1))),where y_1,y_2∈[δ,1-δ]^d, Λ_α' is a numerical constant depending on α'∈(0,1), defined in equation (<ref>) and 2≤ p≤ 2n.Analogously to Proposition <ref>, we first obtain that (V_p,Δ_2n(y_1),W_p,Δ_2n(y_2)) =2e^κ(y_1+y_2)_1/pΔ_2n^2α'∑_k_1,k_2∈^de_k_1(y_1)e_k_1(y_2)e_k_2(y_1)e_k_2(y_2)D_k_1,k_2,where we redefineD_k_1,k_2 :=1/p∑_i,j=1^p1_2(j)((B̃_i,k_1+C_i,k_1)(B̃_i,k_2+C_i,k_2), (B̃_j,k_1+C_j,k_1)(B̃_j-1,k_2+C_j-1,k_2))=1/p∑_i,j=1^p 1_2(j)([(B̃_i,k_1+C_i,k_1)(B̃_j,k_1+C_j,k_1)] [(B̃_i,k_2+C_i,k_2)(B̃_j-1,k_2+C_j-1,k_2)] +[(B̃_i,k_1+C_i,k_1) (B̃_j-1,k_2+C_j-1,k_2)][(B̃_i,k_2+C_i,k_2)(B̃_j,k_1+C_j,k_1)]).Assume k_1≠k_2, then we have D_k_1,k_2 =1/p∑_i,j=1^p 1_2(j)([(B̃_i,k_1+C_i,k_1)(B̃_j,k_1+C_j,k_1)] [(B̃_i,k_2+C_i,k_2)(B̃_j-1,k_2+C_j-1,k_2)]=1/p∑_i,j=1^p1_2(j) (Σ̃_i,j^B,k_1+Σ_i,j^BC,k_1+Σ_j,i^BC,k_1+Σ_i,j^C,k_1)(Σ̃_i,j-1^B,k_2+Σ_i,j-1^BC,k_2+Σ_j-1,i^BC,k_2+Σ_i,j-1^C,k_2).For the covariance terms we have by Proposition <ref>, that 1/p∑_i,j=1^p1_2(j)Σ̃_i,j^B,k_1Σ̃_i,j-1^B,k_2 =σ^4(1-e^-λ_k_1Δ_2n)^2(1-e^-λ_k_2Δ_2n)^2 /4λ_k_1^1+αλ_k_2^1+α1/p∑_i,j=1^p1_2(j) e^-λ_k_1Δ_2ni-j e^-λ_k_2Δ_2ni-j+1.For the geometric sum in the latter display, we obtain:∑_i,j=1^pq_1^i-jq_2^i-j+11_2(j)=q_2∑_i=2^p(q_1q_2)^i∑_j=2^i(q_1q_2)^-j1_2(j)+q_2^-1∑_j=2^p(q_1q_2)^j1_2(j)∑_i=1^j-1(q_1q_2)^-i,where q_1,q_2≠ 0. Furthermore, for a q≠ 1 it holds by analogous computations as for the partial sum of the geometric series, that∑_i=0^nq^i1_2(i)=1-q^n+2/1-q^2 , if n is even 1-q^n+1/1-q^2 , if n is odd ,where we consider zero as even. 
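(As an illustrative aside, not part of the original argument: the parity-split formula in the last display follows in one line, since only the even powers of q contribute, so that
∑_{i=0}^{n} q^{i} 1_2(i) = ∑_{k=0}^{⌊ n/2 ⌋} q^{2k} = (1 - q^{2(⌊ n/2 ⌋ + 1)}) / (1 - q^{2}),
and 2(⌊ n/2 ⌋ + 1) equals n+2 for even n and n+1 for odd n, which reproduces the two cases above, with zero counted as even.)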
Hence, we get:∑_i,j=1^pq_1^i-jq_2^i-j+11_2(j) =q_2/(q_1q_2)^2(1-(q_1q_2)^-2)(∑_i=2^p (q_1q_2)^i(1-(q_1q_2)^-i) 1_2(i) +∑_i=2^p (q_1q_2)^i(1-(q_1q_2)^-(i-1)) 1_(2)^∁(i)) +q_2^-1/q_1q_2(1-(q_1q_2)^-1)∑_j=2^p (q_1q_2)^j(1-(q_1q_2)^-(j-1))1_2(j)Now using, that q_1,q_2<1 and that it holds for the floor function by the Fourier representation that1/p⌊ cp⌋ = c,ifcp∈c-1/2p+1/pπ∑_k=1^∞sin(2π k cp)/k, ifcp∉ , ,for c≠ 0 and p∈, we observe the following: ∑_i,j=1^pq_1^i-jq_2^i-j+11_2(j) =(q_2/1-(q_1q_2)^2( p/2+p/2q_1q_2)+q_2^-1/1-q_1q_2·p/2q_1q_2)(1+(p^-1/1-q_1q_2))=q_1+q_2/2(1-q_1q_2)(1+(p^-1/1-q_1q_2)).Therefore, we have 1/p∑_i,j=1^p1_2(j)Σ̃_i,j^B,k_1Σ̃_i,j-1^B,k_2 =σ^4(1-e^-λ_k_1Δ_2n)^2(1-e^-λ_k_2Δ_2n)^2 /4λ_k_1^1+αλ_k_2^1+α ×e^-λ_k_1Δ_2n+e^-λ_k_2Δ_2n/2(1-e^-(λ_k_1+λ_k_1)Δ_2n)(1+(1∧p^-1/1-e^-(λ_k_1+λ_k_2))).Furthermore, we have 1/p∑_i,j=1^p1_2(j)Σ_i,j^C,k_1Σ_i,j-1^C,k_2 = σ^4/p∑_i,j=1^p(1-e^-2λ_k_1Δ_2n)(1-e^-2λ_k_2Δ_2n)/4λ_k_1^1+αλ_k_2^1+α1_{j=i}1_{j-1=i}1_2(j)=0,as well as1/p∑_i,j=1^p1_2(j) Σ_i,j^BC,k_1Σ_i,j-1^BC,k_2 =σ^4(1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)/4λ_k_1^1+αλ_k_2^1+α(e^λ_k_1Δ_2n-e^-λ_k_1Δ_2n)(e^λ_k_2Δ_2n-e^-λ_k_2Δ_2n) ×1/p∑_i,j=1^p1_{i>j}1_2(j) e^-λ_k_1Δ_2n(i-j)e^-λ_k_2Δ_2n(i-j+1).For the sum structure in the latter display we obtain:1/p∑_i,j=1^p1_{i>j}1_2(j) e^-λ_k_1Δ_2n(i-j)e^-λ_k_2Δ_2n(i-j+1) =e^-λ_k_2Δ_2n/p∑_i,j=1^p1_{i>j}1_2(j) e^-(λ_k_1+λ_k_2)Δ_2n(i-j).Assume q<1, then we have1/p∑_i,j=1^p 1_{i>j}1_2(j)q^i-j =q/2(1-q)(1+(p^-1/1-q)),where we used analogous steps leading to display (<ref>). Hence, we get:1/p∑_i,j=1^p1_2(j) Σ_i,j^BC,k_1Σ_i,j-1^BC,k_2 =σ^4(1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)/4λ_k_1^1+αλ_k_2^1+α(1-e^-2λ_k_1Δ_2n)(1-e^-2λ_k_2Δ_2n) ×e^-λ_k_2Δ_2n/2(1-e^-(λ_k_1+λ_k_2)Δ_2n)(1+(1 ∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_2n)).Moreover, by analogous steps, we have 1/p∑_i,j=1^p1_2(j) Σ_j,i^BC,k_1Σ_j-1,i^BC,k_2 =σ^4(1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)/4λ_k_1^1+αλ_k_2^1+α(1-e^-2λ_k_1Δ_2n)(1-e^-2λ_k_2Δ_2n) ×e^-λ_k_1Δ_2n/2(1-e^-(λ_k_1+λ_k_2)Δ_2n)(1+(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_2n)).where we used that1/p∑_i,j=1^p 1_{i<j-1}1_2(j)q^j-i =q^2/2(1-q)(1+(p^-1/1-q)).For the cross-terms we obtain that1/p∑_i,j=1^p1_2(j) Σ̃_i,j^B,k_1(Σ_i,j-1^BC,k_2+Σ_j-1,i^BC,k_2) =σ^4(1-e^-λ_k_1Δ_2n)^2(e^-λ_k_2Δ_2n-1)/4λ_k_1^1+αλ_k_2^1+α(e^λ_k_2Δ_2n-e^-λ_k_2Δ_2n) ×( e^-λ_k_2Δ_2n/p∑_i,j=1^p1_2(j) 1_{i>j-1}e^-(λ_k_1+λ_k_2)Δ_2n(i-j)+e^λ_k_2Δ_2n/p∑_i,j=1^p1_2(j) 1_{i<j-1}e^-(λ_k_1+λ_k_2)Δ_2n(j-i))) .Analogously to equation (<ref>), we have 1/p∑_i,j=1^p 1_{i>j-1}1_2(j)q^i-j =1/2(1-q)(1+(p^-1/1-q)),which yields in combination with equation (<ref>) that1/p∑_i,j=1^p1_2(j) Σ̃_i,j^B,k_1(Σ_i,j-1^BC,k_2+Σ_j-1,i^BC,k_2) =σ^4(1-e^-λ_k_1Δ_2n)^2(e^-λ_k_2Δ_2n-1)/4λ_k_1^1+αλ_k_2^1+α(1-e^-2λ_k_2Δ_2n) ×1+e^-2λ_k_1Δ_2n/2(1-e^-(λ_k_1+λ_k_2)Δ_2n)(1+(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_2n))Moreover, it holds that1/p∑_i,j=1^p1_2(j) Σ̃_i,j-1^B,k_2(Σ_i,j^BC,k_1+Σ_j,i^BC,k_1)=σ^4(1-e^-λ_k_2Δ_2n)^2(e^-λ_k_1Δ_2n-1)/4λ_k_1^1+αλ_k_2^1+α(e^λ_k_1Δ_2n-e^-λ_k_1Δ_2n) ×(e^-λ_k_2Δ_2n/p∑_i,j=1^p1_2(j)1_{i>j}e^-(λ_k_1+λ_k_2)Δ_2n(i-j)+e^λ_k_2Δ_2n/p∑_i,j=1^p1_2(j)1_{j>i}e^-(λ_k_1+λ_k_2)Δ_2n(j-i))=σ^4(1-e^-λ_k_2Δ_2n)^2(e^-λ_k_1Δ_2n-1)/4λ_k_1^1+αλ_k_2^1+α(1-e^-2λ_k_1Δ_2n) ×(1+e^-2λ_k_2Δ_2n) 1/2(1-e^-(λ_k_1+λ_k_2)Δ_2n)(1+(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_2n)),where we used equation (<ref>) and 1/p∑_i,j=1^p 1_{i<j}1_2(j)q^j-i =q/2(1-q)(1+(p^-1/1-q)).We also observe that1/p∑_i,j=1^p1_2(j) Σ̃_i,j^B,k_1Σ_i,j-1^C,k_2 =σ^4(1-e^-λ_k_1Δ_2n)^2(1-e^-2λ_k_2Δ_2n)/4λ_k_1^1+αλ_k_2^1+α1/p∑_i,j=1^p1_2(j)e^-λ_k_1Δ_2ni-j1_{j-1=i}=σ^4e^-λ_k_1Δ_2n(1-e^-λ_k_1Δ_2n)^2(1-e^-2λ_k_2Δ_2n)/8λ_k_1^1+αλ_k_2^1+α(1+(p^-1)),as well 
as1/p∑_i,j=1^p1_2(j) Σ̃_i,j-1^B,k_2Σ_i,j^C,k_1 =σ^4 (1-e^-2λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)^2/4λ_k_1^1+αλ_k_2^1+α1/p∑_i,j=1^p1_2(j)1_{j=i}e^-λ_k_2Δ_2ni-j+1=σ^4 e^-λ_k_2Δ_2n(1-e^-2λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)^2/8λ_k_1^1+αλ_k_2^1+α(1+(p^-1)).In comparison to Proposition <ref>, the following structures do not vanish and we get1/p∑_i,j=1^p1_2(j)Σ_j,i^BC,k_1Σ_i,j-1^C,k_2 =σ^4 (1-e^-2λ_k_2Δ_2n)(e^-λ_k_1Δ_2n-1)/8λ_k_1^1+αλ_k_2^1+α(1-e^-2λ_k_1Δ_2n)(1+(p^-1)),as well as1/p∑_i,j=1^p1_2(j)Σ_i,j^C,k_1Σ_i,j-1^BC,k_2 =σ^4 (1-e^-2λ_k_1Δ_2n)(e^-λ_k_2Δ_2n-1)/8λ_k_1^1+αλ_k_2^1+α(1-e^-2λ_k_2Δ_2n)(1+(p^-1)),whereas the following terms still vanish:1/p∑_i,j=1^p1_2(j)Σ_i,j^BC,k_1Σ_j-1,i^BC,k_2 =0,1/p∑_i,j=1^p1_2(j)Σ_i,j^BC,k_1Σ_i,j-1^C,k_2=0, 1/p∑_i,j=1^p1_2(j)Σ_j,i^BC,k_1Σ_i,j-1^BC,k_2 =0,1/p∑_i,j=1^p1_2(j)Σ_i,j^C,k_1Σ_j-1,i^BC,k_2=0.Combining the calculations form the displays (<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>) and (<ref>), yields for k_1≠k_2 thatD_k_1,k_2 =σ^4( (1-e^-λ_k_1Δ_2n)^2(1-e^-λ_k_2Δ_2n)^2 /8λ_k_1^1+αλ_k_2^1+α(e^-λ_k_1Δ_2n+e^-λ_k_2Δ_2n/1-e^-(λ_k_1+λ_k_1)Δ_2n-(1-e^-2λ_k_2Δ_2n)(1+e^-2λ_k_1Δ_2n)/(1-e^-(λ_k_1+λ_k_2)Δ_2n)(1-e^-λ_k_2Δ_2n) - (1-e^-2λ_k_1Δ_2n)(1+e^-2λ_k_2Δ_2n)/(1-e^-λ_k_1Δ_2n)(1-e^-(λ_k_1+λ_k_2)Δ_2n)+ (e^-λ_k_1Δ_2n+e^-λ_k_2Δ_2n)(1-e^-2λ_k_1Δ_2n)(1-e^-2λ_k_2Δ_2n)/(1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)(1-e^-(λ_k_1+λ_k_2)Δ_2n)) +(1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)/8λ_k_1^1+αλ_k_2^1+α(e^-λ_k_1Δ_2n(1-e^-λ_k_1Δ_2n)(1-e^-2λ_k_2Δ_2n)/(1-e^-λ_k_2Δ_2n) + e^-λ_k_2Δ_2n(1-e^-λ_k_2Δ_2n)(1-e^-2λ_k_1Δ_2n)/(1-e^-λ_k_1Δ_2n)-(1-e^-2λ_k_1Δ_2n)(1-e^-2λ_k_2Δ_2n)/(1-e^-λ_k_2Δ_2n) -(1-e^-2λ_k_1Δ_2n)(1-e^-2λ_k_2Δ_2n)/(1-e^-λ_k_1Δ_2n)))(1+(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_2n))=σ^4( (1-e^-λ_k_1Δ_2n)^2(1-e^-λ_k_2Δ_2n)^2 /8λ_k_1^1+αλ_k_2^1+α(-2+e^-λ_k_1Δ_2n+e^-λ_k_2Δ_2n/1-e^-(λ_k_1+λ_k_2)Δ_2n) -2(1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)/8λ_k_1^1+αλ_k_2^1+α(1-e^-(λ_k_1+λ_k_2)Δ_2n))(1+(1∧p^-1/1-e^-(λ_k_1+λ_k_2)Δ_2n)).Recalling the calculations of the covariance yields:(V_p,Δ_2n(y_1),W_p,Δ_2n(y_2)) =2e^κ(y_1+y_2)_1σ^4/pΔ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2e_k_1(y_1)e_k_1(y_2)e_k_2(y_1)e_k_2(y_2)D̅_k_1,k_2 +(1/p^2Δ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2D̅_k_1,k_2/1-e^-(λ_k_1+λ_k_2)Δ_2n) +2e^κ(y_1+y_2)_1/pΔ_2n^2α'∑_k∈^de_k^2(y_1)e^2_k(y_2)D_k,k,whereD̅_k_1,k_2 = (1-e^-λ_k_1Δ_2n)^2(1-e^-λ_k_2Δ_2n)^2 /8λ_k_1^1+αλ_k_2^1+α(-2+e^-λ_k_1Δ_2n+e^-λ_k_2Δ_2n/1-e^-(λ_k_1+λ_k_2)Δ_2n) -2(1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)/8λ_k_1^1+αλ_k_2^1+α(1-e^-(λ_k_1+λ_k_2)Δ_2n).First, we obtain for sufficiently large p that(1/p^2Δ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2D̅_k_1,k_2/1-e^-(λ_k_1+λ_k_2)Δ_2n) =(1/p^2(∑_k∈^d1-e^-λ_kΔ_2n/2(λ_kΔ_2n)^1+α)^2)=(p^-2)where we used <ref> and analogous steps as in Proposition <ref>. Hence, we obtain:(1/p^2Δ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2D̅_k_1,k_2/1-e^-(λ_k_1+λ_k_2)Δ_2n) =(1/p(1∧1/p)).Furthermore, we have for k_1=k_2=k thatD_k,k =2/p∑_i,j=1^p 1_2(j)[(B̃_i,k+C_i,k)(B̃_j,k+C_j,k)] [(B̃_i,k+C_i,k)(B̃_j-1,k+C_j-1,k)]=-(1-e^-λ_kΔ_2n)^2/4λ_k^2(1+α)((1-e^-λ_kΔ_2n)^2-e^-λ_kΔ_2n(1-e^-λ_kΔ_2n)^2/1-e^-2λ_kΔ_2n+1-e^-2λ_kΔ_2n) ×(1+(1∧p^-1/1-e^-2λ_kΔ_2n)).Defining the following term: D̅_k,k:= (1-e^-λ_kΔ_2n)^4 /8λ_k^2(1+α)(-2+2e^-λ_kΔ_2n/1-e^-2λ_kΔ_2n)-2(1-e^-λ_kΔ_2n)^2/8λ_k^2(1+α)(1-e^-2λ_kΔ_2n),yields:1/Δ_2n^2α'p∑_k∈^dD̅_k,k=(Δ_2n^d/2/pΔ_2n^d/2∑_k∈^d((1-e^-λ_kΔ_2n)/2(λ_kΔ_2n)^1+α)^2)=(p^-1Δ_2n^2(1-α')),where we used analogous steps as in display (<ref>). 
We decompose the leading term D̅_k_1,k_2 as follows:D̅_k_1,k_2 =D̅_k_1,k_2^1+D̅_k_1,k_2^2+D̅_k_1,k_2^3+D̅_k_1,k_2^4,whereD̅_k_1,k_2^1 = - (1-e^-λ_k_1Δ_2n)^2(1-e^-λ_k_2Δ_2n)^2 /4λ_k_1^1+αλ_k_2^1+α=-Δ_2n^2(1+α)/4f_2,α(λ_k_1Δ_2n)f_2,α(λ_k_2Δ_2n), D̅_k_1,k_2^2 =(1-e^-λ_k_1Δ_2n)^2(1-e^-λ_k_2Δ_2n)^2 /8λ_k_1^1+αλ_k_2^1+α·e^-λ_k_1Δ_2n+e^-λ_k_2Δ_2n/1-e^-(λ_k_1+λ_k_2)Δ_2n=Δ_2n^2(1+α)/2∑_r=0^∞( g_1,α,r+1(λ_k_1Δ_2n)g_1,α,r(λ_k_2Δ_2n)+ g_1,α,r+1(λ_k_2Δ_2n)g_1,α,r(λ_k_1Δ_2n)), D̅_k_1,k_2^3 =-(1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)/4λ_k_1^1+αλ_k_2^1+α=-Δ_2n^2(1+α)/4f_1,α(λ_k_1Δ_2n)f_1,α(λ_k_2Δ_2n), D̅_k_1,k_2^4 = (1-e^-λ_k_1Δ_2n)(1-e^-λ_k_2Δ_2n)/4λ_k_1^1+αλ_k_2^1+αe^-(λ_k_1+λ_k_2)Δ_2n=Δ_2n^2(1+α)g_2,α,1(λ_k_1Δ_2n)g_2,α,1(λ_k_2Δ_2n).Here, we use the following functions defined by: f_1,α(x):=f_α(x)=1-e^-x/x^1+α , f_2,α(x):=(1-e^-x)^2/x^1+αg_1,α,τ(x):=g_α,τ(x)=(1-e^-x)^2/2x^1+αe^-τ x , g_2,α,τ(x):=1-e^-x/2x^1+αe^-τ x.By Lemma <ref>, we know that f_1,α∈𝒬_β_1 and g_1,α,τ∈𝒬_β_2, where β_1=(2α,2(1+α),2(2+2α)) and β_2=(2α,2(1+α),2(1+2α)). By analogous computations as used in Lemma <ref>, we obtain that f_2,α∈𝒬_β_1 and g_2,α,τ∈𝒬_β_1. Assume y_1≠y_2. We can repeat the calculations leading to equation (<ref>) and have (V_p,Δ_2n(y_1),W_p,Δ_2n(y_2)) =(Δ_2n^1-α'/p(y_1-y_2_0^-(d+1)+δ^-(d+1)))+𝒪(1/p(Δ_2n^2(1-α')+1/p∧1))=(Δ_2n^1-α'/p(y_1-y_2_0^-(d+1)+δ^-(d+1))).Therefore, it remains to analyse the case where y_1= y_2. Again, utilizing that the function f_1,α, f_2,α and g_2,α,τ are in the same class 𝒬_β_1 as the function f_α as used in Proposition <ref>, as well as g_1,α,τ=g_α,τ, we can conclude analogous to equation (<ref>) that(V_p,Δ_2n(y_1),W_p,Δ_2n(y_2)) =2σ^4/pΔ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2D̅_k_1,k_2+𝒪(1/p(Δ_2n^1/2∨Δ_2n^1-α'/δ^d+1+1/p∧1)).First, we obtain that1/Δ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2D̅_k_1,k_2^1 =-1/4(Δ_2n^d/2∑_k∈^df_2,α(λ_kΔ_2n))^2:=I_1, 1/Δ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2D̅_k_1,k_2^2 =2/2∑_r=0^∞(Δ_2n^d/2∑_k∈^dg_1,α,r+1(λ_kΔ_2n) )(Δ_2n^d/2∑_k∈^d g_1,α,r(λ_kΔ_2n) ):=I_2, 1/Δ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2D̅_k_1,k_2^3 =-1/4(Δ_2n^d/2∑_k∈^df_1,α(λ_kΔ_2n))^2:=I_3, 1/Δ_2n^2α'∑_k_1,k_2∈^dk_1≠k_2D̅_k_1,k_2^3 =(Δ_2n^d/2∑_k∈^dg_2,α,1(λ_kΔ_2n))^2:=I_4.Using Corollary <ref> and Lemma <ref> yields:I_1 =-1/4(1/2^d(πη)^d/2Γ(d/2))^2((2-2^α)π/Γ (1+α )sin(πα))^2=-1/4(Γ(1-α')/2^d(πη)^d/2α'Γ(d/2))^2(2^α'-2)^2,I_2 =1/4(Γ(1-α')/2^d(πη)^d/2α'Γ(d/2))^2∑_r=0^∞(-(r+1) ^α'+2 (r +2)^α'-(r+3)^α')(-r ^α'+2 (r +1)^α'-(r +2)^α'),I_3 =-1/4(Γ(1-α')/2^d(πη)^d/2α'Γ(d/2))^2,I_4 =1/4(Γ(1-α')/2^d(πη)^d/2α'Γ(d/2))^2(2^α'-1)^2.Hence, we obtain that(V_p,Δ_2n(y_1),W_p,Δ_2n(y_2))=1/2p(Γ(1-α')σ^4/2^d(πη)^d/2α'Γ(d/2))^2((2^α'-1)^2-(2^α'-2)^2-1 +∑_r=0^∞(-(r+1) ^α'+2 (r +2)^α'-(r+3)^α')(-r ^α'+2 (r +1)^α'-(r +2)^α')) +𝒪(1/p(Δ_2n^1/2∨Δ_2n^1-α'/δ^d+1+1/p∧1)).Defining the following constant:Λ_α':= 2(2^α'-2)+∑_r=0^∞(-(r+1) ^α'+2 (r +2)^α'-(r+3)^α')(-r ^α'+2 (r +1)^α'-(r +2)^α'),completes the proof. § ACKNOWLEDGEMENTI wish to express my appreciation to my Ph.D. advisor, Markus Bibinger, for the careful reading of this manuscript and his useful suggestions. tocsectionReferences plain | http://arxiv.org/abs/2310.17828v2 | {
"authors": [
"Patrick Bossert"
],
"categories": [
"math.ST",
"math.PR",
"stat.TH",
"62F12, 62M10, 60H15,"
],
"primary_category": "math.ST",
"published": "20231027003225",
"title": "Parameter estimation for second-order SPDEs in multiple space dimensions"
} |
* Authors contributed equally. Microsoft Redmond WAahmedmagooda, alec.helyar, kyle.jackson, dsullivan, chad.atalla, emilysheng, dan.vann, [email protected] hpalangi, romanlutz, hongliang.kong, xi.yun, eskam, fzarfati, wallach, sarah.bird, [email protected] We present a framework for the automated measurement of responsible AI (RAI) metrics for large language models (LLMs) and associated products and services. Our framework for automatically measuring harms from LLMs builds on existing technical and sociotechnical expertise and leverages the capabilities of state-of-the-art LLMs, such as GPT-4. We use this framework to run through several case studies investigating how different LLMs may violate a range of RAI-related principles. The framework may be employed alongside domain-specific sociotechnical expertise to create measurements for new harm areas in the future. By implementing this framework, we aim to enable more advanced harm measurement efforts and further the responsible use of LLMs.[This is a living document]A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications Ahmed Magooda*, Alec Helyar*, Kyle Jackson*, David Sullivan, Chad Atalla, Emily Sheng, Dan Vann, Richard Edgar, Hamid Palangi, Roman Lutz, Hongliang Kong, Vincent Yun, Eslam Kamal, Federico Zarfati, Hanna Wallach, Sarah Bird, Mei Chen January 14, 2024 ============================================================================================================================================================================================================================================== § INTRODUCTIONRapid advancements in artificial intelligence (AI) and natural language processing (NLP) have led to the development of increasingly sophisticated large language models (LLMs) such as (GPT-4<cit.>, LLama 2<cit.>, Falcon<cit.>, etc.), with advanced text generation capabilities across a wide range of task types. While these models unlock numerous opportunities, there are also serious concerns about models causing harm<cit.>. Manual detection of harms may better account for nuances. However, as the availability and capabilities of LLMs grow, it is increasingly necessary to develop automated frameworks for measuring harms with a speed and scale that can match the pace of the technology's proliferation.Motivated by the need for an automated harm measurement framework which is flexible enough to align with evolving, valid, and reliable definitions of harms, as well as the need for a measurement implementation that could be applied across different types of products and services related to LLMs (e.g., chatbots, summarization systems, etc.), we propose and implement a framework that harnesses the capabilities of LLMs to test other LLMs and assess their potential for causing harm.While our work yields tools for automated measurement, creating the harm-specific measurement resources (e.g., harm measurement definitions) still requires domain-specific expertise. We would like to preface the rest of this paper with an acknowledgment that this is not the final, only, nor necessarily best implementation to measuring harms; however, it is an implementation that allows for flexibility in updating definitions and applying to various products and services. There are still open questions about the risks of employing LLMs to perform parts of the harm measurement process and how much of the measurement pipeline can and should be automated—we discuss this more in Sec. 
<ref> but mostly leave these important questions to future work. The core of our proposed framework comprises two key components: (1) data generation from templates and (2) evaluation of generated outputs. First, we introduce a data generation component designed to assess LLM propensity for generating specific types of potentially harmful content. This component simulates various real-world LLM products and services, such as question answering, summarization, and conversation. Next, we introduce an evaluation component that uses GPT-4 to assess LLM-generated content according to harm definitions. This component evaluates AI-generated content and produces both quantitative and qualitative outputs, yielding numerical annotations of harm severity and written snippets about annotation reasoning. Our framework enables automatic comparison of different LLM-based products and services against measurement sets built by domain experts for various harms, allowing practitioners to compare strengths and weaknesses. § ARCHITECTURE Our measurement framework comprises two components that are tailored for assessing LLMs: 1) data generation from templates and parameters, and 2) evaluation of generated outputs via annotation guidelines. The data generation component uses templates and parameters to simulate interactions with the LLM under test to generate data which approximates a user-AI interaction in some product or service. The templates and parameters are separately created by domain experts for each harm to ensure the reliability and validity of the resulting measurements. Next, the evaluation component produces annotations of the LLM's output on the generated data by applying annotation guidelines. The annotation guidelines are provided by domain experts based on the harm definitions they create. The evaluation process is streamlined by treating the LLM under test as a black box which need only accept inputs and yield outputs. Additionally, the implementation of this framework supports two different environments for computation. The first environment involves running the evaluation on a local machine, where prompt construction, model API coordination, model API calling, etc., occur locally. The second environment utilizes the Azure Machine Learning (AML) platform to automatically construct evaluation pipelines and perform evaluations using AML compute resources. Figure <ref> shows a sample AML evaluation pipeline. §.§ Data Generation The first part of our framework focuses on simulating a hypothetical user's interaction with a real product or service such as question answering, chat, and document summarization. The goal of this part of the data generation pipeline, referred to as task simulation, is to generate interactions (between the LLM and a hypothetical user) which cover topics or patterns associated with a target harm. To achieve this, we use another LLM to play the role of a hypothetical user, initiating the task and participating in the conversation based on the templates provided by domain experts. We denote the LLM under test as LLM_test and the user-simulating LLM as LLM_user. We provide a set of templates, referred to as persona templates, which provide guidelines for the LLM_user regarding how to behave and which topics or goals to introduce in the interaction with LLM_test. For simplicity and generalizability, we employ Jinja-style parameterized templates.
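To make the notion of a parameterized persona template concrete, the snippet below is a minimal, hypothetical sketch using the jinja2 Python package; the template wording and parameter names are illustrative only and are not taken from the measurement sets authored by domain experts.

from jinja2 import Template

# Hypothetical persona template; real templates are authored by domain experts per harm area.
persona_template = Template(
    "You are role-playing a user of a chat application. "
    "During the conversation, steer the discussion toward {{ topic }} "
    "and ask follow-up questions from the perspective of {{ user_profile }}."
)

# Combining the template with one parameter set yields a completed persona
# that guides LLM_user during task simulation.
completed_persona = persona_template.render(
    topic="the lyrics of a popular song",
    user_profile="a curious fan",
)
print(completed_persona)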
Each template describes the basic structure and theme of the conversations, leaving placeholders for parameters specifying specific topics, groups of people, etc, to be incorporated. Then, each template is combined with each set of corresponding parameters to create one or more completed personas for the LLM_user to leverage in task simulation with the blackbox LLM_test[The templates and parameters are two pieces of the measurement resources that are created by domain experts for each harm. The process of how domain experts create these measurement resources will be examined separately in future work.]. Given these completed personas created by combining templates and parameters, we run task simulation next. Each completed persona serves as instructions for LLM_user, shaping how it interacts with LLM_test. This part injects creativity and is critical for automating and scaling up the process, but it also yields risks. For example, what if the LLM_user does not simulate realistic user behavior in the interaction with LLM_test? We explore these concerns further in section <ref>. Once the task simulation has been run for each completed persona, we are left with a set of generated data which includes simulated user inputs and real LLM_test system outputs (we refer to each simulated interaction as a sample). §.§ EvaluationThe second part of our framework is responsible for evaluation through automated annotation of the samples generated in task simulation. The annotation process uses an LLM by providing it with annotation guidelines which are manually crafted by domain experts and include harm definitions, examples, and a defect definition. The defect definition specifies criteria for determining whether a data sample is considered desirable or allowable in the context of the LLM under test and any product or service it is embedded in. Crafting this definition is a sociotechnical challenge which is deeply entangled with the harm definitions created by domain experts and policy decisions made by the organizations building the AI system under test.The LLM can then annotate the given examples using the provided guidelines. Automated annotation consists of multiple steps: the first step uses the annotation guidelines to annotate each sample. These annotations are initially created in text, where the LLM follows an annotation schema specified by few-shot examples in the annotation guidelines. The next step parses the annotation to extract expected metrics (e.g., defect score,reasoning, etc) according to the provided guidelines. The final step involves aggregating the extracted values and calculating a metric (e.g., defect rate.).For each harm area, human-LLM annotation agreement experiments are conducted during the development of measurement resources. After that, the measurement resources and technical framework can be applied jointly to produce measurements without human annotation.Ultimately, a defect rate is calculated, which represents the proportion of samples which were annotated as matching the defect definition.For example, one way defect definitions may work is through severity thresholds. Consider the case where we may wish to evaluate whether the LLM under test produces extreme violent content. 
The domain experts may build a severity scale (e.g., on an 1-10 scale where lower is less severe) for violent content, and a defect definition could be a threshold within this severity range or a particular severity scale (e.g., any sample with severity ≥ 7 is a defect).Then, the defect rate can be determined by calculating the ratio of samples that meet the defect definition relative to the total number of samples. In this case, the defect rate can be computed as follows: DefectRate = |x ∈samples : x > threshold|/|samples|§ INTERPRETING MEASUREMENTS By combining this framework with measurement resources (templates, parameters, harm definitions, and annotation guidelines), a repeatable measurement pipeline can be created. Running this measurement pipeline on an AI system yields a defect rate. It is important to interpret this defect carefully and understand the utility of measurements derived this way. All defect rates obtained through application of this technical framework are relative measurements, which do not represent the absolute state of the world. In other words, a 0% defect rate does not mean that there is zero chance of the measured harm occurring in the real world. Instead, a 0% defect rate may be interpreted to mean that the AI system under test did not appear to fail any tests in the current measurement set.Additionally, all resulting measurements are only as reliable and valid as the measurement resources designed for the harm being tested. The process of creating these measurement resources is a complex sociotechnical problem which is fraught with pitfalls and opportunities for reliability and validity to be impacted. If the measurement resources are created with a poorly constructed harm definition, the resulting measurements can range from nonsensical to directly harmful (if development decisions are misled by a poorly designed measurement).With this perspective, these measurements provide significant and targeted utility. These measurements can serve as diagnostic tools. They enable comparison of the efficacy of different mitigations as well as tracking of progress in mitigating known defects over time. Lastly, when using identical measurement sets to test two AI systems, the resulting measurements can be used to compare the relative performance of each system on the challenges represented in the measurement set.§ CASE STUDYBelow we provide a deep dive on Groundedness. Then we provide an example of how this framework can be leveraged to create measurements and compare multiple models. §.§ Deep Dive: GroundednessIn this case study, we consider ungrounded generations from LLM_test to be harmful and refer to this measurement category as groundedness. We first had to build measurement resources for this specific harm.As mentioned earlier, measurement resources must include a set of templates and parameters. For the groundedness case study, the templates and parameters were to yield a set of of questions (prompts to LLM_test) and corresponding contextual files (used by LLM_test to answer the prompt questions). In the first stage of the evaluation pipeline (i.e., data generation with task simulation), we initiate conversations between LLM_test and the simulated LLM_user. LLM_user follows the templates and parameters and asks each question from the provided set. At the same time, we provide LLM_test with access to the context files and provide guidance to answer the questions based solely on the context files. 
Figure <ref> illustrates the prompt guidance for LLM_test to answer questions while relying exclusively on the context files as a source of information.Following the generation of conversations, we proceed to the evaluation stage to assess generated samples. As part of our measurement resources, we must provide annotation guidelines to an LLM (GPT-4) to evaluate whether a response is grounded or not. In this case, we design a basic annotation guideline to yield a 1 - 5 groundedness score. A score of 1 signifies that the response is not grounded, while a score of 5 indicates that all information in the answer is grounded. Figure <ref> shows the annotation guidelines. The LLM annotator (GPT-4) is then provided with the original question posed by LLM_user, the response from LLM_test, and the context given to LLM_test for formulating its answer. Subsequently, the LLM annotator assigns a groundedness score on a scale of 1 to 5 for each sample. To evaluate the effectiveness of our annotation guidelines, we collected a dataset of 266 examples including questions, responses, and the context used to generate the responses. These examples were annotated by human evaluators using the same scale from 1 to 5 for groundedness. In parallel, we employed our proposed framework utilizing GPT-4 to annotate the same data, also on the same scale from 1 to 5, using the crafted annotation guidelines.Then, we assessed the agreement between the human and GPT-4 annotations using two simple heuristic metrics. The first metric, exact agreement ratio, measures the proportion of instances where the human and GPT-4 scores are identical. The second metric serves more as a loose heuristic: relaxed agreement ratio, which considers agreement in cases where the human and GPT-4 scores differ by no more than 1 point on the scale. Our preliminary analysis revealed an exact agreement ratio of 60% and a relaxed agreement ratio of 80.5% as shown in table <ref>. Figure <ref> presents a confusion matrix illustrating the relationship between the human and GPT-4 annotations. Further work on human-human agreement is required as well to build an understanding of what an acceptable result is on each of these metrics. Additionally, more robust agreement analysis will be performed in future work. This sort of measurement provides a sense of the quality of the annotation guidelines, which allows us to iterate on and improve the guidelines. These preliminary results are also useful for building a rough notion of how confident we can be in resulting measurements. §.§ Experimental Design We conducted a set of experiments to evaluate three LLMs with the proposed evaluation framework. We refer to these three models as model 1, model 2, and model 3.[We anonymized model names for now—more details will be provided in future updates to this manuscript] In all of the reported experiments, we focused on conversation simulation tasks, where we engaged in a synthetic conversation with the LLM under test (LLM_test) to measure its tendency to violate RAI principles in the following aspects:* Succeeding in Jailbreaks* Generating Potentially Harmful Content, including but not limited to:[For these highly sociotechnical harms, the measurement resources were constructed by domain experts, leveraging techniques that are out of scope for this manuscript.] 
* Hateful or Unfair Content* Sexual Content* Violent Content * Leaking Intellectual Property (IP): * Songs* Books* NewsIn this round of experiments, we used GPT-4 in both the data generation and evaluation components of the pipeline. For data generation, we use GPT-4 to simulate the user agent (LLM_user) that chats with the LLM_test using the provided persona templates. For evaluation, we used GPT-4 as the underlying LLM for the annotation component. This experimental design is intended to roughly illustrate how our proposed framework can be leveraged in assessing the performance of different LLMs to cause different harms or violate RAI principles. §.§ Results As illustrated in Table <ref>, the three models exhibit similar behavior in terms of defect rates when evaluated for the generation of potentially harmful content. This indicates that the models produced content which was annotated as a defect on a similar number of samples, with Model 3 displaying the lowest rate of generating potentially harmful content defects. Notably, the generation of violent and hateful content is more prevalent compared to sexual content.In the context of intellectual property (IP) data leakage, Models 2 and 3 demonstrate identical defect rates across all categories (songs, books, and news), suggesting that these models generate IP-protected content at the same rate when tested on this set of measurement resources. This may hint that the measurement resources should be expanded or improved to provide greater clarity on possible performance differences between the models. Of the different IP categories, songs exhibit the highest leakage rates, followed by books and news. In contrast, Model 1 displays significantly higher defect rates for songs and news compared to Models 2 and 3, with a 45.8% defect rate for songscompared to 17.9% for both Models 2 and 3, and 9.6% defect rate for news compared to 1.1% for both Models 2 and 3. This implies that Model 1 is more susceptible to revealing IP-protected material in product scenarios.Regarding jailbreak evaluations, Models 2 and 3 exhibit comparable defect rates, with leaking guidelines being the most successful attack vector compared to generating adult content or promoting illegal activities. Model 1, however, demonstrates a significantly higher vulnerability to guideline leakage, with an 80% success rate compared to 51% and 53% for Models 2 and 3, respectively.In conclusion, our evaluation reveals that Models 2 and 3 display lower rates of generating IP-protected content and exposing underlying guidelines than Model 1. So, we suggest that Models 2 and 3 may be more suitable as components for real-world AI products and services compared to Model 1. § LIMITATIONSThis framework facilitates rapid and repeated evaluation of different versions of LLMs and associated products and services. However, there are several limitations.Using an LLM to measure harms from another LLM Notably, this work does not adequately address issues related to the risks of using an LLM to measure harms from another LLM, especially given that LLMs are known to cause harms. This is an open research problem, although we note that the evaluation component of our framework is flexible enough to plug in other evaluation methods. This concern can manifest in both the data generation and evaluation components of the framework.In the case of data generation (during task simulation), by using an LLM to mimic user behavior, we run the risk of the LLM failing to simulate realistic conversations. 
This may impact the ecological validity of the generated data. Additionally, the LLM used in task simulation may fail to represent linguistic patterns of certain demographic groups, causing measurement efforts to underestimate the potential for harms affecting marginalized groups. In the case of evaluation, using an LLM to annotate potential harms from other LLM-generated content may lead to issues. LLMs are known to produce harmful content and can disproportionately produce some specific types of harmful content affecting some specific groups of people. If an LLM is vulnerable to producing some specific type of harmful content, will it be effective in evaluating and annotating that same type of content? This may lead to under-annotation of harms. Simultaneously, other tendencies of LLMs may lead to over-annotation of harms. LLMs are known to struggle with groundedness, and we have observed cases where the LLM annotator yields a defect score and text reasoning that cites non-existent parts of the sample. How frequent and impactful may ungrounded generations be in the annotation process? Because the real-life consequences of falsely labeling a piece of text as not harmful are perhaps greater than those of falsely labeling text as harmful, the amount of potentially harmful content measured from this framework should be treated as a lower bound for the real amount of potentially harmful content. One heuristic for gauging the impact of the issues described above is human-model annotation agreement. While this practice provides some greater confidence in the reliability of LLM annotations, it cannot be viewed as a completely adequate replacement for the holistic research required to address these concerns. Additionally, measuring generic human-model annotation agreement is not sufficient. This is due to the reality that different groups of humans with different lived experiences will experience different harms and annotate differently. Utility and interpretation Another limitation lies in the utility and interpretation of the resulting measurements. As mentioned in section <ref>, a 0% defect rate cannot be interpreted to mean that the AI system under test does not cause harm. The resulting measurements are relative rather than absolute, so they are useful for diagnostics and comparisons between systems but are not applicable for estimations of absolute risk or absolute likelihood of harm. Validity and reliability Likely the largest challenge of this technical framework is the fact that it requires carefully-constructed measurement resources for sociotechnical problems. Unfortunately, if these measurement resources are created poorly, their usage in the technical framework does not immediately raise any red flags. The usage of poorly constructed or invalid measurement resources may go unnoticed, which can lead to increased harm if practitioners trust the resulting measurements. In our initial case study, we engaged with domain experts to create measurement resources, but future work is required to understand the practices involved in creating reliable and valid measurement resources. Another aspect of reliability deals with the reproducibility and stability of annotations generated by an LLM. We have observed repeated annotations on the same sample leading to different results. In response, we implement a stability factor that runs the annotation process multiple times and uses the majority value generated for each sample.
While this can significantly reduce variability, it comes at the cost of increased computation, as it requires running the evaluation multiple times (e.g., 5 or 7), which can lead to longer evaluation times and greater resource requirements. Resources Finally, we recognize that this approach requires many invocations of large models. While access to LLMs is expanding, acquiring the necessary resources to run various LLMs, especially for large tasks, can be challenging and costly. The compute resources required for this method may make it impractical or inaccessible for some practitioners, and the environmental effects associated with the proliferation of this framework must be examined. § CONCLUSION AND FUTURE DIRECTIONS In this work, we presented a technical framework for the automated evaluation of large language models (LLMs) in various RAI-relevant harm areas such as groundedness, potentially harmful content, and leakage of intellectual property. This framework leverages LLMs to automate the evaluation process, enabling measurement at speeds and scales demanded by the current proliferation of LLM-powered products and services. The proposed framework offers an end-to-end pipeline for testing an LLM (LLM_test) by simulating an interaction with another LLM (LLM_user) and annotating the outputs with another LLM. The framework depends upon various measurement resources that are best created by domain experts for each harm area subject to measurement. Then, we demonstrated the utility of the proposed framework by evaluating three recent LLMs across three distinct categories of harm (leakage of IP content, generation of potentially harmful content, and jailbreak). The resulting measurements enable us to compare the relative performance of these models and serve as an example of how this framework can be used by practitioners making decisions about which model versions to use in their AI products and services. While much more work is required to explore how reliable and valid measurement resources are created for each harm area, this framework provides a viable path to evaluating harms stemming from LLM-based AI systems at a speed and scale that can keep up with the current pace of development. For future work, we will examine the aforementioned limitations to make the measurement approach more reliable, valid, repeatable, objective, and more cost-efficient. | http://arxiv.org/abs/2310.17750v1 | {
"authors": [
"Ahmed Magooda",
"Alec Helyar",
"Kyle Jackson",
"David Sullivan",
"Chad Atalla",
"Emily Sheng",
"Dan Vann",
"Richard Edgar",
"Hamid Palangi",
"Roman Lutz",
"Hongliang Kong",
"Vincent Yun",
"Eslam Kamal",
"Federico Zarfati",
"Hanna Wallach",
"Sarah Bird",
"Mei Chen"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231026194506",
"title": "A Framework for Automated Measurement of Responsible AI Harms in Generative AI Applications"
} |
Dipartimento di Ingegneria Civile, Informatica e delle Tecnologie Aeronautiche, Università “Roma Tre”, Via Vito Volterra 62, I-00146 Rome, Italy A semi-analytical computational algorithm to model the wavefield generated by paraxial diffraction of a class of Laguerre-Gauss beams by sharp-edge elliptic apertures is here developed. It is conceived as a sort of “numerical laboratory” aimed at exploring, within the simplest possible geometry, some aspects of an intriguing singular optics scenario, involving vortices and caustics interaction. Sharp-Edge Diffraction of Laguerre-Gauss Vortex Beams by Elliptic Apertures Riccardo Borghi January 14, 2024 =========================================================================== Light beams carrying out topological singularities in the form of phase vortices continue to be among the most investigated “optical objects” since the pioneering work by Allen et al. <cit.>. The use of sharp-edge diffraction to unveil the hidden structure of phase singularities carried out by such vortex beams has inspired a great deal of work, both theoretical and experimental <cit.>. Most of the above papers explore a similar scenario, in which Laguerre-Gauss (LG) vortex beams were diffracted by hard-edge half planes. Sharp-edge diffraction of LG vortex beams has also been investigated for polygonal as well as circular apertures <cit.>, although most of such investigations have been limited to far-field (Fraunhofer) analysis. A different scenario, which seems to be still largely unexplored so far, has to do with sharp-edge diffraction of vortex beams by noncircular smooth apertures. To emphasize its relevance, a few words by Michael Berry <cit.> could be quoted, “for general smooth boundaries, focusing of edge waves occurs on caustic curves (envelope of normals to the boundary).” Berry's words must be contextualized within the framework of the so-called Catastrophe Optics <cit.>, where caustics generated from plane-wave diffraction by smooth sharp-edge apertures are arranged approximately along the geometric evolute of the aperture rim <cit.>. The possibility of using elliptic apertures to explore paraxial vortex sharp-edge diffraction represents the main idea of the present letter. Ellipses have smooth, curved rims whose shapes can easily be controlled by a single real parameter (the eccentricity or, equivalently, the aspect ratio). The geometrical evolute of an ellipse is a symmetric closed curve called an astroid, containing as many as four cusps. Ellipses are the simplest smooth closed curves not endowed with radial symmetry (except for a null eccentricity).
They are natural candidates, together with LG beams, for building up a scenario mathematically as simple as possible, in which the interactionbetween singularities of different nature (optical vortices and caustics) can be explored.In the first part of theLetter, a semi-analytical algorithm will be developed to evaluate, up to arbitrarily high (in principle) accuracies, the optical field generated by theparaxial diffraction of a monochromatic (wavelength λ) vortex LG beams carrying out a given topological charge m∈ℤ by a sharp-edge elliptic aperture of given aspect ratio χ∈ (0,1), placed ina planar, opaque screen and delimited by the boundary Γ=∂.The screen coincides with the plane z=0 of a suitable cylindrical reference frame (;z), whose z-axis coincides with the ellipse symmetry axis, as well as with the mean propagation axis of the impinging LG beam.On denoting ψ_0() the disturbance distribution of the latter at z=0^-, the field distribution, say ψ, at the observation point P≡ (;z>0),is given, within paraxial approximation and apart from an overall phase factor exp( k z), bythe following dimensionless version of Fresnel's integral: [ ψ =-iU/2π ∫_𝒜 d^2 ρ ψ_0(ρ) exp(iU/2 |r-ρ|^2) , ] where, in place of z, the Fresnel number U=2π a^2/λ z has been introduced, the symbol a denotinga characteristic length of the aperture size, which will be supposed to coincide with the ellipse major half-axis. All transverse lengths, both across the aperture and the observationplanes, have been normalized to a (for instance, ellipse's minor half-axis length equals χ).Theimpinging LG beam distribution willbe modeled by the following function: [ ψ_0(ρ) = ρ^m exp( m ϕ) exp(γ 2 ρ^2) , m ≥ 0,γ∈ℂ , ] where (ρ,ϕ) denote (normalized) polar coordinates of the transverse vector ρ across the aperture plane, while the overall amplitude factor has been set to the unity.The complex dimensionless parameter γ gives account of thecurvature (via its real part),as well as the transverse spot size (via Im{γ}≥ 0) of the impinging beam.On substituting from Eq. (<ref>)into Eq. (<ref>) the propagated diffracted field take on the form [ ψ=-i U2πexp( U 2 r^2);; ×∫_𝒜d^2ρ ρ^m exp( mϕ) exp( V 2 ρ^2) exp(-iUr·ρ) , ] where the complex parameterV=U+γ has been introduced, with Re{ V}≤ 0.In order to evaluate the integral into Eq. (<ref>) up to arbitrarily high (in principle) accuracies,the Fourier-based approach proposed in <cit.> to deal with plane-wave illumination (m=0), can be extended to |m|>0. Forroom reasons, only the principal details of the new computational algorithm will be provided here, themost important of them being the following represention ofCartesian components of transverse vectorsρ and r into Eq. (<ref>): {[ ρ =(ξ cosα , χ ξ sinα) , 0 ≤α≤ 2π ,ξ≥ 0 ,;; r =(χ Xcosβ , Xsinβ) , 0 ≤β≤ 2π , X ≥ 0 . ]. Then, on substituting from Eq. (<ref>) into Eq. (<ref>), long but straightforward algebra gives[ ψ =-iUχ exp(i/2U r^2) ∫_0^1 dξ ξ^m+1 exp( V1+χ^24ξ^2);;×1/2π∫_0^2πdα exp( V1-χ^24ξ^2cos 2α);; ×exp[- UXξχ cos(α - β)] [∑_σ=± η_σ exp(σα)]^m , ] where use has also been made ofd^2ρ = χ ξ dξ dα and where quantities η_±=(1±χ)/2 have been introduced. The otheringredientis Jacobi-Anger's formula,[ exp(iηcos 2α) = ∑_n=-∞^∞ i^n J_n(η) exp(i2nα) , ] withJ_n(·) denoting the nth-order Bessel function of the first kind. On usingEqs. 
(<ref>) and (<ref>) it is not difficult to find[ ψ =-iUχ2 exp(i/2U r^2) ∑_ℓ=0^m (mℓ) η^ℓ_+η^m-l_-; ; ×Ψ^m_2ℓ-m[V(1+χ^2)4, Uχ X, V(1-χ^2)4; β] , ] where functions Ψ^m_k are defined by [ Ψ^m_k(a,b,c;φ) = ^-kexp( kφ) ∑_n∈ℤ ^-nexp( 2nφ);; ×∫_0^1 dξ ξ^m/2 exp( aξ)J_n(cξ) J_2n+k(b√(ξ)) . ] As we shall see in a moment, numerical evaluation of the last integrals does not present any problems, even fornonsmall Fresnel's numbers and/or nonsmall values of X.Moreover,as far as the convergence of the n-series into Eq. (<ref>) is concerned, a truncationcriterion similar to that proposed in Ref. <cit.> will be adopted. Accordingly, theindex n will run within the range [-N,N], where [ N≃ int[max{Uχ X,|V|(1-χ^2)2}] + 1 , ] with int[·] denoting the integer part operator. That is all.In the second part of the Letter, the scenario above described will initially be explored with a LG beam with topological charge m=3.Such a value guarantees a reasonably high ratio between the values of the impinging optical intensity |ψ_0|^2 at the aperture boundary and thosein central region. In this way, the main contribution to the diffracted beam ψ is expectedto come from the rim Γ (with the three singularities being initially all located at the beams axis). For simplicity, the spot-size of the incident LG beams will be assumedto be much greater of the aperture sizes, what implies that γ≃ 0 and thus V≃ U. Incidentally, such a situation is also a sort of “computational worst case,” as far as the evaluation of the integrals into Eq. (<ref>) is concerned. Our exploration starts from the far-field limit U→ 0,where the diffracted field into Eq. (<ref>) takes on, apart unessential overall phase and amplitude factors, the followingform: [ ψ_ ff = ∫_𝒜 d^2 ρ ψ_0(ρ) exp(-i U r·ρ) , U → 0 . ] Equation (<ref>) shows that ψ_ ff depends only on the spatial frequency p=Ur.When m=0 (impinging plane wave), ψ_ ff is proportional to the Fourier transform (FT) of the characteristic function of the elliptic aperture 𝒜, given by πχ2J_1(√(p^2_x+χ^2p^2_y))/√(p^2_x+χ^2p^2_y), with p=(p_x,p_y).For m>0, the FT evaluation becomes considerably harder,due to the mismatchbetween the different symmetries related to ψ_0 (radial) and to the aperture (elliptical, of course). However, onusing partial derivativation under the Fourier integral,long but straightforward algebra gives [ℱ_m(p)=∫_𝒜 d^2 ρ[ρ exp(iϕ)]^m exp(-i p·ρ) =; ; =πχ(-1)^m∑_k=0^m (mk) i^-k∂^m∂ p_x^k∂ p_y^m-k 2J_1(√(p^2_x+χ^2p^2_y))√(p^2_x+χ^2p^2_y) . ] Here, the analytical evaluation ofℱ_3(p) has been done with the help of the latest release of Mathematica (the release 13.3), but the result will be not made explicit here for room reasons.Thank to such result, it is not difficult to prove that, in the far zone, the three singularities originally carried out by the impinging LG beam eventually propagate along different rectilinear paths contained in thexz-plane, with the x-axis being aligned along the ellipse major axis: one of them (the so-called on-axis singularity) propagates along the z-axis. Positions of the other two (theoff-axis singularities) will approximately be given, at a typicaltransverse plane U→ 0,by (±p̅_x/U , 0), where p̅_x is the least positive root of the equation ℱ_3(p̅_x,0)=0. This is in general agreement, for example, with some of the experimental results provided in <cit.>. In Fig. <ref>, two-dimensional (2D) maps of the phase distributions of the diffracted field at U=3 are shownfor three different elliptic shapes, namely χ=1/2 (a), χ=7/10 (b), and χ=9/10 (c). 
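Before discussing the near-field evolution, it is worth recalling how the astroid-shaped evolute that organizes the following figures can be traced. The short Python sketch below is illustrative only (it merely plots the standard parametrization of an ellipse's evolute, and is not part of the diffraction algorithm): for the normalized rim (cos t, χ sin t), the evolute is ((1-χ²) cos³t, -((1-χ²)/χ) sin³t), whose four cusps form the "diamond" referred to below.

import numpy as np
import matplotlib.pyplot as plt

chi = 0.9                                   # aspect ratio (minor/major half-axis)
t = np.linspace(0.0, 2.0 * np.pi, 400)

# Aperture rim, with the major half-axis normalized to 1 as in the text.
x_rim, y_rim = np.cos(t), chi * np.sin(t)

# Geometrical evolute of the ellipse: an astroid-like curve with four cusps.
x_ev = (1.0 - chi**2) * np.cos(t) ** 3
y_ev = -(1.0 - chi**2) / chi * np.sin(t) ** 3

plt.plot(x_rim, y_rim, label="aperture rim")
plt.plot(x_ev, y_ev, label="evolute (caustic skeleton)")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()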
On increasing U, while the three singularities get closer and closer, it is also possible to appreciatethe counterclockwise (for m>0) rotation, with respect to the horizontal axis, of the segment joining all of them. This is sketchedin Fig. <ref> for χ=7/10 and U=11 (a) as well as for χ=9/10 and U=31 (b). With the above choice of the Fresnel number values, the outer singularities approximately touch the astroid-shapedcaustics. On further increasing U,they eventually cross the evolute and all singularities will be located inside the diamond-shaped region. As we shall see in a moment, this will cause a considerable change of the phaseas well as of the intensity distributions. To appreciate, at least qualitatively, such a new (topologically speaking) scenario, Fig. <ref> shows 2D maps of intensity (first row) and phase (second row)distributionsfor χ=9/10 and for three different Fresnel's numbers. Several other numerical experiments, not shown here, have been carried out for different values of χ as well as of m. All of them seem to confirm that the aperture evolute would play a key role to establish some basic features of the diffraction patterns. From acatastrophe optics perspective, this should be recognized as an important achievement.It was already emphasized <cit.> that, for plane-wave illumination, aperture's evolute provides, in the limitof U≫ 1, a sort of “geometrical skeleton”to be decorated by Pearcey-based diffraction catastrophes. In Fig. <ref> the 2D maps of intensity and phase are shown for U=101 and for three different aperture shapes, namely χ=5/10, χ=7/10, and χ=9/10.In all three cases, it can be appreciated how the diffracted intensity rests perfectly on the caustics (the white cusped curve).As far as the phase distribution is concerned,the role of the white “diamond” in discriminating its behaviour inside and outside of it,clearlyappears.Such role can be made much more evident on further increasing U, as it was done in our last numerical example, shown in Fig. <ref>. It is also aimed at showing at least one comparison between diffracted wavefields distributions (intensity and phase) generated by plane waves carrying out singularities of different topological charges.The aperture shape corresponds to χ=9/10, while the observation plane is located at U=307. The first column contains intensity and phase distributions form=1,the second column for m=2,the thirdfor m=3, and the fourth for m=4. Concerning theintensity maps, similar considerations as those for the scenario of Fig. <ref> can be done, in particular about the role played by the evolute. Moreover, it can also be appreciated how, for the lowest values of the topologicalcharge (m=1 and 2), the contribution coming from the central region of the illuminated aperture is sufficiently nonnegligible with respect to that coming from the aperture rim to produce, outside the diamond-shaped region, an interference fringe system similar to that producedby plane wave illumination (see, for example, Fig. 6 of <cit.>). On the opposite side, for instance when m=4, the diffracted wavefield can be ascribed to the sole edge waves, with an optical distribution similar to that producedin the diffraction by elliptic obstacles (see, for example, Fig. 10 of <cit.>).As far as the phase distribution is concerned, it is possible, for m1, to recognize the presence of a spiraling behaviour outside the diamond, which is much more evident by“zooming out” the pictures. 
A blow up of them, on the contrary, would put into evidence the presence of severalphase jumps, arranged along the curves of zero intensity, as it should be expected. Inside the diamond, on the contrary, the phase distributionpresents such a topological complexity which would deserve a dedicated, deeper investigation according to the general prescriptions given, for instance,in <cit.>. Optical vortices are phase singularities with null optical intensity.Caustics are singularsolutions of the wave equation where the optical intensity goes asymptoticallyto infinity.From a catastrophe optics perspective, it is rather natural to think smooth, noncircular sharp-edge apertures as potential sources of caustics.In the present Letter, a semi-analytical algorithm for studying paraxial diffraction of vortex LGbeams by centered elliptic apertures hasbeen developed. The potential interest for such an intriguingscenario wasdue to the simultaneous presence of different types of optical singularities: vortices and caustics. In particular, ourresults put into evidence the key role played by aperture's evolute in determining distinct (topologically speaking) behaviorsof intensity and phase distributions of the diffracted wavefield inside and outside of it, over a wide range of Fresnel's number values. From a catastrophe optics perspective, the presented results also give “experimental” support about the fact that the edge waves generated at the aperture rim focalize along the geometrical evolute (a signature of classical BDW theory for paraxial plane waves as well as for fundamental Gaussian beams <cit.>), seems to hold also for the class of vortex beams here considered, a result which could have not taken for granted a priori.Its proof still remains an open problem.The nature of the present letter is merely computational.Our algorithm, based on rapidly convergent Fourier series, presents only one bottleneck: the numerical evaluation of the integrals into Eq. (<ref>).However, it has been shown how high-resolution maps of the diffracted wavefield with, in principle, arbitrarily high accuracies can be (and have been) achieved even for Fresnel numbers of the order of some hundreds. We believe that the availability of such an accurate “numerical lab” could also help the design and development of experimental setups for future studies on the interaction betweenvortices and caustics <cit.>. Finally, what is contained herewould also like to be a further, modest contribution to spreading “catastrophe optics philosophy” for approaching complex optical problems. § ACKNOWLEDGEMENTS I wish to thank Turi Maria Spinozzi for his invaluable help during the preparation of the manuscript.To Jari Turunen (1961-2023), in Memoriam. 10 Allen/Beijersbergen/Spreeuw/Woerdman/1992 L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, Orbital angular momentum of light and the transformation oflaguerre-gaussian laser modes,Phys. Rev. A 45, 8185–8189 (1992).Soskin/Vasnetsov/2001 M.S. Soskin and M.V Vasnetsov, Singular opticsProg. Opt. 42, 219-276 (2001). Dennis/OHolleran/Padgett/2009 M. R. Dennis, K. O'Holleran, and M. J. Padgett, Singular optics: optical vortices and polarization singularitiesProg. Opt. 53, 293 (2009). Hickmann/Fonseca/Silva/ChavezCerda/2010 J. Hickmann, E. J. S. Fonseca, W. C. Soares Silva, and S. Chávez-Cerda, Unveiling a Truncated Optical Lattice Associated with a Triangular Aperture Using Light's Orbital Angular Momentum, Phys. Rev. Lett. 105, 053904 (2010). Borchardt/Duparre/Skupin/2012 J. 
Borchardt, M. Duparré, and S. Skupin,Tracking phasesingularities in optical fields, (2012), vol. 8274.Bekshaev/Chernykh/Khoroshun/Mikhaylovskaya/2017 A. Bekshaev, A. Chernykh, A. Khoroshun, and L. Mikhaylovskaya,Singular skeleton evolution and topological reactions inedge-diffracted circular optical-vortex beams, Opt. Commun.397, 72 – 83 (2017). Chernykh/Petrov/2021 A. V. Chernykh and N. V. Petrov,Optical vortex trajectory of theedge-diffracted single-charged laguerre-gaussian beam,Opt. LaserEng. 139 (2021).Liu/Sun/Pu/Lu/2013 Y. Liu, S. Sun, J. Pu, and B. Lü,Propagation of an optical vortexbeam through a diamond-shaped aperture,Opt.Laser Tech.45, 473 – 479 (2013). Ambuj/Vyas/Singh/2014 A. Ambuj, R. Vyas, and S. Singh, Diffraction of orbital angular momentum carrying optical beams by a circular aperture Opt. Lett. 39, 5475 (2014).Stahl/Gbur/2016 C. Stahl and G. Gbur,Analytic calculation of vortex diffraction by atriangular aperture,J. Opt. Soc. Am. A 33, 1175 (2016).Taira/Zhang/2017 Y. Taira and S. Zhang,Split in phase singularities of an optical vortex byoff-axis diffraction through a simple circular aperture,Optics Letters42, 1373 – 1376 (2017).Narag/Hermosa/2019 J. P. C. Narag and N. Hermosa, Probing Higher Orbital Angular Momentum of Laguerre-Gaussian Beams via Diffraction through a Translated Single Slit Phys. Rev. Appl. 11, 054025 (2019).Taira/Kohmura/2019 Y. Taira and Y. Kohmura, Measuring the topological charge of an x-ray vortex using a triangular aperture, J. Opt. 21,045604 (2019).Goto/Tsujimura/Kubo/2019 Y. Goto, T. Tsujimura, and S. Kubo,Diffraction patterns of the millimeter wavewith a helical wavefront by a triangular aperture,Int. J.Infr. Mill. Waves 40, 943–951 (2019).Rocha/Amaral/Fonseca/Jesus-Silva/2019 J. C. A. Rocha, J. P. Amaral, and A. J. Fonseca, E.J. S.and Jesus-Silva, Study of the conservation of the topological charge strength indiffraction by apertures,J. Opt. Soc. Am. B 36, 2114 – 2118 (2019).Berry/2001 M. V. Berry,Fractal modes of unstable lasers with polygonal andcircular mirrors,Opt. Commun. 200, 321–330 (2001).Berry/Upstill/1980 M. V. Berry and C. Upstill,Catastrophe optics: morphologies ofcaustics and their diffraction patterns,Prog. Opt. 18, 257–346 (1980).Nye/1999 J. F. Nye, Natural Focusing and Fine Structure of Light (IOP Publishing, Bristol, 1999).Borghi/2015 R. Borghi,Uniform asymptotics of paraxial boundary diffractionwaves,J. Opt. Soc. Am. A 32, 685 – 696 (2015).Borghi/2016 R. Borghi,Catastrophe optics of sharp-edge diffraction,OpticsLetters 41, 3114–3117 (2016).Borghi/2014 R. Borghi,Plane-wave fresnel diffraction by elliptic apertures: Afourier-based approach,J. Opt. Soc. Am. A31, 2120–2130 (2014).Borghi/2019 R. Borghi,Sharp-edge diffraction under gaussian illumination: Aparaxial revisitation of miyamoto-wolf's theory,J. Opt. Soc. Am. A36,1048–1057 (2019).Soifer/Kharitonov/Khonina/Volotovsky/2019V. A. Soifer,S. I. Kharitonov, S. N.Khonina, and S. G. Volotovsky,Caustics of Vortex Optical Beams,Doklady Phys. 64,1048 - 1057 (2019).Xiao/Xie/Courvoisier/Hu/2022 N. Xiao, C. Xie, F. Courvoisier, and M. Hu,Caustics of the axially symmetric vortex beams: analysis and engineering, Opt. Expr.30,29507 - 29517 (2022). | http://arxiv.org/abs/2310.18298v1 | {
"authors": [
"Riccardo Borghi"
],
"categories": [
"physics.optics"
],
"primary_category": "physics.optics",
"published": "20231027173515",
"title": "Sharp-Edge Diffraction of Laguerre-Gauss Vortex Beams by Elliptic Apertures"
} |
AIP/123-QED [email protected] , [email protected] of Mechanical Engineering, Clemson UniversityWe study the problem of transporting a distribution of fluid particles in a Stokes flow to a desired final distribution in a fixed, finite time by controlling the torques of a pair of microrotors at fixed positions in the flow. Our approach is based on a finite dimensional approximation of the Liouville operator, the infinitesimal generator of the semi-group of Perron Frobenius operators, which describes the density transport dynamics associated with the microrotor flow fields.Using this operator, we express the transport problem as an optimal control problem in terms of the moments of the density function of the particle distribution.The finite time optimal control problem is then solved using differential dynamic programming, an iterative trajectory optimization method.We apply this framework to the microrotor driven flow on four related problems:transport using rotors in an unbounded flow, transport near to an infinite plane wall, transport within a circular domain, and the simultaneous transport of two particle distributions to a common final distribution in an unbounded flow.These examples demonstrate the effectiveness of the proposed framework and also allow us to better understand the effects of boundaries on the ability to achieve a desired fluid transport using a rotor-driven flow. Controlled density transport by microrotors in a Stokes flow using linear transfer operators Phanindra Tallapragada 20 October 2023 ============================================================================================ The problem oftransporting a blob of fluidparticles in a Stokes flow to a desired distribution in a fixed, finite time period has several important applications. This is related to the scientific question of how fluid flow structures direct Lagrangian transport. We investigate this problem of directing the transport by manipulating the flow, specifically in the Stokes flow context, by controlling the strengths of two rotors fixed in space.Manipulating the flow allows control of dynamical structures such as almost invariant sets that define Lagrangian transport. We model the time evolution of the fluid particle density using finite dimensional approximations of the Liouville operators for the micro-rotor flow fields.Using this operator, the particle transport problem is framed as an optimal control problem, which we solve numerically.This framework is then applied to the problem of transporting a blob of fluid particles in free space, near to a plane wall, in a circular confinement, and the transport of two blobs to a common target.These examples demonstrate the effectiveness of the proposed method and help us to understand the abilities and limitations of fluid transport in a rotor-driven flow. § INTRODUCTIONUnderstanding and controlling the motion of fluid particles in the low Reynolds number regime has become increasingly significant in recent years, particularly in the realm of microrobotics and microfluidics.Microrotors and micropumps propelled by various mechanisms have been proposed as a useful means of transporting fluid particles or other submersed cargo in a microfluidic solution <cit.>.In this paper, we develop a method based on recent advances in data-driven dynamical systems to model and control the transport of distributions of fluid particles using microrotors in a Stokes flow. 
In recent decades, significant attention has been given to the application of dynamical systems theory to problems of fluid transport at low Reynolds number, with much of this attention focusing on mixing by chaotic advection <cit.>.Such research has been largely inspired by applications including industrial mixing, design of microfluidic lab-on-a-chip devices, and biomedical applications such as cell sorting and targeted drug delivery. While mixing is important to many such applications, many also require an ability to transport packets of fluid or a concentrated, passive scalar in a controlled way to a target destination while minimally mixing, stretching, or distributing the blob. Despite its growing practical importance, this area has received considerably less research attention. In this work, we study the problem of steering an ensemble of fluid particles in a Stokes flow from an initial particle distribution to a final distribution, where the particles are advected by the flow field generated by a pair of fixed rotors. In our formulation, the distribution of fluid particles is described by a density function, and a data-driven method based on a finite dimensional approximation of the Liouville operator associated with the rotor-driven flow is developed to approximate the density transport dynamics. With this model, we show that the problem of controlled density transport can be posed as an optimal control problem which we solve using differential dynamic programming, an iterative trajectory optimization scheme.To apply this framework to the problem of steering a density using fixed rotors, we model the rotors as rotlets, the singularity solution of the Stokes equations associated with a point torque <cit.>. This work is an extension of the authors' recently submitted paper<cit.>.In this work, we seek to further highlight the fluid mechanical applications of the proposed method and use it to study the effects of boundaries on this fluid transport problem. The rotlet singularity model has become commonly used as an approximation for flows generated by rotating bodies at small length and velocity scales.Meleshko and Aref <cit.> studied the flow generated by the so-called blinking rotlet model consisting of two rotlets at fixed positions in a circular domain which run for a fixed time period in an alternating pattern, itself a Stokes flow alternative to the blinking vortex model introduced by Aref <cit.>.These models have received interest as minimalistic examples of the concept of chaotic advection, the notion that fluid particle trajectories in time dependent laminar flows can exhibit chaotic motions, even in two dimensions. Van der Woude et al. 
<cit.> considered a similar blinking rotlet problem in a rectangular cavity and considered mixing by sinusoidal stirring patterns as well as the typical blinking pattern.While these works introduce a time dependence by explicitly varying the strengths of fixed rotors in time, more recent works have studied effect of a time dependence in the fluid flow due non-stationary rotors, typically where each rotor is advected by the flow field generated by all other rotors <cit.>.In this work, while we only consider the case of fixed rotors, we develop methods to stir in a controlled way to steer a distribution to a desired location while minimizing the spread of the particle distribution.Several works have framed fluid mechanical transport problems as optimization or optimal control problems <cit.>, with most of these focusing on optimizing mixing performance.Mathew et al <cit.> studied the problem of optimally modulating (in time) a finite set of spatially varying force fields to optimize mixing over a fixed timespan and for a fixed action integral, using a conjugate gradient descent method to numerically approximate the optimal control. Zhang and Balasuriya<cit.> develop a method to determine an optimal spatiotemporally varying additive control velocity field for two problems: Lagrangian mixing and to drive trajectories to desired end states in a finite time.In this work, we present a numerical method to optimally modulate two flow fields (corresponding to rotors in fixed positions) in order to drive an initial distribution of fluid particles to a desired final distribution, as specified by the moments of the density function of the distribution. Further, we examine the structure of the optimal flow field by calculating the coherent sets and the associated flow structures produced by this flow field.These results show that the optimal control typically produces a flow field which generates a transport barrier dividing the coherent sets which passes through the blob location at the initial time and connects to the target location at the final time, effectively directing the particle distribution toward the target. Our method relies on a finite-dimensional approximation of the Liouville operator, the infinitesimal generator of the semi-group of Perron-Frobenius operators <cit.>, which describe the density transport dynamics for a given flow map.The use of data-driven approximations of transfer operators in modelling fluid flows and in problems with actuation has been an active area of research in recent years <cit.>, with many of the most common methods having their origin in the analysis of fluid flows <cit.>. Refs. 
froyland2016optimal,froyland2017optimalmixing develop a convex optimization formulation based on transfer operators to determine optimal local perturbations of a flow field to enhance mixing of a fluid.Klünker et al <cit.> recently studied mixing in open flows in terms of spectral properties of a finite-rank approximation of the Perron Frobenius operator.Sinha et al<cit.> use the Perron Frobenius and Koopman generators associated with a given velocity field to choose an optimal location of release of a dispersant in the flow field.Brockett<cit.> proposed the optimal control of the Liouville equation with applications in ensemble control, but assumes a control input that can be varied arbitrarily in space and time.Relatedly, Grover and Elamvazuthi <cit.> use transfer operators and their generators in a graph-based approach to solving the optimal transport problem, motivated control problems for multi-agent and swarm systems, in which the control is also taken to vary spatiotemporally. The problem considered in this work can be viewed as a variation of those in <cit.> with one significant distinction:in this work the control input u does not vary with the spatial location of the particle. The flow field is restricted to those that can be generated as linear combinations of the flow fields of two fixed micro-rotors and the strengths of the micro rotors in turn influence the flow field. The remainder of the paper is structured as follows.In Sec. <ref>, we review methods from the operator theoretic view of dynamical systems for modelling the transport of density functions through a dynamical system and present a numerical method for the computation of a finite dimensional approximation of the Liouville operator. In Sec. <ref> we demonstrate that this method can be naturally extended to account for the effects of actuation on a dynamical system, allowing the use of this framework to express the density transport problem as an optimal reference tracking problem. In Sec. <ref> we discuss how the operator theoretic methods relate to the computation of finite-time coherent sets for a time-varying flow field. In Sec. <ref> we briefly review the method of differential dynamic programming, an iterative trajectory optimization scheme which we implement to numerically solve this optimal control problem.In Sec. <ref>, we implement these methods on the problem of steering a density of fluid particles using a pair of fixed microrotors. In Secs. <ref> and <ref>, we study the effects of plane wall and circular boundaries on this transport problem in comparison to the case of an unbounded flow. In Sec. <ref>, we consider the ability to manipulate multiple density functions simultaneously in this system. § DENSITY TRANSPORT In order to formulate the problem of controlling the motion of ensembles of fluid particles, we will first specify the distribution of such an ensemble by a density function.In this section, we will review the methods used to study the evolution of such a density function over time, given that the individual particle motion is specified by a known dynamical system. §.§ Perron Frobenius operator and generatorConsider a dynamical system dx/dt = f(x)on a measure space (, , μ) where x∈ is the state, ⊂^n is the state space,is the Borel σ-algebra on , and μ is a measure on . 
Denote the time-t flow map from an initial state x_0 by Φ^t(x_0).We will further assume that the measure μ is absolutely continuous with respect to the Lebesgue measure, so that μ can be expressed in terms of a density, ρ∈ L_1(), such that dμ(x) = μ(dx) = ρ(x)dx.With this, the Perron Frobenius operator, ^t:L_1()↦ L_1() corresponding to the flow Φ^t can be defined as the unique operator <cit.> such that ∫_A ^tρ(x) dx = ∫_(Φ^t)^-1(A)ρ(x) dxfor any A∈, t≥ 0. The family of these operators, parameterized by time, t, have been shown to satisfy the properties of a semigroup <cit.>.The infinitesimal generator of this semigroup, denoted here by Ł, is known as the Liouville operator or the Perron-Frobenius generator, and defined as Łρ = lim_t→ 0^tρ - ρ/t = lim_t→ 0(^t-/t) ρwhereis the identity operator.Alternatively, as this operator expresses the deformation of a density function under an infinitesimal action of the operator ^t, the Liouville operator can be thought of as expressing a continuity equation for the number of particles in the state space <cit.>; that is,∂ρ/∂ t = Łρ = -∇_x· (ρ f) .From this definition, we can immediately derive the following important property of the Liouville operator. Suppose the Liouville operator associated with a vector field f_1:↦^n is denoted by Ł_1 and the Liouville operator associated with the vector field f_2:↦^n by Ł_2, then the Liouville operator associated with the vector field f(x) = f_1(x) + f_2(x), is Ł = Ł_1 + Ł_2. The numerical method used for the computation of the Perron-Frobenius operator and Liouville operator is derived from the relationship between the Perron-Frobenius operator and the Koopman operator.The Koopman operator ^t:L^∞() ↦ L^∞() is the operator which propagates observable functions h∈ L^∞() forward in time along trajectories of the system and is defined as ^th = h∘Φ^t .The Koopman and Perron-Frobenius operators are adjoint to one another, with the adjoint relationship given by ∫_[^th](x)ρ(x)dx = ∫_h(x)[^tρ](x) dx.§.§ Numerical approximation One of the most common methods of approximating the Perron-Frobenius operator is a set-oriented approach known as Ulam's method<cit.>, in which a domain of interest is discretized into cells, a large number of short-time trajectories are simulated, and then the operator is computed as the matrix containing the approximate transition probabilities between the cells<cit.>.It has been shown that this method can be viewed as a Galerkin projection of the Perron Frobenius operator onto the function space spanned by indicator functions corresponding to the discrete cells <cit.>.In recent works involving numerical approximation of the Koopman operator, one of the most common approaches is that of extended dynamic mode decomposition (EDMD) <cit.>, in which the operator is computed by solving a least squares problem, which can also be viewed as a Galerkin projection of the operator onto a function space spanned by a predefined set of basis functions <cit.>. By exploiting the adjoint relationship between the Perron Frobenius and Koopman operators, it has been shown that methods typically used for one operator can be used to compute the other.Based on this idea, recent works have developed variations of EDMD for the computation of the Perron-Frobenius operator <cit.>. In this work, we also implement EDMD for the computation of the Perron-Frobenius operator, which we outline below, largely following Klus et al. 
<cit.>.The method requires a predefined dictionaryof k scalar-valued basis functions, = {ψ_1, ψ_2, …, ψ_k}, where ψ_i:↦ for i = 1, …, k and trajectory data collected from the dynamical system with fixed timestep, Δ t, arranged into snapshot matrices as X=[ x_1 , ⋯ , x_m ]Y =[x_1^+ , ⋯ ,x_m^+ ]where the subscript i=1,…, m is a measurement index and x_i^+ = Φ^Δ t(x_i). Then, given an the observable function h and density ρ, these functions are approximated by their projections onto the space spanned by elements ofash(x)≈ĥ^TΨ(x) ρ(x)≈Ψ^T(x) ρ̂where ĥ, ρ̂∈^k are column vectors containing the projection coefficients and Ψ:↦^k is a column-vector valued function where the elements are given by [Ψ(x)]_i = ψ_i(x).Substituting these expansions into Eq. (<ref>) yields∫_^Δ t[ĥ^TΨ]Ψ^Tρ̂dx=∫_ĥ^TΨ^Δ t[Ψ^Tρ̂] dx .Then noting that [^Δ tΨ](x) = Ψ(x^+) and assuming that ^Δ t can be approximated by a matrix P operating on the coordinates ρ̂, it is clear that in the limit of a large dataset m→∞, the above expression becomes Ψ_YΨ_X^T = Ψ_XΨ_X^TP + ewhere e is a residual error arising due to the matrix approximation of ^Δ t by P.This can be posed as a least-squares problem for the matrix Pmin_P Ψ_YΨ_X^T - Ψ_XΨ_X^TP_2^2where Ψ_X,Ψ_Y ∈^k× m are matrices with columns containing Ψ evaluated on the columns of X and Y respectively.The analytical solution of this least squares problem is P = (Ψ_XΨ_X^T)^†Ψ_YΨ_X^Twhere (·)^† is the Moore-Penrose pseudoinverse. Given this matrix approximation of the operator, P, if the timestep Δ t chosen in the data collection is sufficiently small, the corresponding matrix approximation L of the Liouville operator can be approximated based on the limit definition of the generator in Eq. <ref>. as L≈P - I_k/Δ twhere I_k is the k× k identity matrix.The matrix approximation P of the operator ^Δ t approximates the propagation of a density function ρ by advancing the projection coordinates ρ̂ forward for a finite time, Δ t.Similarly, the matrix approximation L of the generator Ł approximates the infinitesimal action of the operator ^t by approximating the time derivative of the projection coordinates dρ̂/dt = Lρ̂ .§.§ Extension to controlled systemsIn the field of control theory, much attention has been given in recent years to applications the Koopman operator to control systems <cit.>, including several recent works which have noted the usefulness of formulating the problem in terms of the Koopman generator, rather than the Koopman operator <cit.>.Such a formulation in terms of the Koopman generator typically results in a lifted system that is bilinear in the control and lifted state, as the effect of the control vector fields is expressed in a way that is also dependent on the lifted state. This approach allows for a better approximation of the effects of control as compared to other common approaches<cit.>, especially for systems in control-affine form dx/dt = f(x) + ∑_i = 1^n_cg_i(x)u_iwhere the u_i are control inputs and n_c is the number of control inputs affecting the system. Here we apply a similar approach to the density transport problem, expressed in terms of the Perron-Frobenius generator. As shown by Peitz et al. <cit.> for the Koopman generator, by the property of the Perron-Frobenius generator given in Lemma <ref>, if the dynamics are control-affine, then the generators are also control affine, as can be seen by application of Eq. 
<ref>.This leads to density transport dynamics of the following form∂ρ/∂ t = Ł_0ρ + ∑_i=1^n_cu_i_iρwhere Ł_0 is the Perron Frobenius generator associated with the vector field f(x) and similarly, the _i are the Perron Frobenius generators associated with the control vector fields g_i(x).Therefore, given the finite dimensional approximation of these generators, we can approximate the density transport dynamics as dρ̂/dt = L_0ρ̂ + ∑_i=1^n_c u_iB_iρ̂where the matrices L_0 and B_i are the matrix approximations of the operators in Eq. <ref>.These matrix approximations can be computed using the method outlined in Sec. <ref> for uncontrolled systems. This is done by first computing L_0 by Eq. <ref> using trajectory data from the system with all control inputs set to zero.Once L_0 is found, each of the B_i can be computed similarly by first computing a matrix L_i by Eq. <ref> using trajectory data from the system collected with u_i = 1 and u_j = 0 for j≠ i. This matrix L_i approximates the Liouville operator corresponding to a vector field f + g_i.The matrix approximation B_i of the operator corresponding to the vector field g_i alone is then found using Lemma <ref> as B_i = L_i - L_0. For the systems of microrotors considered in this work, the control inputs are taken to be the strengths γ_i of a pair of micro-rotors and the states are taken to be the position coordinates of a fluid particle.In this application, it will be shown (see, e.g. Eq. <ref>), that the control system is drift-free.That is, the vector field f = 0 in Eq. <ref>, and therefore, the corresponding Liouville operator Ł_0 = 0, as well for these systems.This is due to the typical quasistationary assumption of Stokes flows,which indicates that any change in the flow field is established instantaneously, without transience <cit.>.§.§.§ Propagation of momentsIn what follows, the problem of driving an initial density to a desired final density will be posed as an optimal control problem.The control inputs for this problem are the strengths of a finite number of micro-rotors, meaning that this problem involves steering a function using only a finite number of control inputs.To make this problem more tractable, we instead consider the problem of steering the moments of the density function to match the moments of a desired final density function.In the remainder of this section, an approximation of the moments of a density function ρ(x) are derived in terms of the projection of ρ onto the space spanned by the elements of . Given a projection of ρ onto , as in Eq. 
<ref>, the first moment (mean), m_1 is written as m_1^i = ∫ x^iρ(x) dx = ρ̂^T∫ x^iΨ(x) dxwhere we use the superscript i in the moment to indicate the coordinate index and the subscript indicates the order of the moment being considered.Therefore, the first moment of ρ can be approximated as a linear combination of the means of the dictionary functions in Ψ, weighted by the projection coefficients ρ̂.This is also true for higher order raw moments, whereas higher order central moments become polynomial in ρ̂ due to their dependence on the mean.Since ρ̂ will be treated as the `lifted state' in the control formulation, it is desirable to consider moments which are linear in ρ̂, so for this reason we will work with raw moments in what follows.Here, for the dictionary functions, we use Gaussian radial basis functions of the form ψ_l(x) = exp(-(x-c_l)^T(x-c_l)/2s^2)where c_l is the center of the l^th basis function, and s is a scaling parameter affecting the spread.Computing the integral in Eq. <ref>, in terms of this dictionary, the mean is approximated as m_1^i = 2π s^2∑_l=1^kρ̂_lc^i_lwhere c_l^i is the i^th coordinate of the l^th basis function center. Similarly, the second raw moment can be written as m_2^ij = ∫ x^ix^jρ(x)dx = ρ̂^T∫ x^ix^jΨ(x)dxwhere the last integral reduces to ∫ x^ix^jψ_l(x)dx = 2π s^2(s^2 + (c_l^i)^2) i=j2π s^2c_l^ic_l^j i≠ jfor a given basis function ψ_l(x) where superscripts i and j are coordinate indices.§.§ Finite-time coherent set detectionFor autonomous dynamical systems, methods based on the Perron-Frobenius operator have been used to compute invariant or almost invariant sets of the system <cit.>.This is typically done by studying eigenfunctions of the Perron-Frobenius operator with eigenvalues, λ≈ 1.Such eigenfunctions correspond to invariant or almost invariant densities, which describe groups of states which are left nearly unchanged by the flow of the system. These methods have also been extended to time-varying systems, in which the goal is to identify finite-time coherent sets<cit.>.Such sets are defined as sets in the state-space which are maximally coherent, or minimally dispersive, over a certain finite time interval. That is, they describe sets of states which may be transported as a whole by the flow, but with minimal transport outside of the coherent set or between coherent sets.These methods are also closely related to the Perron-Frobenius operator and are commonly seen as a probabilistic alternative to geometric methods related to the identification of invariant manifolds, dominant material lines, or Lagrangian coherent structures (see Refs. allshouse2015lagrangian,hadjighasem2017critical for a review). Here, we will apply the methods of Ref. williams2015identifying to the time-varying flow field generated by the solution to the optimal control problem to illucidate the flow structures associated with the optimal control.In this section, we will briefly summarize the method for the detection of coherent structures used here and its relation to the finite-dimensional operator approximation defined in the previous section. We assume that dataset is given of m points, {(x_i,y_i)}_i=1^m, where x_i is the position of the i^th particle at the initial time, t_0 and y_i is the position of the particle at a later time t_f. That is, y_i = Φ_t_0^t_f(x_i), where Φ_t_0^t_f is the flow map associated with the non-autonomous system from time t_0 to t_f. 
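As a brief aside before developing the coherent-set computation, the raw-moment formulas of the previous subsection reduce to a few lines of code when the dictionary consists of isotropic Gaussians. The following NumPy sketch assumes a planar state space as in the paper; the function name and array layout are our own choices, offered as an illustration of the moment formulas above rather than as the authors' implementation.

```python
import numpy as np

def raw_moments_from_rbf(rho_hat, centers, s):
    """First and second raw moments of a planar density written as
    rho(x) = Psi(x)^T rho_hat with isotropic Gaussian RBFs of width s.
    rho_hat : (k,) projection coefficients; centers : (k, 2) RBF centers c_l."""
    w = 2.0 * np.pi * s**2                 # integral of each unit-coefficient RBF
    m1 = w * (rho_hat @ centers)           # first raw moment (mean), shape (2,)
    # second raw moments m2[i, j] = int x^i x^j rho(x) dx
    m2 = w * np.einsum('l,li,lj->ij', rho_hat, centers, centers)
    m2 += w * s**2 * rho_hat.sum() * np.eye(2)   # additional s^2 term on the diagonal
    return m1, m2
```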
Given that the data lies in a set X at time t_0 and a set Y at time t_f, our goal is to partition this dataset into two sets, X_1 and X_2 at time t_0 and Y_1 and Y_2 at time t_f, such that points in X_1 are mapped into Y_1 by the flow and points in X_2 are mapped into Y_2. This partition is designed by constructing partition functions f_X and f_Y which partition the space based on their sign. For example, we can define X_1 = {x∈ X|f_X(x) > 0}.Then the problem of identifying coherent sets can be framed as choosing the functions f_X and f_Y to maximize the objective g(f_X,f_Y) = 1/m∑_i = 1^m f_X(x_i)f_Y(y_i)which can be thought of as an approximation of the an inner productg(f_X,f_Y)≈⟨ f_X,_t_0^t_ff_Y ⟩ = ∫_X f_X(x) f_Y(Φ_t_0^t_f(x))dx ≈⟨_t_0^t_ff_X,f_Y⟩= ∫_Y f_X(Φ_t_f^t_0(y))f_Y(y)dy where _t_0^t_f and _t_0^t_f are the Koopman and Perron-Frobenius operators associated with this time-varying flow and Φ_t_f^t_0 = (Φ_t_0^t_f)^-1. Note that this objective is only reasonable if an overall scale is imposed on the magnitude of the functions f_X and f_Y.If we approximate the partition functions f_X and f_Y by their projection onto the space spanned by the dictionary ,f_X(x) ≈Ψ^T(x)a , f_Y(y) ≈Ψ^T(y)then the objective is approximated as g(f_X,f_Y) ≈1/m∑_i = 1^m a^TΨ(x_i)Ψ^T(y_i) = a^TAwhere A = 1/mΨ_XΨ_Y^T.If we impose a scale by requiring that a^Ta = ^T = 1, then this maximization can be solved by singular value decomposition, with the optimal a andgiven by left and right singular vectors, respectively, as shown in Refs. froyland2010transport,williams2015identifying.This problem can be solved trivially by choosing f_X to be uniform over X and choosing f_Y to be uniform over Y – this solution typically corresponds to the singular vector associatedwith the largest singular value. Therefore, the singular vectors associated with the 2nd largest singular value give the optimal non-trivial solution, which divides the domain into partitions of roughly equal size <cit.>. § CONTROL FORMULATIONIn Sec. <ref>, it was shown that the problem of steering a density ρ to a desired final density can be expressed as an output tracking problem on a lifted, bilinear system given by Eq. <ref>, where the projection coefficients ρ̂ can be interpreted as a lifted state.Then, if the first and second raw moments are taken to be the relevant output, y = [m_1^1m_1^2 m_2^11 m_2^22 m_2^12 ] ^Tthis can be expressed linearly in the lifted state, y = Cρ̂, where the elements of the output matrix C are given by rewriting Eqs. <ref>, <ref> in matrix form. For the optimal output tracking problem, we consider a discrete time optimal control problemmin_u_1,u_2,…,u_H-1 ∑_t=1^H-1l(_t,u_t) + l_H(_H) s.t. ρ̂_t+1 = F(ρ̂_t,u_t)y_t = C_t where H is the number of timesteps in the time horizon and Eq. <ref> represents the discrete time version of Eq. <ref>.In particular, for output tracking, we consider in-horizon and terminal cost functions l and l_H of the following quadratic forms l(_t,u_t)= (y_t - y^ref_t)^TS(y_t - y^ref_t) + u_t^TRu_tl_H(_H)= (y_H - y^ref_H)^TS_H(y_H - y^ref_H) where S, R, and S_H are weighting matrices which define the penalty weight on tracking error, control effort, and error in the terminal state, respectively. 
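To make this formulation concrete, the sketch below shows one way the lifted dynamics and the quadratic tracking cost could be evaluated in code. The explicit-Euler discretization of the lifted bilinear dynamics, together with all function and variable names, is our own assumption rather than the authors' implementation; the output matrix C is assumed to be assembled from the raw-moment expressions of the previous section.

```python
import numpy as np

def lifted_step(rho_hat, u, L0, B_list, dt):
    """One explicit-Euler step of d(rho_hat)/dt = (L0 + sum_i u_i B_i) rho_hat,
    i.e. a simple choice for the discrete-time map rho_{t+1} = F(rho_t, u_t)."""
    A = L0 + sum(u_i * B_i for u_i, B_i in zip(u, B_list))
    return rho_hat + dt * (A @ rho_hat)

def stage_cost(rho_hat, u, C, y_ref, S, R):
    """In-horizon cost l(rho_hat, u): quadratic penalty on the moment-tracking
    error y - y_ref with y = C @ rho_hat, plus a quadratic control penalty."""
    e = C @ rho_hat - y_ref
    return float(e @ S @ e + u @ R @ u)

def terminal_cost(rho_hat, C, y_ref, S_H):
    """Terminal cost l_H penalizing the error in the final moments."""
    e = C @ rho_hat - y_ref
    return float(e @ S_H @ e)
```

For the drift-free rotor flows considered below, L0 can simply be taken as the zero matrix.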
Since the output y is linear in the lifted state _t, this cost can be rewritten as a quadratic cost in terms of _t, with an added linear term.It is well known that for optimal control problems on bilinear systems with quadratic cost, an effective way of solving the problem is by iteratively linearizing and solving a finite time linear quadratic regulator (LQR) problem about a nominal trajectory, utilizing the Ricatti formulation of that problem <cit.>.For this reason, we solve the optimal control problem using differential dynamic programming (DDP) <cit.>, which is closely related to the method of iterative LQR.We briefly recount the primary steps of this algorithm below.DDP computes a locally optimal control around a nominal trajectory by minimizing a quadratic approximation of the value function along this trajectory, and then doing this iteratively about the new trajectories obtained by applying the locally optimal control.First define the value function V(_t,t) at time t as,V(_t,t) = min_u_t [ l(_t,u_t) + V(_t+1,t+1)]which expresses the optimal cost-to-go from _t, where V(_H,H) = l_f(_H). Denote by Q(δ, δ u) the change in the value function due to applying change in control input δ u about the nominal trajectory and consider its quadratic approximation Q(δ, δ u) ≈Q_δ + Q_u^Tδ u + δ^T Q_ uδ u+ 1/2δ^TQ_δ + 1/2δ u^T Q_uuδ uwhere these derivatives are given by Q_ = l_+ F_^T V_'Q_u = l_u+ F_u^T V_'Q_ = l_+ F_^T V_'F_ + V_'· F_Q_uu = l_uu+ F_u^T V_'F_u + V_'· F_uuQ_ u = l_ u+ F_^T V_'F_u + V_'· F_ uwhere the notation (·)' indicates the next time step. The algorithm proceeds by computing these derivatives by recursing backward in time along the nominal trajectory from the end of the horizon. At each iteration, the control policy is improved by optimizing this quadratic expansion with respect to δ uδ u^* = min_δ uQ(δ,δ u) = -Q_uu^-1(Q_u + Q_u δ)This can be seen as providing a descent direction in the space of control policies.An updated nominal control is then computed by a line search over a stepsize parameter α to update the policy, that is u_new = u - α Q_uu^-1Q_u - Q_uu^-1Q_u δ and this new control is applied to obtain a new nominal trajectory, and this procedure is iterated until the relative change in cost falls to less than a specified tolerance. For full details of the algorithm, the reader should refer to Refs. <cit.>.§ TRANSPORT BY ROTORS IN FREE SPACETo describe the fluid flow produced by a microscale rotor, we employ a model of a point torque in a two dimensional Stokes flow.Mathematically, this flow is described by a rotlet <cit.>, whose stream function is given by ψ() = -γlog | - _r|where = (x,y) is the position a point in the fluid, _r = (x_r,y_r) is the position of the rotlet, and γ is the strength of the rotlet.Physically, γ describes the magnitude of the point torque or the angular velocity of the rotor. The linearity of Stokes flows allows for the velocity fields produced by multiple rotlets to be determined by superposition of the velocity field produced by each rotlet individually.Therefore, for n_r rotors, the resulting fluid flowresults in the following fluid velocity field: 𝐮() = -∑_i=1^n_r(γ_ik̂× - _i /r_i^2)where _i is the location of the i-th rotlet and r_i = |x - x_i|.Clearly, this results in a flow with a singularity at _r, circular streamlines around the singularity with counterclockwise flow for positive γ, and a fluid velocity that decays as r^-2 going away from the rotor. 
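Before turning to the transport results, the rotlet model above is straightforward to evaluate numerically. The following NumPy sketch (our own naming and layout) superposes the fields of several fixed rotlets; the sign convention is chosen so that a positive strength produces the counterclockwise circulation described above, and the field is singular at the rotor positions themselves.

```python
import numpy as np

def rotlet_velocity(x, y, rotor_pos, gamma):
    """Velocity (u, v) at points (x, y) induced by fixed rotlets in an unbounded
    Stokes flow, obtained from the stream function psi = -gamma * log|x - x_r|.
    rotor_pos : (n_r, 2) rotlet positions; gamma : (n_r,) rotlet strengths."""
    u = np.zeros_like(x, dtype=float)
    v = np.zeros_like(x, dtype=float)
    for (xr, yr), g in zip(rotor_pos, gamma):
        r2 = (x - xr) ** 2 + (y - yr) ** 2   # squared distance to the rotor (singular at r2 = 0)
        u += -g * (y - yr) / r2
        v += g * (x - xr) / r2
    return u, v

# Example: two fixed rotors on the x-axis at (-1, 0) and (1, 0).
# u, v = rotlet_velocity(X, Y, np.array([[-1.0, 0.0], [1.0, 0.0]]),
#                        np.array([gamma_L, gamma_R]))
```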
Here, we consider the case of rotors fixed in place on the x-axis at (-1,0) and (1,0), respectively, denoting their strengths by γ_L and γ_R for left and right.We consider a problem of manipulating a collection of fluid particles initially distributed at time t=0 according to a normal distribution, with mean m_1(0) = (1,1) and covariance Σ(0) = 0.025I_2, where I_2 is the 2×2 identity matrix. That is, ρ(x(0)) = ((1,1),0.025 I_2).From this initial fluid particle distribution, we seek a sequence of rotor strengths over a timespan of 5 time units to drive the fluid particles to a final distribution with a mean of m_1(5) = (-1,-1), while minimizing the variance.For this, we use the relationship between the second raw moment and the variancem_2^ij = σ^ij + m_1^im_1^jwhere the σ^ij are the elements of the covariance matrix Σ, to convert the desired final variance to a desired second moment. With this, a rotor control sequence is found by solving the optimization problem as in Eq. <ref> using the DDP scheme described in Sec. <ref>.In solving this, the cost function weights are chosen to be S = 0.1I_5, R = I_2, and S_H = 10^3I_5. That is, the error in the moments is penalized very little from t = 0 until t=5, with a large penalty placed on the moments at t=5.This choice allows the optimizer the flexibility to steer the distribution in a way that may temporarily increase the error if it results in a lower error in the moments at t = 5. For the computation of the Liouville operators for this case, data is collected by simulating a grid of 2500 initial conditions, evenly spaced over [-2,2]^2 forward for time interval Δ t = 0.005.The Perron-Frobenius operators are computed using Eq. <ref> with this trajectory data and a 25×25 grid of Gaussian radial basis functions with centers evenly spaced over the same domain, excluding small radii around the rotors.The Liouville operators are obtained from this using Eq. <ref> as described in Sec. <ref>. Fig. <ref> shows the effect of the rotor control on the motion of a distribution of 10^4 fluid particles sampled according to the initial density and displayed as a histogram approximation of the density.The rotor positions are indicated by the circle-cross and the position of the target mean is shown by the green circle.The white-filled circle indicates the mean of this sample, while the black-filled circle indicates the mean as predicted using the Liouville operator.The streamlines in the figure indicate direction of the fluid velocity field produced by the rotors at the indicated time instant.Fig. <ref> shows the rotor strengths γ_L and γ_R selected by the DDP algorithm. For the first 1.25 seconds, the rightmost rotor has a positive strength of near 1 to generate a counterclockwise flow, pulling the distribution of particles toward the origin, while the leftmost rotor has a low strength near zero. As the distribution nears the origin, the strength of the right rotor decreases, while the magnitude of the strength of the left rotor increases to generate a clockwise flow, which pulls the distribution toward the target mean. Also shown in Fig. <ref> are plots of the elements of the first and second moment over time as computed from the sample shown in Fig. <ref> (labelled `True') and as predicted using the finite approximation of the Liouville operators (labelled `Predicted').§.§ Finite time coherent sets With the optimal control determined, Eq. <ref> gives a nonautonomous dynamical system.We can then apply the methods outlined in Sec. 
<ref> to this systemto identify coherent sets to better understand the underlying structure of the flow field produced by the optimal control. For this computation, we use a dataset of 10^4 data pairs, initially spaced on a uniform grid over [-2,2]^2.For the basis, we use a set of 2501 basis functions consisting of Gaussian radial basis functions uniformly spaced on a 50×50 grid over [-2,2]^2 and the constant function, ψ = 1. Fig. <ref> shows the time evolution of the data set, with the points colored according to the partition function f_X as approximated by the 2nd left singular vector of A = 1/mΨ_XΨ_Y^T.Also shown is the evolution of the f_X=0 contour, which approximates the barrier between the coherent sets, as depicted by the black line.Finally, level sets and mean of the density function, ρ(x(t)), as approximated from a sample of 10^4 points from the initial density, are shown by the purple contours and purple markers, respectively.The level sets shown correspond to values of the initial density at one and two standard deviations from the mean, respectively. To quantify the coherence of the sets identified by the partition functions f_X and f_Y, we use a modification of the objective in Eq. <ref>, which only considers the sign of f_X and f_Y, g̅(f_X,f_Y) = 1/m∑_i = 1^m (f_X(x_i))(f_Y(y_i))which effectively gives the fraction of the data points which are classified correctly by the partition functions (for which the partition functions do not change sign from initial to final time).For the case shown in Fig. <ref>, we have g̅ = 0.9904. This computation of the coherent sets shows that the flow field generated by the optimal control is such that a transport barrier is formed over the 5s time interval, with the barrier passing through the particle distribution at the initial time and connecting to the target location at the final time.Many previous works <cit.> have studied the relationship between the optimal control problem of steering a particle efficiently in an unsteady flow and the coherent structures associated with that flow.Typically in these studies, the problem being considered is motivated by the efficient navigation of an underwater vehicle to a target in an unsteady ocean flow. For this reason, the control input is usually taken to be a propulsive velocity which is added to the unsteady flow field, as could be generated by a thruster onboard an underwater vehicle, and coherent structures associated with the unsteady flow are used to identify efficient routes. Our work takes a different perspective, where instead of controlling individual particles in a given unsteady flow field, we solve an optimal control problem to determine the optimal time-varyingflow field to steer the initial particle distribution to the target, where the unsteady flow field is constrained to be a superposition of flow fields produced by the two rotors at each time instant.In previous works <cit.>, it was seen that the optimal routes of an underwater vehicle tend to follow the coherent structures which guide the particle towards the target for energy-optimal navigation.Here we see that the optimal flow field produces a flow structure which guides the distribution of particles from the initial condition to the target, as shown in Fig. 
<ref>.This sort of flow structure seems to be typical of the optimal control solutions in this setting.To verify this, we solve the control problem with the same parameters but with an initial density centered about (-0.5,1).That is, the initial density is ρ(x(0)) = ((-0.5,1),0.025I_2).Fig. <ref> shows the 10^4 data points colored according to the 3rd left singular vector of A. With both the initial distribution and the target in the left half of the domain, the second left singular vector simply divides the domain roughly into its left and right halves. However, for this case the third singular vector shows a partition which indicates a coherent structure that extends from the initial blob location at the initial time (see Fig. <ref> (a)) to the target at the final time (see Fig. <ref> (b)).Evaluating the objective in Eq. <ref> for this case, we have that g̅(f_X^2,f_Y^2) = 0.9978 and g̅(f_X^3,f_Y^3) = 0.9928 where f_X^2,f_Y^2 and f_X^3, f_Y^3 refer to the partition functions given by the second and third singular vectors, respectively. § TRANSPORT BY ROTORS NEAR AN INFINITE PLANE WALL For the case of a rotlet located at a point _r above and infinite plane wall at y=w, the fluid flow must satisfy the additional boundary conditions of no slip and no penetration at the plane wall. The stream function associated with this flow is given by <cit.> ψ_w()= γ(-log|-_r| + log| - _| - 2 (y- w)( y-y_)/r_^2)where _ = (x_,y_) = (x_r, 2w-y_r) is the location of an image singularity which has the effect of making the fluid velocity vanish at the plane wall.Similarly, r_ = | - _|.Therefore, the flow produced for this case is 𝐮_w = (u_w,v_w) where u_w= γ(-y-y_r/r^2 - y-y_/r_^2 - 2(y-w)/r_^2 + 4(y-w)(y-y_)^2/r_^4) v_w= γ(x-x_r/r^2 - x-x_r/r_^2 - 4(y-w)(x-x_)(y-y_)/r_^4) and the velocity field for multiple rotlets above a plane wall can be found by summing the individual velocity fields as in Eq. <ref>. With these governing equations for the flow produced by microrotors in the presence of a plane wall, we consider a similar transport problem to the one considered in Sec. <ref> in order to examine the boundary effects of the plane wall on the transport problem. As in Sec. <ref>, the same rotor positions of (-1, 0) and (1,0), initial density of ρ(x(0)) = ((1,1),0.025 I_2), timespan of 5 units, target moments, and cost function are considered. The Liouville operators are computed using trajectory data from the same grid of initial conditions and basis functions positioned on the same grid as in Sec. <ref>, but with any points in these grids lying outside of the fluid domain (below the plane wall) neglected. Fig. <ref> shows the resulting flow field and its effect on the motion of the particle distribution for the case of a plane wall located at w = -1.25 from the control computed using the DDP algorithm.From this figure, it is clear that the effect of the wall is to stretch the particle distribution along the wall due to the vanishing fluid velocity at the wall.Due to this effect, the control tends to pull the distribution to the left in the early stages of the trajectory using a larger positive (counterclockwise) strength of the left rotor than in the free space case.Related to this, in the middle stages of the trajectory, a larger positive strength of the right rotor is needed to supplement effects of the left rotor, as compared to the free space case.These effects can also be clearly seen in Fig. <ref> (b), which shows a time sequence of the rotor strengths for this problem for varying wall locations. Fig. 
<ref> (a) shows an overlay of the final particle distribution at t = 5 for the same wall locations.From this figure, it can be seen that effect of the wall is to elongate the distribution more for cases where the wall is closer to the target mean location.Fig. <ref> (c) shows a comparison of the optimal cost found from the DDP algorithm at varying wall locations, which demonstrates that the cost increases significantly as the wall nears the target mean position. This is due to both to the increased control effort (rotor strength) needed to steer the distribution as well as well as greater error in the moments due to the stretching effect of the wall.Fig. <ref> shows the coherent sets for the case shown in Fig. <ref> at the initial and final times.As in the free space case, the optimal control forms a coherent structure which passes near to the initial blob location at the initial time and extends toward the target at the final time. § TRANSPORT BY ROTORS WITHIN A CIRCULAR BOUNDARYFor the case of a rotlet positioned at a point _r inside of a circular boundary of radius, a, centered about the origin, again the no-slip and no penetration boundary conditions must be satisfied by the flow at the boundary, and again, these can be satisfied by modifying the stream function to include image terms to cancel out the flow at the wall.The stream function satisfying these conditions can be shown to be <cit.>ψ_c = γ(-log| -_r| + log|-_| + logR_r/a- 1/2(R^2-a^2/x_^2)(a^2/R_r-R^2/a^2) )where R = || and R_r = |_r| are the radial distances from the center of the circle to the evalutation point and to the rotlet, respectively, and _ = a^2/R_r^2_r is the location of the image system.That is, the image is located outside of the circular boundary at a point along the line between the center of the circle and the rotlet at a radial distance of a^2/R_r from the center of the circle. Then the flow field for this case is given by 𝐮_c = (u_c,v_c) whereu_c= γ(-y-y_r/r^2 + y-y_/r_^2 + (a^2/R_r^2 - 2R^2/a^2+1) y/r_^2 - (y-y_)(R^2-a^2)(a^2/R_r-R^2/a^2)/r_^4)v_c= γ(x-x_r/r^2 - x-x_/r_^2 - (a^2/R_r^2 - 2R^2/a^2+1) x/r_^2+ (x-x_)(R^2-a^2)(a^2/R_r-R^2/a^2)/r_^4) .With these governing equations for the flow produced by microrotors within a circular boundary, we consider the same transport problem considered in previous cases in order to examine the boundary effects of the circular boundary on the transport problem. As before, the rotor positions of (-1, 0) and (1,0), initial density of ρ(x(0)) = ((1,1),0.025 I_2), timespan of 5 units, the same target moments and cost function are considered. The Liouville operators are computed using trajectory data from the same grid of initial conditions and basis functions positioned on the same grid as in Sec. <ref>, but with any points in these grids lying outside of the fluid domain (beyond the circular boundary) neglected. Fig. <ref> shows the resulting flow field from the control and its effect on the motion of the particle distribution for the case of the two rotors within a circular boundary of radius a = 2.5. Similarly to the case next to a plane wall, the reduced fluid velocity near the circular boundary leads to a stretching effect on the distribution, especially when a significant part of the particle distribution lies in regions near to the boundary. 
Since this is encountered at the initial condition, significantly more particles remain in the upper, trailing `tail' of the distribution due to the drag effects of the boundary in the upper right quadrant.This effect becomes more apparent for smaller boundary radius.Due to this effect, more control effort must be exerted by the rotors in the early stages of the trajectory to overcome this drag. A secondary effect of this is that the leading tail of the distribution, which consists of particles closer to the interior of the circle and further from the boundary, tends to stretch more, leading it to wrap around the rightmost rotor in the later stages of the trajectory in a way that was not seen in the previous cases.These qualitative differences are highlighted in Fig. <ref> (a), which shows an overlay of the final particle distribution at t = 5 for the same boundary radius. Fig. <ref> (b) shows a time sequence of the rotor strengths for this problem for varying wall locations. Fig. <ref> shows the coherent sets for the case shown in Fig. <ref> at the initial and final times.As in the previous cases, the optimal control produces a flow field a coherent structure which passes near to the initial blob location at the initial time and extends toward the target at the final time. § TRANSPORT OF TWO DENSITIES We now return to the case of two micro-rotors in free space to consider the problem of manipulating two distinct distributions of fluid particles to a common target mean and second moment.This requires a reformulation of the optimal control problem as posed in Eq. <ref>. In that formulation, the state of the control problem was taken to be the vector of projection coefficients ρ̂.Here we consider an augmented state containing the projection coefficients of the two density functions. Denoting these two density functions as ρ^A and ρ^B, and their corresponding projection coefficients by ^A and ^B, the augmented state for this case is [(^A)^T, (^B)^T]^T. Similarly, we consider an output vector which concatenates the first and second moments for the two density functions y = [(m^A)^T , (m^B)^T]^T, where m^A and m^B are vectors containing the moments of the densities ρ^A and ρ^B respectively, as in Eq. <ref>.The same Liouville operators are used to propagate each of these densities forward in time.From this point, an appropriate cost function can be specified and the rotor control can be optimized using the DDP scheme as before.With this formulation, we consider the problem of manipulating two densities using two rotors fixed at the same locations as before, (-1,0) and (1,0).We take the initial density for one of the distributions to be the same as the previous examples, ρ^A(x(0)) =((1,1),0.025 I_2), and consider a second distribution starting from an initial density of ρ^B(x(0)) = ((b,1),0.025 I_2), where b is a parameter to be varied.This formulation allows us to examine the ability to steer two distributions starting from varying initial distances apart. We consider the problem of choosing the rotor strengths to steer both of these distributions to a final distribution with a mean of m_1^A(5) = m_1^B(5) = (-1,-1), while minimizing the variances. For this problem, the cost function is taken to be of the same form as Eq. <ref> with the weights chosen to be S = 0.1I_10, R = I_2, and S_H = 500I_10.That is, the terminal cost is chosen to be half that of the previous cases since it is being applied to the error in the moments of two density functions and summed. Fig. 
<ref> shows snapshots from the evolution of the particle distributions for the flow induced by the rotors controlled using the strengths determined from the DDP algorithm for four different initial distributions ρ^B with the initial x-coordinate of the mean being b = 0.5, b = 0.0, b=-0.5, and b=-1.0, respectively, on the rows. It can be seen that the flow produced in the case where b = 0.5 is qualitatively similar to the case of controlling the density ρ^A alone, as was considered in Sec. <ref>.For the next two cases of b = 0 and b = -0.5, we see that as the initial distribution of ρ^B starts farther from the initial distribution of ρ^A, a higher negative spin is applied by the left rotor in the early stages of the trajectory, producing a flow that is more symmetric as the blobs are pulled toward the middle, but with a similar flow near the end of the trajectory as the distributions near the target.In the last case shown, where b = -1.0, it appears that a transition has occurred and a qualitatively different optimal trajectory is found in which the leftmost distribution ρ^B is stirred counterclockwise around the left rotor rather than through the region between the rotors.This is done by a positive torque applied from the left rotor, which also results in the rightmost blob ρ^A being pulled to a position above the left rotor.As a result of this, at the end of the sequence, the rightmost rotor generates a counterclockwise flow which pushes the two distributions down toward the target.This is in contrast to the other cases, where the right rotor generates a clockwise flow near the end in order to steer the particles from right to left toward the target. These effects can also be seen by examining the rotor strengths directly, as shown in Fig. <ref>.Fig. <ref> shows the coherent sets at the initial and final time for the cases shown in Fig. <ref> where two distributions are to be steered to the common target.In the first three cases considered, the coherent structure which divides the coherent sets at the initial time passes through the regions of high concentration of both initial distributions.At the final time, this structure moves toward the target, effectively pulling both distributions toward the goal.§ CONCLUSIONA promising new approach has been developed and demonstrated for computing the optimal control to transport a distribution of states whose dynamics are governed by a control affine system to a desired final state distribution in a fixed, finite time.We demonstrate the usefulness of this method by highlighting a fluid mechanical application, in which the relevant state is the position of a fluid particle,the distribution describes a blob of fluid particles, and the controls are the torques applied by a pair of fixed rotors, which stir the flow in circular patterns.In this setting, we used the proposed approach to analyze the effects of fixed boundaries on the transport problem. We believe that such control strategies will be very useful in applications, such as targeted drug delivery, particle manipulation, and cell sorting in which the relevant transport problem is not to mix the fluid, but to transport a concentrated distribution of particles in a controlled way to a desired location. 
In future works, we plan to study similar transport problems in which the flow is generated by non-stationary stirrers, such as a moving rotors or microswimming robots<cit.>, or by boundary controls.Other interesting use case of the work presented here could be using this algorithm to optimize rotor placement for a given task.This application could be especially relevant for the design of microfluidic devices where fluid transport is critical.While it was demonstrated on and motivated by problems in the fluids setting, we believe that the proposed approach can have much broader application in control systems, where the density of states can be taken to represent an uncertainty distribution <cit.>. Other exciting extensions of this work could include understanding the relationship between this method and the formation, motion, and manipulation of transport barriers in a flow field. | http://arxiv.org/abs/2310.17832v1 | {
"authors": [
"Jake Buzhardt",
"Phanindra Tallapragada"
],
"categories": [
"physics.flu-dyn",
"cs.SY",
"eess.SY",
"math.DS",
"math.OC"
],
"primary_category": "physics.flu-dyn",
"published": "20231027010137",
"title": "Controlled density transport by microrotors in a Stokes flow using linear transfer operators"
} |
Learning to Recognize Occluded and Small Objects with Partial Inputs Hasib Zunair, CIISE, Concordia University, Montreal, QC, [email protected] A. Ben Hamza, CIISE, Concordia University, Montreal, QC, [email protected] January 14, 2024 ========================================================================================================================================================================================== Recognizing multiple objects in an image is challenging due to occlusions, and becomes even more so when the objects are small. While promising, existing multi-label image recognition models do not explicitly learn context-based representations, and hence struggle to correctly recognize small and occluded objects. Intuitively, recognizing occluded objects requires knowledge of partial input, and hence context. Motivated by this intuition, we propose Masked Supervised Learning (MSL), a single-stage, model-agnostic learning paradigm for multi-label image recognition. The key idea is to learn context-based representations using a masked branch and to model label co-occurrence using label consistency. Experimental results demonstrate the simplicity, applicability and, more importantly, the competitive performance of MSL against previous state-of-the-art methods on standard multi-label image recognition benchmarks. In addition, we show that MSL is robust to random masking and demonstrate its effectiveness in recognizing non-masked objects. Code and pretrained models are available on GitHub at https://github.com/hasibzunair/msl-recognition. § INTRODUCTION Multi-label image recognition (MLIR) is a fundamental and challenging task in a variety of computer vision applications such as automatic tagging of images on social media platforms and object detection in autonomous vehicles <cit.>. The aim is to recognize multiple objects or attributes in an image. A major challenge in MLIR is how to effectively tackle the issue of large variations in the size and spatial locations of objects. This issue becomes more pronounced when the objects are occluded and small. Recent MLIR approaches, including graph convolutional networks and their variants <cit.>, focus primarily on capturing semantics and label co-occurrence among objects. While powerful, most of these methods require the combination of multiple networks, resulting in high computation cost. Also, methods that deal with both semantics of objects and label relations often consist of multiple stages of training <cit.>, rely on large language models <cit.>, and operate on high input resolution <cit.>. Moreover, they require additional data for pretraining <cit.>, even with models already pretrained on large datasets such as ImageNet-1k and ImageNet-21k, and also rely on complex data augmentation strategies <cit.>. In addition, these methods do not explicitly address the occlusion problem and fail to accurately recognize small objects, leading to suboptimal performance on images containing small and occluded objects. In practical real-world applications of MLIR such as object detection in self-driving cars, images are usually comprised of multiple objects of different sizes (e.g., small) and shapes that co-exist and are densely cluttered (e.g., occluded), and hence it is of vital importance to develop MLIR approaches that can effectively recognize small objects even under heavy occlusions. Intuitively, we can consider occluded objects as partial inputs, and hence accurate recognition requires knowledge of partial inputs, and hence context.
Motivated by this intuition, we propose Masked Supervised Learning (MSL), a single-stage, model-agnostic learning paradigm for MLIR tasks. Given a base recognition network, MSL uses a masked branch to predict the labels for a heavily masked version of the input image, which is a good cue for learning context-based representations. We also propose to use label consistency to model label co-occurrence by maximizing the similarity between the predictions from the recognition and masked branches. The main contributions of this work can be summarized as follows: * We propose a simple yet effective single-stage, model-agnostic learning paradigm that aims to learn context-based representations and to better model label co-occurrence from partial inputs via masking.* We demonstrate through experimental results and ablations that MSL yields competitive performance in comparison with single- and multi-stage approaches, especially for small and occluded objects.* We show that MSL is not only robust to partial inputs, but also predicts objects that are almost entirely masked, while yielding improved recognition of non-masked objects.§ RELATED WORKHybrid Methods.These methods leverage a combination of convolutional, graph, transformer or recurrent neural networks <cit.>. Graph based networks, for instance, leverage semantic relations between object classes <cit.>, but tend to incur heavy computation costs. ADD-GCN <cit.> dynamically generates graphs for an image by first generating a category-aware representation, followed by modeling the relationship between the representations. ADD-GCN <cit.> operates on high resolution in the same vein as SSGRL <cit.>, C-Tran <cit.> and MCAR <cit.>. KGGR <cit.> operates on knowledge graphs and requires additional data for pretraining. Our method does not require the combination multiple networks, high input resolution, or additional data.Model-Agnostic Methods.This class of approaches are not architecture dependent, and include ASL <cit.> and CSRA <cit.>, which can be applied to any architecture, but require an exhaustive hyperparameter tuning. Moreover, they achieve competitive results only when using complex data augmentation techniques such as CutMix, GPU Augmentations, or RandAugment <cit.>. By comparison, our proposed model achieves state-of-the-art performance without relying on complex data augmentation strategies.Multistage and Bimodal Frameworks.Query2Label (Q2L) <cit.> is a two-stage framework that focuses on class-specific attention. KSS-Net <cit.> is a knowledge distillation based method comprised of a two-stage training scheme with teacher and student models. BMML <cit.> is a bimodal learning approach that not only uses a convolutional neural network and a recurrent neural network, but also relies on large language models <cit.> and additional data. Our work differs from these frameworks in that it does not require multiple stages of training and also does not rely on large language models.Transformer-Based Methods.TDRG <cit.> consists of convolutional neural network, a transformer, as well as a graph neural network that is used to capture long-term contextual information and to build position-wise relationships at different scales. C-Tran <cit.> is a transformer based method that relies on an additional image feature extractor and high input resolution. It exploits the dependencies among both visual features and labels using a single transformer encoder. By comparison, our work is significantly different from C-Tran. 
First, during training we mask images, whereas C-Tran masks labels. Second, the input to the transformer encoder in C-Tran consists of an image and a masked label (i.e., token), whereas our model requires only an image. Also, our method is model-agnostic and can be applied to any kind of network for MLIR tasks. Overall, our work differs from previous MLIR approaches in that we propose a simple yet effective single-stage learning paradigm that is model-agnostic. Most notably, our model does not require multiple stages of training, the combination of multiple networks, large language models, high input resolution, complex data augmentation strategies, or additional data for pretraining.
§ MASKED SUPERVISED LEARNING
In this section, we begin by formulating the task at hand and subsequently introduce the fundamental components that make up the proposed MSL paradigm. The overall framework of MSL is depicted in Figure <ref>.
Problem Statement. Let 𝒟={(I_i,y_i)}_i=1^N be a training set of N labeled images I_i∈𝒳 and their ground-truth multi-label vectors y_i=(y_i,1,…,y_i,K)^⊤∈𝒴={0,1}^K, with y_i,k=1 indicating the presence of the k-th label (i.e., object or attribute) in the image, and y_i,k=0 indicating its absence. In other words, each image I_i is associated with multiple labels chosen from a set of K possible classes (i.e., object categories). The task of multi-label image recognition is to learn a multi-label recognition model f_θ: 𝒳→𝒴, where θ is a set of learnable parameters. Given a test image I, the trained model predicts the corresponding multi-label vector y_p=σ(f_θ(I)), where σ(·) is the sigmoid activation function applied element-wise.
§.§ Masked Inputs
For masked image generation, we leverage the Irregular Mask dataset <cit.>, which is commonly used in image inpainting <cit.> and is comprised of roughly 20,000 masks with random streaks and holes of arbitrary shapes. From this dataset, we generate low and high mask subsets, each of which is comprised of 1000 samples. The process for creating these two subsets is as follows: for a given mask sampled from the Irregular Mask dataset, we first compute the percentage p of zero pixels in the mask. If p is greater than 50%, then the mask is included in the high mask subset. Otherwise, the mask is placed in the low mask subset. In our experiments, we find that high masks generally improve performance. Intuitively, image masking can be viewed as “simulating” images with partial inputs. During training, we randomly sample a mask from the high mask subset and perform binary thresholding, so that the pixel values are either 0 or 1, and we denote this mask by M_holes. Then, we follow the masking procedure in <cit.> to create a masked image I_masked = I⊙M_holes, where I is the input image, and ⊙ denotes element-wise multiplication. The masked image has a similar layout to the input image, but with roughly 50% of pixels randomly removed (a short sketch of this step is given below).
§.§ Masked Branch
The goal of the Masked Branch (MaBr) is to explicitly learn context-based representations, as this branch is tasked with predicting the labels of heavily masked inputs (i.e., partial inputs), translating into better multi-label predictions. The masked branch has the ability to learn short-range context even when objects in the image are densely cluttered. 
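For concreteness, the masking step described in the previous subsection can be sketched as follows. This is a minimal illustration under our own assumptions (toy random masks standing in for the Irregular Mask dataset, a 0.5 binarization threshold, and NumPy-style arrays), not the authors' released implementation.

import numpy as np

def split_mask_subsets(masks):
    # Partition masks into the 'low' and 'high' subsets by their fraction of zero (hole) pixels.
    low, high = [], []
    for m in masks:
        zero_frac = float(np.mean(m < 0.5))
        (high if zero_frac > 0.5 else low).append(m)
    return low, high

def make_masked_image(image, mask, threshold=0.5):
    # Binarize the mask to {0, 1} and apply it element-wise: I_masked = I * M_holes.
    m_holes = (mask >= threshold).astype(image.dtype)
    return image * m_holes[..., None]  # broadcast over the channel axis of an HxWxC image

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_masks = [(rng.random((448, 448)) > rng.uniform(0.3, 0.7)).astype(np.float32) for _ in range(20)]
    low, high = split_mask_subsets(toy_masks)
    image = rng.random((448, 448, 3)).astype(np.float32)
    masked = make_masked_image(image, (high or toy_masks)[0])
    print(len(low), len(high), masked.shape)

In practice the sampled mask would typically be resized to the input resolution (here 448 × 448) before the element-wise product, and a fresh mask is sampled for each training image.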
Returning to the masked branch: besides short-range context, it can also learn long-range context when objects are more spaced apart. Given an input image I and its masked version I_masked, we train a base recognition network f_θ to predict both the output y_p of the image recognition branch and the output y_mp of the masked branch. Here, f_θ follows a Siamese-like architecture <cit.>, where the branches are identical and share weights. We train f_θ by minimizing the following combined loss function of the recognition branch and the masked branch:
ℒ_inter = ℒ_rcg (y_p, y_gt) + ℒ_MaBr (y_mp, y_gt),
where ℒ_rcg and ℒ_MaBr are binary cross-entropy losses between the ground truth and the outputs of the recognition and masked branch, respectively. Application-specific loss functions can also be used in lieu of cross-entropy.
§.§ Label Consistency
As objects generally co-exist in an image (e.g., a chair is more likely to co-occur with a table than with a sports ball), it is of vital importance to model this label co-occurrence to help improve the recognition performance. To capture this label-level information, we propose to use Label Consistency (LaCo), which maximizes the similarity between the predictions from the recognition and masked branches. Since we use a Siamese-style architecture, where the network is the same with shared weights, maximizing the similarity between the two predictions helps the network learn to predict heavily occluded objects (i.e., partial inputs) from the presence of other target objects, thereby effectively utilizing the masked branch. More specifically, we maximize the similarity between the predictions from the recognition branch y_p and the masked branch y_mp by minimizing the L_2-loss ℒ_LaCo=‖y_p-y_mp‖^2.
§.§ Overall Loss Function
Using the recognition branch, masked branch and label consistency, we define the overall loss function for the proposed MSL model as follows:
ℒ_total= α_1ℒ_rcg (y_p, y_gt) + α_2ℒ_MaBr (y_mp, y_gt) + α_3ℒ_LaCo (y_p, y_mp),
where the scalars α_1, α_2 and α_3 are nonnegative trade-off hyperparameters, which control the contribution of each loss term. During training, ℒ_total is minimized for several epochs using stochastic gradient descent to learn the parameters of f_θ on a labeled training set. For inference, the trained network f_θ is used in multi-label image recognition to obtain multi-label predictions given a test image I. Hence, MSL is simple in structure (i.e., model-agnostic) and easy to implement (i.e., single-stage training). When α_1=1 and α_2=α_3=0, we obtain the recognition loss, which is basically the loss function of the vanilla network.
§ EXPERIMENTS
In this section, we demonstrate the performance of MSL in comparison with state-of-the-art methods. Details on the implementation, architecture and training, as well as additional results, are included in the supplementary material.
§.§ Experimental Setup
Datasets. We conduct experiments on two MLIR benchmarks: VOC2007 <cit.> and MS-COCO <cit.>.
* VOC2007. This is a widely-used dataset for MLIR tasks, and is comprised of 9,963 images with 20 classes, where the train-val set has 5,011 images and the test set has 4,952 images. Following previous work <cit.>, we use the train-val set for training and the test set for testing. We also set the input resolution to 448× 448, unless otherwise specified.
* MS-COCO. This is a standard benchmark for training and evaluating image recognition, segmentation, and detection algorithms. 
In our experiments, we use COCO-2014, which consists of 82,081 and 40,137 training and validation images, respectively, with 80 different classes. For fair comparison with previous work <cit.>, we use the same training and evaluation procedures, and evaluation metrics. Baselines.We compare MSL against several state-of-the-art graph-based methods that use different learnable networks such as ML-GCN <cit.>, P-GCN <cit.>, ADD-GCN <cit.> and TDRG <cit.>. We also compare against model-agnostic methods such as ASL <cit.> and CSRA <cit.>, which rely on complex data augmentation. Moreover, we compare against methods that require large language models and additional data for pretraining such as BMML <cit.> and KGGR <cit.>, as well as methods that operate on high input resolution such as SSGRL <cit.>, C-Tran <cit.>, MCAR <cit.>, and IDA <cit.>. Finally, we compare against multi-stage frameworks such as KSS-Net <cit.> and Query2Label <cit.>.Evaluation Metrics.We use the mean average precision (mAP) as primary evaluation metric <cit.>. We set positive threshold to 0.5 and report overall performance results of MSL and baselines using other evaluation metrics, including overall precision (OP), overall recall (OR), overall F1-measure (OF1), per-category precision (CP), per-category recall (CR), and per-category F1-measure (CF1). §.§ Comparison with State-Of-The-ArtComparisons on VOC2007.We compare the performance of MSL against several state-of-the-art methods, and the results are reported in Table <ref>. All scores are averaged over 3 runs. We employ MSL with two CSRA-based backbones: ResNet-cut, which is a ResNet-101 <cit.> pretrained on ImageNet-1k with CutMix <cit.>, and ViT-L16 <cit.>, which is a large vision Transformer pretrained on ImageNet-1k with 224 × 224 resolution. We refer to these MSL variants as MSL-C and MSL-V, respectively. The classification head of these backbones differs from the typical fully connected or global average pooling layer by utilizing a CSRA module <cit.>. This module generates class-specific features for each category, and then combines the intermediate results to produce the final logits. As shown in the table, MSL-C outperforms all previous state-of-the-art models, achieving relative improvements of 1.1%, 5.6% and 3.9% in terms of mAP, CR and CF1, respectively, over the strongest baseline. MSL-C performs better than graph-based methods such as ML-GCN and ADD-GCN. MSL-C also achieves a relative improvement of 2.8% in terms of mAP over SSGRL, which is trained on input resolution of 640× 640 and uses both a convolutional feature extractor and a graph neural network. Notably, MSL-C is also efficient and more accurate than KGGR and BMML, which use additional data (MS-COCO) consisting of 82,081 images for pretraining on top of ImageNet-1k pretraining, and also rely on large language model BERT <cit.> and operate on label-level attentions (i.e, multiple images), making them compute intensive.The first two rows of Figure <ref> show visual examples of predictions made by MSL-C and CSRA ResNet-cut as baseline. In the first row, we can see that the baseline fails to recognize small objects such as motorbike, person, chair and tvmonitor. The second row shows instances where the baseline model fails to recognize target objects under heavy occlusions, such as sports ball, person and vase. In contrast, MSL-C is able to recognize small objects, as well as objects that are heavily occluded. 
The masked branch, which is responsible for recognizing target object(s) from partial inputs through masking, can acquire context-based representations. This ability is likely responsible for its success in recognizing objects under challenging conditions. Label consistency, on the other hand, helps model label co-occurrence by maximizing the similarity between the predictions made by the recognition and masked branches.Comparisons on MS-COCO.In Table <ref>, we report results on MS-COCO, where all scores are averaged over 3 runs and MSL is applied on CSRA-based ResNet-cut backbone. As can be seen, MSL-C outperforms all baselines operating on input resolution 448 × 448 by 1.4% in terms of mAP. MSL-C also outperforms complicated and time-consuming methods such as KSSNet and MCAR, as well as methods that operate on higher input resolution 575× 576 such as ADD-GCN, SSGRL, and C-Tran. In particular, MSL-C outperforms MCAR by a relative improvement of 2.2% in terms of mAP. MCAR has two network streams that are trained jointly, and at inference predictions are fused from the two streams to generate a final prediction, whereas MSL has two streams with same weights in a Siamese-style network, which is much easier to optimize, and at inference a single network is used to make predictions. Moreover, MSL-C outperforms ADD-GCN, which uses a CNN and a GCN, by a relative improvement of 1.4% in terms of mAP.In the last two rows of Figure <ref>, we show visual examples of predictions made by MSL-C and CSRA ResNet-cut as baseline on MS-COCO. A similar pattern can be observed, where MSL can recognize small objects and also objects under heavy occlusions compared to the baseline. It is worth mentioning that the variation of objects and their shapes or sizes are more complex in MS-COCO than those in VOC2007. Overall, MSL is able to learn context-based representations and to better model label co-occurrence by masked branch and label consistency, thereby translating to better predictions in comparison with the baselines. MSL can better recognize small objects and also objects under heavy occlusions. MSL is also very simple and much easier to train, as it does not require multiple stages of training, the combination of multiple learnable networks, large language models, high input resolution, complex data augmentation strategies, or additional data. §.§ Ablation StudyWe analyze how each of the key components of the proposed MSL framework affects the final performance. We also perform hyperparameter sensitivity analysis.Effectiveness of Masked Branch.Table <ref> illustrates the benefit of using masked branch tasked to make predictions, given partial inputs by random masking. We adopt CSRA with ResNet-cut backbone as our baseline, and evaluate performance on VOC2007. We find that the masked branch improves performance in terms of mAP and other evaluation metrics. It helps learn useful representations, especially for small and occluded objects due largely to the fact that the branch is tasked to recognize masked objects (i.e., partial inputs), thereby leveraging information from neighboring objects.Effectiveness of Label Consistency.As shown in Table <ref>, label consistency helps improves performance in terms of mAP and other metrics. This constraint essentially guides the model to make accurate predictions on masked inputs by minimizing the distance between the predictions made by the recognition and masked branches. 
Basically, we push the predictions of the masked branch and of the recognition branch towards each other, and thereby learn representations for partial inputs. As can be seen, the best performance is achieved when combining the masked branch and label consistency. Also, Table <ref> shows that MSL is model-agnostic, and can improve the performance not only of classical convolutional backbones, but also of modern transformer backbones.
Effectiveness of Binarization. Table <ref> shows the benefit of binarizing the masks during training. We find that applying binary thresholding to the masks significantly improves the performance of the baseline in terms of all metrics. This is attributed to the fact that binarization yields true masking, dropping certain pixels while retaining the rest, thereby resulting in better numerical stability. Without binarization, the image is slightly offset in the pixel space when multiplied by 0.884 instead of 1, yielding a different representation in the feature space that degrades performance.
Amount of Masking. In Table <ref>, we report the effect of the amount of masking on MSL performance. We adopt CSRA with ResNet-cut backbones, and evaluate performance on VOC2007 and MS-COCO, respectively. We find that applying extensive image masking during the training process leads to improved performance.
Hyperparameter Sensitivity Analysis. We adopt CSRA with ResNet-cut as a base model and apply MSL to evaluate its performance for various values of the trade-off hyperparameters α_1, α_2 and α_3 on VOC2007. Table <ref> shows the effect of each hyperparameter on MSL performance in terms of mAP, CR and CF1. Interestingly, the best performance is achieved when the trade-off hyperparameters α_1 and α_2 are weighted almost equally. Moreover, using label consistency with α_3 = 0.5 gives the best results. This suggests that the learned representations for partial inputs contribute to the improvement of the overall performance.
§.§ Robustness
We now examine the robustness of MSL against partial inputs and showcase its ability to predict non-masked objects.
Quantitative Results. We evaluate MSL against partial inputs by deliberately masking the input images before making a prediction. In Figure <ref>(a), we show comparison results of MSL against CSRA with ResNet-cut on VOC2007. Using MSL, mAP is improved by 19.8%, while CR and CF1 are improved by 30.7% and 24%, respectively. A similar trend is observed when comparing MSL against CSRA with ResNet-cut on MS-COCO, as depicted in Figure <ref>(b), achieving an mAP improvement of 20.6%. This shows that MSL is robust to heavily masked inputs, and hence to occlusions.
Qualitative Results. In Figure <ref>, we show visual comparisons of the top three predictions made by our approach and by the baseline on masked inputs. We can see that the baseline fails to make predictions when the input image is masked. Even in cases where the object is only slightly masked, the baseline fails to make a prediction. By comparison, our model is able to recognize objects that are heavily masked thanks to the masked branch. Also, there are cases where the object is almost completely masked, but our method is still able to make a prediction. This is largely attributed to label consistency, where the target label can be inferred from the other predicted labels.
Non-Masked Objects. We also highlight an interesting property of MSL predictions in Figure <ref>, which shows that our model predicts non-masked objects better than the baseline. 
We hypothesize that this is due in part to the initial features or cues that the model needs to focus on. Comparison with random masking strategy. While the Masked Autoencoder (MAE) <cit.> is a well-established masking strategy frequently employed in self-supervised learning, the key novelty of our MSL framework lies in the application of a masking strategy within the context of supervised learning. This novel utilization of masking during supervised learning sets our approach apart from existing methods. Moreover, MAE follows a two-step process: first, it undergoes pre-training for 800 epochs exclusively on images, and then it proceeds to fine-tune for an additional 50 epochs using both images and labels. In contrast, MSL requires only a single stage of training, lasting 60 epochs, utilizing both images and labels. To compare the performance of MSL and MAE, we present the results in Table <ref>, which demonstrates the superiority of MSL over MAE in terms of mAP on both VOC2007 and MS-COCO datasets.Comparison with CSRA variants. In Table <ref>, we compare CSRA variants and MSL variants on VOC2007 and MS-COCO. As can be seen, MSL yields improved performance for both transformer and convolutional backbones. § CONCLUSIONIn this paper, we presented a single-stage, model-agnostic learning paradigm using masking. The proposed paradigm, which is motivated by the intuition that occluded objects are partial inputs, enables models to explicitly learn context-based representations and to model the label co-occurrence. We showed through extensive experiments that our method surpasses state-of-the-art models that heavily depend on multiple stages of training, high input resolution, the combination of multiple networks, large language models, complex data augmentation strategies, and additional data. We also demonstrated that MSL is robust to masked partial inputs for large and small objects, which is a strong indicator of its ability to handle challenging cases of small and occluded objects. Our method distinguishes itself from previous approaches due to its simple and straightforward training process, with the added benefit of incurring only a minor computational overhead compared to those methods. For future work, we aim to adapt the proposed framework to other computer vision tasks such as object detection. Acknowledgments.This work was supported in part by the Discovery Grants program of Natural Sciences and Engineering Research Council of Canada. ieee_fullname § — SUPPLEMENTARY MATERIAL —§ IMPLEMENTATION DETAILS Data preprocessing.Images and masks are resized to 448 × 448 and normalized to have values in [0, 1]. For ViT <cit.> based models, the input resolution is set to 224 × 224 to leverage ImageNet-21k and ImageNet pretrained weights. We use simple data augmentation techniques such as random flip and random resize crop. Unlike previous works <cit.>, we do not employ complex data augmentation strategies such as CutMix, GPU Augmentations, or RandAugment.Architecture.We apply MSL on two CSRA <cit.> based backbones, a convolutional backbone ResNet-cut which is a ResNet-101 pretrained on ImageNet with CutMix <cit.> augmentation strategy. It is worth mentioning that we do not use CutMix <cit.> augmentation strategy when applying MSL, to demonstrate its effectiveness. Note that here CutMix is for the pretrained model and not during fine-tuning on VOC2007 and MS-COCO datasets. 
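Independently of the particular backbone, MSL only requires running the same network on an image and on its masked version; a minimal sketch of this shared-weight (Siamese-style) wrapper is given below. The class and variable names are our own placeholders rather than the released code.

import torch
import torch.nn as nn

class MSLWrapper(nn.Module):
    # Model-agnostic wrapper: one backbone, two branches with shared weights.
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any multi-label classifier returning per-class logits

    def forward(self, images: torch.Tensor, masked_images: torch.Tensor):
        logits = self.backbone(images)                # recognition branch
        logits_masked = self.backbone(masked_images)  # masked branch (same weights)
        return logits, logits_masked

if __name__ == "__main__":
    # toy backbone standing in for a CSRA/ResNet or ViT classifier
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 20))
    model = MSLWrapper(backbone)
    x = torch.rand(2, 3, 32, 32)
    x_masked = x * (torch.rand(2, 1, 32, 32) > 0.5).float()
    y_p, y_mp = model(x, x_masked)
    print(y_p.shape, y_mp.shape)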
To demonstrate the generality of MSL, we use a transformer backbone ViT-L16 <cit.> pretrained on ImageNet-21k and fine-tuned on ImageNet with the 224 × 224 resolution. We drop class tokens and use the final output embeddings as features, and we also interpolate positional embeddings when the models are fine-tuned on the higher resolution datasets. We refer to these MSL variants as MSL-C and MSL-V, where C and V denote convolutional and vision transformer, respectively.Model Training.MSL models are trained in a single stage, requiring a training set comprised of images and labels. We use the SGD optimizer to minimize the loss function. Following previous work <cit.>, we apply simple data augmentation such as random flip and random resize crop. For training both the baseline and MSL models, we set the learning rate, momentum and weight decay to 0.01, 0.9 and 0.0001, respectively. The models are trained for 60 epochs with a batch size of 6, and the best weights according to the mAP score on the test set are recorded. We follow CSRA <cit.> models and set H = 1, λ = 0.1 for VOC2007, and H = 6, λ = 0.4 for MS-COCO.Model Testing.After training, given an image as input, the model simply makes a prediction by assigning multiple label(s) among the defined classes.Hardware and software details.Our experiments were conducted on a Linux workstation running 4.8Hz and 64GB RAM, equipped with a single NVIDIA RTX 3080Ti GPU packed with 12GB of memory. All algorithms are implemented in Python using PyTorch.§ ADDITIONAL RESULTSIn this section, we provide additional experimental results on VOC2007, MS-COCO and WIDER-Attribute datasets, showing the effectiveness of MSL in recognizing small and occluded objects. Runtime Analysis.MSL incurs a minor computational overhead compared to traditional supervised learning. This is primarily due to the masking operation and the computation of predictions on the masked images. It is important to mention that this extra cost is only present during the training phase, and during inference, there is no masking involved. Instead, predictions are directly computed on the original input images. When compared to previous approaches, our method stands out for its simplicity and ease of training. Unlike other methods, MSL does not require multiple stages of training, the combination of multiple learnable networks, the utilization of large language models, high input resolution, complex data augmentation strategies, or the inclusion of additional data.Discussion on MLIR for small objects. Upon analyzing recent MLIR methods, we noticed that MCAR <cit.> stands out as the only method that explicitly tackles the problem of small-sized and occluded objects. Comparatively, our MSL model achieves higher scores in terms of mean Average Precision (mAP), with values of 96.1% and 86.4% on the VOC2007 and MS-COCO datasets, respectively. On the other hand, MCAR's performance falls slightly behind, scoring 94.8% and 84.5% on the same datasets. Note that MCAR employs an input resolution of 576 × 576, while MSL operates at a resolution of 448 × 448. MSL explicitly addresses the problem of small and occluded objects through the Masked Branch since that task of the branch is to recognize masked objects, which are partial inputs. We further illustrate the effectiveness of MSL in handling small objects and heavily occluded objects through visual examples presented in Figures <ref> and <ref>. These examples demonstrate MSL's ability to accurately predict such challenging instances. 
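For completeness, the overall MSL objective used in the above experiments can be written down in a few lines. The sketch below is illustrative: the binary cross-entropy terms, the squared-L2 label consistency term and the default weights α_1 = α_2 = 1, α_3 = 0.5 follow the main text, while the reduction over the batch is our own assumption.

import torch
import torch.nn.functional as F

def msl_loss(logits, logits_masked, targets, alpha1=1.0, alpha2=1.0, alpha3=0.5):
    # L_total = a1 * L_rcg + a2 * L_MaBr + a3 * L_LaCo for one batch of predictions.
    loss_rcg = F.binary_cross_entropy_with_logits(logits, targets)          # recognition branch
    loss_mabr = F.binary_cross_entropy_with_logits(logits_masked, targets)  # masked branch
    y_p, y_mp = torch.sigmoid(logits), torch.sigmoid(logits_masked)
    loss_laco = ((y_p - y_mp) ** 2).sum(dim=1).mean()                       # label consistency
    return alpha1 * loss_rcg + alpha2 * loss_mabr + alpha3 * loss_laco

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(4, 20, requires_grad=True)
    logits_masked = torch.randn(4, 20)
    targets = (torch.rand(4, 20) > 0.8).float()
    loss = msl_loss(logits, logits_masked, targets)
    loss.backward()
    print(float(loss))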
MSL is model-agnostic. In Tables <ref> and <ref>, we show recent state-of-the-art methods, as well as convolutional and transformer backbones, all of which were trained using MSL. As can be seen, MSL consistently improves performance of various methods, demonstrating that MSL is model-agnostic.WIDER-Attribute dataset results. Table <ref> shows that MSL outperforms strong baselines on the WIDER-Attribute dataset <cit.>.Comparison with CSRA variants. In Table <ref>, a comparison is made between CSRA variants and MSL variants on VOC2007 and MS-COCO. Specifically, we train CSRA and MSL with two pretrained backbones, namely ViT-L16 and ResNet with CutMix. Note that in the main body of the paper, we use CSRA-based backbones in MSL with MSL-C and MSL-V notations. Here, we test CSRA and MSL independently to highlight the contributions of MSL. We find that MSL improves performance for both transformer and convolutional backbones on both datasets. For fair comparison, we run CSRA variants on our working environment and conduct all experiments with a batch size of 6, whereas the CSRA results reported in the paper <cit.> use a batch size of 64. Hence, the results we report here do not exactly match those in <cit.>. To analyze the effect of batch size on the performance of CSRA and MSL, we conduct a small experiment on VOC2007 by varying the batch size from 4 to 12, which maximizes our GPU usage, and we found that both CSRA and MSL improve in terms of performance. Therefore, we argue that the performance of MSL could be further improved using a higher batch size.Analysis of masking in MSL. In Table <ref>, we report the impact of low and high masking on the performance of MSL-C and MSL-V. As can be seen, better results are achieved with high masking on different backbones tested on both VOC2007 and MS-COCO. High masking enables the network to learn better context when training using MSL. Low masking, on the other hand, does not result in significant performance improvements, partly due to learning redundant features. In other words, low masking does not significantly change the original image. Hence, learning very similar features does not help to learn useful representations. | http://arxiv.org/abs/2310.18517v1 | {
"authors": [
"Hasib Zunair",
"A. Ben Hamza"
],
"categories": [
"cs.CV"
],
"primary_category": "cs.CV",
"published": "20231027222927",
"title": "Learning to recognize occluded and small objects with partial inputs"
} |
Structure of 3D gravastars in the context of massive gravity H. Barzegar^1 [ email address: [email protected]], B. Eslam Panah^2,3,4 [ email address: [email protected]], G. H. Bordbar^5 [ email address: [email protected]], and M. Bigdeli^1 [ email address: [email protected]] January 14, 2024 ======================================================================================================================================================================================================================================= For a given δ∈ (0,1), the randomly perturbed graph model is defined as the union of any n-vertex graph G_0 with minimum degree δ n and the binomial random graph 𝐆(n,p) on the same vertex set.Moreover, we say that a graph is uniformly coloured with colours in 𝒞 if each edge is coloured independently and uniformly at random with a colour from 𝒞.Based on a coupling idea of McDiarmird, we provide a general tool to tackle problems concerning finding a rainbow copy of a graph H=H(n) in a uniformly coloured perturbed n-vertex graph with colours in [(1+o(1))e(H)]. For example, our machinery easily allows to recover a result of Aigner-Horev and Hefetz concerning rainbow Hamilton cycles, and to improve a result of Aigner-Horev, Hefetz and Lahiri concerning rainbow bounded-degree spanning trees.Furthermore, using different methods, we prove that for any δ∈ (0,1) and integer d ≥ 2, there exists C=C(δ,d)>0 such that the following holds. Let T be a tree on n vertices with maximum degree at most d and G_0 be an n-vertex graph with δ(G_0)≥δ n. Then a uniformly coloured G_0 ∪𝐆(n,C/n) with colours in [n-1] contains a rainbow copy of T with high probability. This is optimal both in terms of colours and edge probability (up to a constant factor).§ INTRODUCTIONGiven δ∈ (0,1), we define _δ,n to be the family of graphs on vertex set [n] with minimum degree at least δ n, and we let np be the binomial random graph on vertex set [n] with edge probability p. One of the central themes in extremal combinatorics is determining the minimum degree threshold for a given graph property , i.e. how large δ needs to be so that every G ∈_δ,n satisfies . Similarly, probabilistic combinatorics aims to determine how large p needs to be for np to satisfywith high probability[ Formally, we say that a sequence of events (A_n)_n∈ holds with high probability if [A_n]1 as n∞. ]. Bohman, Frieze and Martin <cit.> provided a connection between the extremal and the random graph settings by introducing the randomly perturbed graph model. For a given δ∈ (0,1), this is defined as G_0 ∪np where G_0 ∈_δ,n, i.e. as the graph on [n] whose edge set is the union of the edges of a deterministic graph G_0 with minimum degree at least δ n and the edges of a random graph np on the same vertex set. For a given δ and a given graph property , a pivotal question in the area is to determine how large p needs to be so that for every G_0 ∈_δ,n, with high probability, G_0 ∪np satisfies . More precisely, we say that p̂=p̂(δ,,n) is a perturbed threshold for the propertyat δ if there are constants C > c > 0 such that for any p ≥ C p̂ and for any sequence of n-vertex graphs (G_n)_n ∈ℕ with G_n∈_δ,n we have lim_n→∞(G_n∪ G(n,p) ∈)=1, and for any p ≤ c p̂ there exists a sequence of n-vertex graphs (G_n)_n∈ℕ with G_n ∈_δ,n such that lim_n→∞( G_n∪ G(n,p)∈) =0. For example, the main result of <cit.> is that whenis the property of being Hamiltonian, n^-1 is a perturbed threshold forat δ, for any δ∈ (0,1/2). 
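Although not needed for any of the arguments below, the perturbed model is easy to instantiate computationally. The following sketch is our own illustration, with a toy circulant graph playing the role of G_0; it simply samples the edge set of G_0 ∪ G(n,p).

import math
import random
from itertools import combinations

def perturbed_graph(n, delta, p, seed=None):
    # Edge set of G_0 ∪ G(n,p) for a toy G_0 with minimum degree at least delta*n.
    # Here G_0 is a circulant graph: vertex i is joined to its ceil(delta*n/2) nearest
    # neighbours on either side of a cycle, so every vertex has degree >= delta*n.
    rng = random.Random(seed)
    k = math.ceil(delta * n / 2)
    g0 = {tuple(sorted((i, (i + j) % n))) for i in range(n) for j in range(1, k + 1)}
    gnp = {e for e in combinations(range(n), 2) if rng.random() < p}
    return g0 | gnp

if __name__ == "__main__":
    edges = perturbed_graph(n=100, delta=0.2, p=3 / 100, seed=1)
    print(len(edges))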
The result of Bohman, Frieze and Martin interpolates between the well known result that the threshold for the containment of a Hamilton cycle in G(n,p) is n^-1 log n, and the classical theorem of Dirac that every n-vertex graph with minimum degree at least n/2 is Hamiltonian (and thus, when δ ≥ 1/2, no random edges are needed in the perturbed model). Since <cit.>, there has been a sizeable body of research extending and adapting results from the extremal and the probabilistic setting to the perturbed one, particularly when the property 𝒫 is the containment of a spanning subgraph (e.g. trees <cit.>, factors <cit.>, bounded-degree subgraphs <cit.> and powers of Hamilton cycles <cit.>).
Another flourishing trend is to investigate the emergence of rainbow structures in uniformly edge-coloured graphs. Given an edge-coloured graph G, a subgraph H of G is rainbow if each edge of H has a different colour. Moreover, we say that a graph G is uniformly edge-coloured in a set of colours 𝒞 if each edge of G gets a colour independently and uniformly at random from 𝒞. Instances of this problem in the random graph setting can be found in <cit.>. Here we focus on the perturbed setting. Let H=H(n) be a given n-vertex graph, let δ ∈ (0,1) and let 𝒞 be a set of colours. Suppose we would like to find (with high probability) a rainbow copy of H in a uniformly coloured perturbed graph G ∼ G_0 ∪ G(n,p), with G_0 ∈ 𝒢_δ,n and colours in 𝒞. In particular, G must contain a copy of H and thus p needs to satisfy p ≥ Cp̂, where p̂ is a perturbed threshold at δ for the containment of H and C is a large enough constant. Moreover, because in a rainbow copy each edge gets a different colour, the colour set 𝒞 must satisfy |𝒞| ≥ e(H). Our following result implies that, in many cases, these two conditions are asymptotically enough to guarantee a rainbow H with high probability.
Let p, ε ∈ (0,1), set μ := (ε(1-p)-p)/((1+ε)(1-p)) and q := (1+ε^-1)p, and suppose that μ > 0. Let ℋ be a collection of subgraphs of K_n, each with m edges, and let G_0 be an n-vertex graph. Let G_0' be the random subgraph of G_0, where each edge is sampled independently with probability μ. Then
ℙ[G_0' ∪ G(n,p) contains some H ∈ ℋ] ≤ ℙ[a uniformly edge-coloured G_0 ∪ G(n,q), with colours in [(1+ε)m], contains a rainbow H ∈ ℋ].
The proof of <Ref> builds upon an ingenious coupling idea of McDiarmid <cit.>, which has already been used for rainbow problems in random graphs by Ferber and Krivelevich <cit.>. Our result is extremely versatile and provides a general machinery to translate existence results into rainbow ones. Indeed, let p=o(1) and fix ε ∈ (0,1) as in the statement of the theorem. Let δ ∈ (0,1) and suppose that G_0 ∈ 𝒢_δ,n. Then δ(G_0') ≥ δn/2 with high probability. Now observe that if p is large enough so that, for every G_1 ∈ 𝒢_δ/2,n, the perturbed graph G_1 ∪ G(n,p) contains a copy of H with high probability, then <Ref> implies the following: for every G_0 ∈ 𝒢_δ,n, if G_0 ∪ G(n,(1+ε^-1)p) is uniformly edge-coloured in [(1+ε)e(H)], then with high probability it contains a rainbow copy of H. That is, a perturbed threshold at δ/2 for containing H provides an upper bound on the `rainbow perturbed threshold' at δ for containing a rainbow H when we are allowed to use (1+ε)e(H) colours. Note that in general we cannot conclude that this gives the optimal edge probability for a rainbow H. For example, consider the case of H being a K_3-factor. 
Then n^-2/3 is a perturbed threshold for 0<δ<1/3 (see <cit.>), log n/n is a perturbed threshold for δ=1/3 (see <cit.>) and n^-1 is a perturbed threshold for 1/3 < δ < 2/3 (see <cit.>), while by the Corrádi-Hajnal Theorem no random edges are needed when δ≥ 2/3. In particular, observe that the perturbed threshold has a `jump' at δ=1/3. Therefore, while the existence threshold at 1/3 is log n/n, <Ref> needs p ≥ C n^-2/3 to guarantee a rainbow H with high probability when δ = 1/3, where C is a large enough constant.However, if we have a function p̂ which is a perturbed threshold for all δ∈ (0,δ_0), then <Ref> implies that p̂ is also a rainbow perturbed threshold at every δ∈ (0,δ_0) when colouring with (1+o(1))e(H) colours. For example, this is the case for rainbow trees. Krivelevich, Kwan and Sudakov <cit.> proved that for any δ∈ (0,1/2) the function n^-1 is a perturbed threshold for containing a given spanning bounded-degree tree. Thus <Ref> immediately implies the following. For any , δ∈ (0,1) and d ≥ 2 an integer, there exists C=C(,δ,d) such that the following holds. Let T be an n-vertex tree with maximum degree at most d. Then a uniformly coloured G_0 ∪nC/n with G_0 ∈_δ,n and colours in [(1+)n] admits a rainbow copy of T, with high probability. Observe that, because of the result from <cit.> cited above, <Ref> has the optimal edge probability (up to a constant factor). Moreover, it improves upon a result of Aigner-Horev, Hefetz and Lahiri <cit.>, who proved the same conclusion with the C/n-term in the probability replaced by ω(1)/n, and confirms their conjecture that C/n is already enough. Similarly, <Ref> has consequences for rainbow Hamilton cycles. Indeed, it implies that for any , δ∈ (0,1), there exists C=C(,δ) such that for any G_0 ∈_δ,n we have that a uniformly coloured G_0 ∪nC/n with colours in [(1+)n] admits a rainbow Hamilton cycle with high probability. We remark that this was already proved by Aigner-Horev and Hefetz <cit.> using different, ad hoc methods.<Ref> still leaves open the question if the extra colours are needed. Namely, are e(H) colours enough to guarantee a rainbow H with high probability? This seems to be a much more challenging problem and, even for specific choices of H, not much is known. To the best of our knowledge, the only known rainbow result in the perturbed graph setting with the exact number of colours is offered by <cit.>, where we show that a uniformly coloured G_0 ∪nC/n with colours in [n] contains a rainbow Hamilton cycle with high probability (in fact, we prove a version of this for directed graphs). This is clearly best possible both in terms of the edge probability (up to a constant factor, from the result of <cit.> cited above) and the number of colours (since a Hamilton cycle has n edges). Here we pursue this direction further and give an exact result for the containment of rainbow bounded-degree trees.Let δ∈ (0,1) and let d ≥ 2 be a positive integer. Then there exists C=C(δ,d) > 0 such that the following holds. Let T be a tree on n vertices with maximum degree at most d, let G_0 be a graph on n vertices with minimum degree at least δ n and suppose G ∼ G_0∪nC/n is uniformly coloured in [n-1].Then, with high probability, G contains a rainbow copy of T. <Ref> provides a rainbow variant of the result of Krivelevich, Kwan and Sudakov <cit.> cited above. Observe that <Ref> has the optimal number of colours, andalso has the optimal edge probability (up to a constant factor). 
For comparison, this improves upon the result of Aigner-Horev, Hefetz and Lahiri <cit.>, who required edge probability ω(n^-1) and n additional colours to get the same conclusion.An important step of our proof of <Ref> relies on the following theorem which allows to embed in a rainbow fashion an almost-spanning bounded-degree tree in a uniformly coloured np provided p ≥ C/n for large enough C. Let ∈ (0,1) and let d ≥ 2 be a positive integer. Then there exists C=C(,d)>0 such that the following holds. Let T be a tree on at most (1 - )n vertices with maximum degree d and suppose that ∼nC/n is coloured uniformly in [n]. Then, with high probability,contains a rainbow copy of T. Aigner-Horev, Hefetz and Lahiri <cit.> already proved that the same conclusion holds when C/n is replaced by ω(1)/n. They conjectured that ω(1)/n can be replaced by C/n and thus <Ref> resolves their conjecture. We remark that <Ref> is an immediate consequence of two previous results: the uncoloured version of <Ref> proved by Alon, Krivelevich and Sudakov (c.f. Theorem 1.1 in <cit.>) and a general tool of Ferber and Krivelevich <cit.> (c.f. <Ref>) which allows to translate uncoloured results into rainbow ones. However, for our purposes, <Ref> will not be enough and we will need a more general version, namely whenis a random subgraph of a pseudorandom graph. For a precise definition of what we mean by pseudorandom and why this generalisation is needed, we refer the reader to <Ref>. Organisation. The rest of the paper is organised as follows. In <Ref> we prove <Ref> and in <Ref> we prove a more general version of <Ref> (c.f. <Ref>). <Ref> provides an outline of our arguments for <Ref>, together with the tools and auxiliary lemmas we use in its proof. The proof splits into two cases, according to the structure of the tree T we wish to embed: when T has many leaves (c.f. <Ref>, proved in <Ref>) and when T has many bare paths (c.f. <Ref>, proved in <Ref>). We then finish by some concluding remarks in <Ref>. One supplementary proof is moved to <Ref>. Notation. Given a graph G, a vertex v ∈ V(G) and a subset X ⊆ V(G), E_G(v,X) denotes the set of edges of the form vx with x ∈ X, and N_G(X) denotes the set of vertices with at least one neighbour in X. A bare path in G is a path whose interior vertices have degree 2 in G. For a graph G and p∈[0,1], the p-random subgraph of G, denoted by G_p, is the random graph resulting from sampling each edge of G independently with probability p.A digraph D is a set of vertices together with a set of ordered pairs of distinct vertices and the minimum semi-degree δ^0(D) of D is the minimum over in- and out-degrees of vertices in D.Moreover np denotes the binomial random digraph on n vertices, that is the digraph on [n] where each ordered pair of distinct vertices forms a directed edge independently with probability p. Given an edge-coloured graph G, we denote the colour of an edge e by (e) and the set of colours on the edges of a subgraph G' by (G'). 
Moreover, we say that G' is spanning in a colour set 𝒞' if the set of colours appearing on G' is exactly 𝒞'. For a, b, c ∈ (0, 1], we write, for example, a ≪ b ≪ c in our statements to mean that there are increasing functions f, g : (0, 1] → (0, 1] such that whenever a ≤ f(b) and b ≤ g(c), the subsequent statement holds. Throughout, log n denotes the natural logarithm.
§ MCDIARMID ARGUMENT FOR RANDOMLY PERTURBED GRAPHS
As alluded to in the introduction, the proof of <Ref>, relating the probability of finding a rainbow H in a perturbed graph to that of finding a copy of H in an uncoloured perturbed graph, is based on a coupling argument. This is inspired by a result of Ferber and Krivelevich <cit.>, which in turn uses a coupling trick due to McDiarmid <cit.>.
Let 𝒞 := [(1+ε)m] be the palette of colours. We define a sequence of graphs Γ_0, …, Γ_N, where N := n(n-1)/2 is the number of edges of K_n, and each graph is equipped with a colouring of its edges, where we allow an edge to be coloured with multiple colours. Let e_1, …, e_N be an arbitrary enumeration of all the edges of K_n. For 0 ≤ i ≤ N define Γ_i as follows.
* For 1 ≤ j ≤ i,
* If e_j ∈ E(G_0), add e_j to Γ_i and assign it a colour uniformly at random from 𝒞.
* If e_j ∉ E(G_0), add e_j to Γ_i with probability q and assign it a colour uniformly at random from 𝒞.
* For j > i,
* If e_j ∈ E(G_0), then add e_j to Γ_i with probability ε/(1+ε) and assign it all colours from 𝒞.
* If e_j ∉ E(G_0), then add e_j to Γ_i with probability p and assign it all colours from 𝒞.
We remark that all the random choices mentioned above are mutually independent. We claim that Γ_0 is distributed as G_0' ∪ G(n,p), with edges assigned all colours in 𝒞. Indeed, we have
ℙ[e ∈ E(G_0' ∪ G(n,p))] = μ + (1-μ)p = ε/(1+ε) if e ∈ E(G_0), and ℙ[e ∈ E(G_0' ∪ G(n,p))] = p if e ∉ E(G_0),
which is exactly the probability that e ∈ Γ_0, as claimed. Because all edges in Γ_0 have all colours in 𝒞, it is therefore the case that ℙ[G_0' ∪ G(n,p) has a copy of some H ∈ ℋ] = ℙ[Γ_0 has a rainbow copy of some H ∈ ℋ]. We also have ℙ[G_0 ∪ G(n,q) has a rainbow copy of some H ∈ ℋ] = ℙ[Γ_N has a rainbow copy of some H ∈ ℋ], since Γ_N is distributed as G_0 ∪ G(n,q) with edges coloured uniformly in 𝒞. Therefore, in order to complete the proof, it is enough to show that
ℙ[Γ_i-1 contains a rainbow copy of some H ∈ ℋ] ≤ ℙ[Γ_i contains a rainbow copy of some H ∈ ℋ]   (⋆)
for each i ∈ [N]. Observe that there are three mutually exclusive scenarios:
(a) Γ_i-1 contains a rainbow copy of some H ∈ ℋ not using e_i;
(b) Γ_i-1 does not contain a rainbow copy of any H ∈ ℋ, not even if we add e_i and assign it all colours;
(c) Γ_i-1 contains a rainbow copy of some H ∈ ℋ if we add e_i and assign it all colours, but does not contain a rainbow copy of some H ∈ ℋ that avoids e_i.
To prove (⋆), it suffices to show that the inequality holds when conditioning each side on each of (a), (b) and (c). This holds with equality if (a) or (b) holds: if (a) holds, both sides are 1, and if (b) holds, both sides are 0. Now consider (c). We have
ℙ[Γ_i-1 contains a rainbow copy of some H ∈ ℋ | (c)] = ε/(1+ε) if e_i ∈ E(G_0), and = p if e_i ∉ E(G_0).
The crucial observation is that if e_i can complete a rainbow copy of some H ∈ ℋ in Γ_i-1, then there are at least εm colours (out of the (1+ε)m in the palette) for e_i which yield a rainbow copy of H in Γ_i. Therefore
ℙ[Γ_i contains a rainbow copy of some H ∈ ℋ | (c)] ≥ ε/(1+ε) if e_i ∈ E(G_0), and ≥ εq/(1+ε) = p if e_i ∉ E(G_0).
Thus (⋆) holds, completing the proof. 
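Although not part of the argument, the interpolating sequence Γ_0, …, Γ_N is straightforward to simulate, which may help in digesting the case analysis above. The sketch below samples Γ_i for toy parameters; the number m of edges of the graphs in ℋ is a placeholder value of our own choosing.

import random
from itertools import combinations

def sample_gamma_i(n, g0_edges, p, eps, m, i, seed=None):
    # Sample Gamma_i as a dict {edge: set of colours}, following the case analysis above.
    # Edges e_1, ..., e_N of K_n are enumerated in a fixed order; the first i of them carry a
    # single uniform colour, the remaining ones carry the whole palette [(1+eps)m].
    rng = random.Random(seed)
    palette = list(range(1, int((1 + eps) * m) + 1))
    q = (1 + 1 / eps) * p  # must be at most 1 for the coupling to make sense
    gamma = {}
    for j, e in enumerate(combinations(range(n), 2), start=1):
        in_g0 = e in g0_edges
        if j <= i:  # "past" edges: one colour, chosen uniformly
            if in_g0 or rng.random() < q:
                gamma[e] = {rng.choice(palette)}
        else:       # "future" edges: all colours of the palette
            if rng.random() < (eps / (1 + eps) if in_g0 else p):
                gamma[e] = set(palette)
    return gamma

if __name__ == "__main__":
    n, p, eps, m = 8, 0.1, 0.5, 10
    g0 = {e for e in combinations(range(n), 2) if sum(e) % 2 == 0}
    print(len(sample_gamma_i(n, g0, p, eps, m, i=12, seed=1)))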
§ ALMOST SPANNING RAINBOW TREES IN RANDOM GRAPHS In order to prove <Ref>, about finding a rainbow copy of a spanning bounded-degree tree in a perturbed graph, we have to embed a rainbow copy of a fixed bounded-degree spanning tree T in a uniformly coloured G_0 ∪np, with colours in [n-1], using both edges of G_0 and random edges of np. We will do so by first embedding a certain almost-spanning subtree of T in np only (in a rainbow fashion) and then completing it to a (rainbow) embedding of T in the full perturbed graph G_0 ∪np. Therefore, we first show that we can embed almost-spanning trees with bounded degree in uniformly coloured random graphs in a rainbow fashion. However, rather than standard random graphs, we consider random subgraphs of pseudorandom graphs, for the following reason. To prove <Ref> when the tree has many bare paths (c.f. <Ref>), we will first build an absorbing structure, using edges of both G_0 and np, and then embed a rainbow almost-spanning forest in the remainder, using only edges from np. In order to guarantee that the colouring is random for the second step as well, we use the following trick. We partition the edges of K_n randomly into two sets. Then for the first step we only use the edges of G_0 ∪np which appear in the first set and, similarly, for the second step, we only use those spanned by np among the second set of edges. Hence, the latter is not a random subgraph of the complete graph, but a random subgraph of a graph that has pseudorandom properties with high probability. We will use the following definition of pseudorandomness, where the number of edges between any two not-too-small disjoint vertex sets is close the expected number of such edges in n1/2. A graph G on n vertices is pseudorandom if for any two disjoint subsets of vertices A,B with A·B≥ 250 n, we have e(A,B) ≥AB /3. We can now state the main result of this section. Let 1/C ≪, 1/d <1 with d ≥ 2 being a positive integer. Let T be a tree on at most (1 - )n vertices with maximum degree d, let G be a pseudorandom graph on n vertices, and write p := C/n. Suppose that G_p is coloured uniformly with n colours. Then, with high probability, G_p contains a rainbow copy of T. Observe that <Ref> is now a simple corollary of <Ref>, since the complete graph on n vertices is a pseudorandom graph. In order to prove <Ref>, we first prove an uncoloured version of it. Let 1/C ≪, 1/d<1 with d ≥ 2 being a positive integer. Set p:=C/n and let G be a pseudorandom graph on n vertices. Then, with high probability, G_p contains a copy of every tree with at most (1-)n vertices and maximum degree at most d. This is a generalisation of a result due to Alon, Krivelevich and Sudakov <cit.> who already proved that the same conclusion holds when G is the complete graph on n vertices. Their result relies on the fact that sparse and almost-regular `robust expanders' contain a copy of every almost-spanning bounded-degree tree (c.f. <Ref>). We will employ the same approach and prove <Ref> using essentially the same arguments, but with minor differences in calculations. For the sake of completeness, we give a proof in <Ref>. The next result allows to translate <Ref> into <Ref>, and is a simple consequence of a general result of Ferber and Krivelevich <cit.> for binomial random subgraphs of uniformly edge-coloured hypergraphs. Let , p ∈ (0,1) and set q := ^-1p. Suppose thatis a collection of subgraphs of K_n with at most (1-)n edges. 
Then [ [ np contains;some H ∈ ]] ≤[ [a uniformly edge-coloured nq,; with colours in [n], contains a rainbow H ∈ ]] . It is now easy to prove <Ref>. Let T be a tree on at most (1-)n vertices with maximum degree d and G be an n-vertex pseudorandom graph on V. Let C_0 be given by <Ref> on inputand d, and set p_0:=C_0/n. Letbe the collection of labelled copies of T in G. Observe that G_p_0 contains a copy of T with high probability, by <Ref>. Let np_0 be the binomial random graph on V, coupled with G_p_0 so that G_p_0⊆np_0. Then it follows that with high probability np_0 contains a graph in . Set C := ^-1 C_0 and p:=C/n. Let np be the binomial random graph on V, coupled with G_p so that an edge e in G is in G_p if and only if it is in np, and colour each of its edges uniformly in [n]. Then <Ref> implies that with high probability np contains a rainbow H∈ and, becauseis a collection of subgraphs of G and by the coupling, it follows that so does G_p. That is, with high probability, G_p contains a rainbow copy of T. § OVERVIEW OF <REF> Let G_0 be a graph on vertex set [n] with minimum degree at least δ n. Let G∼ G_0 ∪nC/n and suppose G is uniformly coloured in [n-1]. Let T be an n-vertex tree with maximum degree at most d that we wish to embed in a rainbow fashion in G. To aid our embedding, we seek simple structures in T, for which we use the following observation of Krivelevich <cit.>, where we recall that a bare path in T is a path whose interior vertices have degree 2 in T. For any integers n, k > 2, a tree with n vertices either has at least n/4k leaves or a collection of at least n/4k vertex-disjoint bare paths, each of length k.Our proof splits into two cases, according to the structure of the tree T. Let 1/C ≪ζ, δ, 1/d<1 with d ≥ 2 an integer and δ∈ (0,1). Let T be a tree on n vertices with maximum degree at most d, containing at least ζ n leaves. Let G_0 be an n-vertex graph with minimum degree at least δ n, and suppose that G ∼ G_0∪nC/n is uniformly coloured in [n-1].Then, with high probability, G contains a rainbow copy of T.Let 1/C ≪ζ≪δ, 1/d<1 with d ≥ 2 andζ n / 24 being integers and δ∈ (0,1). Let T be a tree on n vertices with maximum degree at most d, containing at least ζ n/24 bare paths, each with length 6/ζ. Let G_0 be an n-vertex graph with minimum degree at least δ n, and suppose that G ∼ G_0∪nC/n is uniformly coloured in [n-1].Then, with high probability, G contains a rainbow copy of T. <Ref> easily follows by combining <Ref> and <Ref>. For both the two theorems above, we employ the following strategy. We remove the paths or leaves from T and embed the remaining almost-spanning forest in a rainbow fashion using <Ref>. The challenge is then to embed the deleted paths or leaves covering exactly the remaining vertices and using exactly the remaining colours. We discuss this informally now, mentioning all auxiliary lemmas that we will need, and postpone the precise proofs to subsequent sections. Besides the lemmas below, the only other tool we shall need is Chernoff's bound. Let X be the sum of mutually independent indicator random variables. Then for any δ∈ (0,1) we have [|X- [X]| ≥δ·[X]] ≤ 2exp(-δ^2/3·[X]). §.§ Embedding trees with many leaves Suppose that T has Ω(n) leaves and let L be a maximal collection of leaves with distinct parents M. By the maximum degree assumption, we have |L| = Ω(n). Let V be the vertex set of the perturbed graph G. We first embed T∖ L in a rainbow fashion in nC/n using <Ref>. 
Completing this to a rainbow embedding of T essentially amounts to finding a rainbow perfect matching between the image of M and the uncovered vertices of V, which uses all the unused colours. This can be reduced to finding a rainbow directed Hamilton cycle in a suitable auxiliary edge-coloured perturbed directed graph. For that we will apply the following result of the authors, where we recall that np denotes the binomial random digraph on n vertices with edge probability p. Let 1/C ≪δ < 1 and D_0 be a directed graph on vertex set [n] with minimum in- and out-degree at least δ n. Suppose D ∼ D_0 ∪nC/n is uniformly coloured in [n]. Then, with high probability, D has a rainbow directed Hamilton cycle. §.§ Embedding trees with many bare paths Suppose now that T has Ω(n) not-too-short disjoint bare paths. Consider r such paths of length ℓ (where r = Ω(n) and ℓ is a constant which is not too small), and denote the ends of the i-th path by s_i and t_i. Let F be the forest resulting from removing the interior vertices of these bare paths from T. We will use <Ref> to embed F in G. However, in order to be able to turn this into a rainbow embedding of T (by embedding a rainbow collection ofr paths of length ℓ, with the i-th path having the images of s_i and t_i as endpoints), we first prepare an absorbing structure. We remark that this is the reason why <Ref> is stated for random subgraphs of pseudorandom graphs rather than np directly. We will state here several lemmas (namely <Ref>) from a manuscript <cit.> by the first two authors, where they proved an undirected version of <Ref>. These lemmas all have analogues in <cit.> for the directed setting, and in most cases have very similar proofs, but the directed versions do not immediately imply the undirected ones (due to parallel directed edges xy and yx being coloured independently). Absorber. Before building our absorber, we set aside a set of flexible vertices and flexible colours, where flexible here refers to the fact that they can be used to connect arbitrary pairs of vertices into short rainbow paths using an arbitrary colour. Let 1/C ≪ν≪μ≪δ < 1. Let G_0 be an n-vertex graph on V with minimum degree at least δ n and G ∼ G_0 ∪nC/n be uniformly coloured in :=[n-1]. Then there exist V_⊆ V and _⊆ of size μ n such that with high probability the following holds. For all u,v ∈ V, c∈, and V'_⊆ V_ and _' ⊆_ of size at least (μ - ν)n, there exists a rainbow path of length seven with endpoints u,v, internal vertices in V'_ and colours in _' ∪{c}, that contains the colour c. The building block of our absorber is given by the so-called (v,c)-gadget. These have been introduced by Gould, Kelly, Kühn and Osthus <cit.> in the context of random optimal proper colourings of the complete graph, and have already been used for perturbed graphs in <cit.>. Let v be a vertex and c a colour. A (v,c)-gadget, denoted by A_v,c, is the edge-coloured graph on 11 vertices depicted in <Ref>. With reference to the notation in <Ref>, we call P:=u_1vu_2P_1w_2w_3P_2w_1w_4 the (v,c)-absorbing path and P':=u_1u_2P_1w_2w_1P_2w_3w_4 the (v,c)-avoiding path. Moreover, we call u_1 the first vertex of the absorber, and w_4 the last vertex. Finally, we say that V(A_v,c)∖{v} are the internal vertices of A_v,c and (A_v,c) ∖{c} are the internal colours. Observe that P and P' in the definition of a gadget are both rainbow paths and share the same endpoints, which are the first and last vertex of the absorber. 
Moreover, P is spanning in V(A_v,c) and (A_v,c) and, similarly, for P' we have V(P') = V(P)∖{v} =V(A_v,c)∖{v} and (P') = (P)∖{c} = (A_v,c)∖{c}. The existence of (v,c)-gadgets is guaranteed by the following lemma. Let 1/C ≪ν≪δ < 1. Let G_0 be an n-vertex graph on V with minimum degree at least δ n and G ∼ G_0 ∪nC/n be uniformly coloured in :=[n-1]. Then with high probability the following holds. For any v ∈ V and c ∈ and for all V' ⊆ V and ' ⊆ that have size at least (1-ν)n, there exists a (v,c)-gadget with internal vertices in V' and internal colours in '. <Ref> allows to find many vertex- and colour-disjoint gadgets. In order to build a system of paths with a global absorbing property, we connect several of them. Suppose, for example, we are given a (v,c)-gadget A_v,c and a (v',c')-gadget A_v',c', that are vertex- and colour-disjoint. By connecting the last vertex of A_v,c to the first vertex of A_v',c' with a short rainbow path (vertex- and colour-disjoint of the gadgets), we obtain a structure which can absorb the pairs (v,c) and (v',c') simultaneously. The existence of such short rainbow paths is guaranteed by the following lemma. Let 1/C ≪ρ, λ≪δ. Let G_0 be an n-vertex graph on V with minimum degree at least δ n andbe a set of colours of size n-1. Let G ∼ G_0 ∪nC/n be uniformly coloured in . Then with high probability the following holds. For all subsets V' ⊆ V and ' ⊆ of size at least (1 - ν)n and any distinct u, v ∈ V, there exists a rainbow path of length three with u and v as endpoints, with internal vertices in V' and colours in '. Template. Note that we only have enough space to accommodate O(n) gadgets. The way we choose which pairs (v,c) to absorb (in order for the final structure to have strong absorbing properties) will be dictated by an auxiliary template graph. This technique has been introduced by Montgomery <cit.> and has already found a number applications. Let 1/n ≪ζ≤ 1 and suppose that ζ n is an integer. Then there exists a bipartite graph H on vertex classes R and S_1 ∪ S_2 with |R| = (2 - ζ)n, |S_1| = |S_2| = n and d_H(x)=40 for each x ∈ R, such that the following is true. Given any subset S_2' ⊆ S_2 with |S_2'| = ζ n, there is a matching between R and S_1 ∪ (S_2 ∖ S_2'). <Ref> can be proved identically to the proof of Lemma 2.8 in <cit.>. We call the graph given by <Ref> an (n,ζ)-template graph on (R,S_1 ∪ S_2). § EMBEDDING TREES WITH MANY LEAVES Let , λ>0 be such that 1/C ≪≪λ≪ζ, δ, 1/d , let _1 ∼nC/n, so that G ∼ G_0 ∪_1, and set V:=V(G) and :=[n-1]. We embed T in G in two steps. First, we remove a small linear number of leaves with distinct parents, and embed the resulting almost-spanning tree in a rainbow fashion using <Ref> and _1. Then we will find a rainbow perfect matching between the images of the parents and the uncovered vertices of V, using all remaining colours. Let L be a maximal collection of leaves of T with distinct parents. Since T has at least ζ n leaves and maximum degree at most d, we have |L| ≥ d^-1ζ n ≥λ n and we pick an arbitrary subset of L of size λ n, which, abusing notation, we denote by L. Let M be the collection of parents of the leaves in L and observe that |L| = |M|. Finally let T' = T ∖ L and note T' is a tree on (1-λ)n vertices. Let R be a random subset of V of size (λ-) n. Then, by Chernoff's bound and the union bound, with high probability, every v∈ V satisfies N_G_0(v) ∩ R≥1/2·δλ n. We assume that this holds. We claim that _1[V ∖ R] contains a rainbow copy of T', with high probability. 
Writing n' :=|V ∖ R|= (1 - (λ - ))n, let ' be a subset ofof size n', and let _1' be the subgraph of _1[V ∖ R] consisting of edges coloured '. Then _1' is a copy of a random graph n'C'/n', where C' = (|'|/||) · C ≥ C/2, which is uniformly coloured in '. As |V(T')| = (1 - λ)n ≤ (1 - /2)n', <Ref> implies that ' contains a rainbow copy of T', as claimed. Assume that a rainbow T' as above exists, and fix an embedding of it in V ∖ R. Let _0 be the set of colours innot used for the embedding of T', let M_0 be the image of M in the embedding, and let V_0 be the set of vertices in V ∖ R that are not used in the embedding. Then |_0| = n -1 - (V(T')-1) = λ n,V_0= n - V(T') - R =n, |M_0| = λ n. We claim that, with high probability, every v ∈ V satisfies |N_G_0(v) ∩ M_0| ≥1/2·δλ n. Indeed, observe that M_0 is distributed uniformly at random among all subsets of V ∖ R of size λ n[ Let us give some more formal details to convince the reader that this line of reasoning is valid. Suppose we embed T' in a random graph with vertex set V', with V' ∩ V = ∅ and V' = n. Choose uniformly at random a bijection π : V' → V. Then the image of V(T') under π is distributed uniformly at random among all subsets of V of size V(T'). ], thus the assertion in (<ref>) follows from a standard application of Chernoff's bound and the union bound, using ≪λ. We are left to find a rainbow perfect matching between M_0 and V_0 ∪ R using all colours in _0, and we will do that in two phases, by first finding a rainbow matching saturating V_0. Write G_0':=G_0 ∖_1 and note that so far we have only revealed colours of the edges of _1[V ∖ R], and thus the colours of the edges of G_0'[V_0, M_0] are yet to be revealed. With high probability, there is a rainbow matching in G_0'[V_0, M_0] which saturates V_0 and uses colours in _0. For v ∈ V_0, write X_v for the number of colours from _0 appearing on edges in G_0'[{v}, M_0]. We claim that, with high probability, X_v ≥ 2 n for every v ∈ V_0. Note that this implies the claim as, since |V_0|= n, the required rainbow matching can be constructed greedily. To estimate the probability that X_v < 2 n, note that if this holds then there is a subset ' ⊆_0 of size 2 n such that all edges of G_0' between v and M_0 are coloured using colours in (∖_0) ∪'. Here we use (<ref>) and |(∖_0) ∪'|/||≤ 1-(λ - 2), as well as the easy fact that, with high probability, |N__1(v) ∩ M_0| ≤ 100 log n for every v ∈ V_0. [X_v < 2 n] ≤λ n2 n·(1 - (λ - 2))^|N_G_0 ∖_1(v) ∩ M_0|≤(eλ/2)^2 nexp(-(λ - 2) ·δλ n/4) ≤exp((2·log(eλ/(2)) - δλ^2/8 )n) = o(n^-1), using ≪λ, δ. It follows that X_v ≥ 2 n for every v ∈ V_0, with high probability. Let _1 be set of colours still available and M_1 the unsaturated vertices in M_0. Then |M_1| = |_1| = |R| = (λ - )n. Note that, so far, we have not revealed any colours of edges touching R, nor edges of _1 touching R. Also, using (<ref>), (<ref>) and the fact that |M_1|=|M_0|- n, the graph G_0[M_1, R] is a balanced bipartite graph on 2(λ - )n vertices, with minimum degree at least δλ n/2 -n ≥δλ n/4. We define three random bipartite graphs H_0, _1, _2, with bipartition (M_1, R), as follows. Let the edges in H_0 be the edges in G_0[M_1,R] that are not in _1 and whose colour is in _1, let the edges in _1 be the edges in _1[M_1,R] that have a colour in _1, and include each pair in M_1 × R in _2 with probability λ C/(2n), independently. Fix an outcome of H_0. Then _1 can be coupled with _2 so that H_0 ∪_2 ⊆ H_0 ∪_1. Note that it suffices to prove [e ∈ E(_1)|e ∉ E(H_0)] ≥λ C/2n for every e ∈ M_1 × R. 
To see this, note first that if e ∉ E(G_0) then [e ∈ E(_1)|e ∉ E(H_0)] = [e ∈ E(_1)] = [e ∈_1] ·[(e) ∈_1] = C/n·|_1|/n-1≥λ C/2n. Now consider e ∈ E(G_0). Then [e ∈ E(_1)|e ∉ E(H_0)] = [e ∈ E(_1) ∖ E(H_0)]/[e ∉ E(H_0)] = [e ∈_1and (e) ∈_1]/[e ∈ E(_1)] + [e ∉ E(_1)and (e) ∉_1] = C/n·|_1|/n-1/C/n + (1 - C/n) ·(1 - |_1|/n-1)≥λ C/2n. By Chernoff's bound and δ(G_0[M_1,R]) ≥δλ n/4, with high probability δ(H_0) ≥δλ^2 n/8. Fix such an outcome of H_0, write H := H_0 ∪_2, and fix a coupling so that H ⊆ H_0 ∪_1. Then we know that all edges in H_0 ∪_2 are coloured in _1, but we have not yet revealed the colours. Thus H is coloured uniformly in _1. We are done if we can find with high probability a rainbow perfect matching in H with colours in _1. To this end, we define the following auxiliary digraph D with vertex set [m], where m := |R| = |M_1| = |_1| = (λ - )n. Let σ_1:[m] → M_1 and σ_2:[m] → R be arbitrary bijections. Let D_0 and _2 be the digraphs on [m] with the following edges: for distinct x,y ∈ [m] we have xy ∈ E(D_0) if and only if σ_1 (x) σ_2 (y) ∈ E(H_0), and xy ∈ E(_2) if and only if σ_1 (x) σ_2 (y) ∈ E(_2). Then define D:=D_0 ∪_2. It is easy to check that δ^0(D_0)=δ(H_0) ≥δλ^2 m/8, each directed edge is present in E(_2) independently with probability at least λ^2 C/(4m), and each edge of D is coloured independently and uniformly at random in _1. Therefore, D satisfies <Ref> and thus, with high probability, D has a rainbow directed Hamilton cycle (x_1,, x_m). Then, from the definition of D, it follows that {σ_1(x_1) σ_2(x_2), σ_1(x_2) σ_2(x_3), , σ_1(x_m)σ_2(x_1)} is a rainbow perfect matching in H that uses all colours in _1, as desired. § EMBEDDING TREES WITH MANY BARE PATHSLet μ, ν satisfy1/C ≪ζ≪μ≪ν≪δ, 1/d,such that μ/ζ, μ n, ζ n/24 are integers. Set V:=V(G), r:= ζ n /24 ∈ andk:= 24 (2μ - ζ)/ζ∈. Pick C' so that (1 - C'/n)^2 = 1 - C/n (then C' ≥ C/2).Let _1, _2 ∼nC'/n and _3 ∼n1/2 be independent binomial random graphs on V, so that G∼ G_0 ∪_1 ∪_2. Let G_0', _1' be the subgraphs of G_0, _1 respectivelywith edges in E(_3), and let G' ∼ G_0' ∪_1'. Let G be the spanning graph of K_n with edges disjoint from E(_3) ∪ E(_1). Straightforward applications of the union bound and Chernoff's bound yield that, with high probability, simultaneously nolistsep * G_0' has minimum degree at least δ/3, * every disjoint vertex sets A, B in G with |A| |B| > 240n satisfy e_G”(A,B) ≥ |A| |B| / 3. In particular, every induced subgraph of G” on at least 0.99n vertices is pseudorandom (as 250 · 0.99 n ≥ 240 n). Notice that G' and G are edge-disjoint since E(G') ⊆ E(_3) and E(G) ∩ E(_3) = ∅. First we will use G' to construct the absorbing structure. Then we will use the random subgraph of G, with edges in E(_2) and colours disjoint from those on the absorbing structure, to embed the almost spanning forest resulting from removing r=ζ n/24 bare paths (whose precise length will be determined later). Finally, we will use the absorbing structure to complete this to a rainbow copy of T. We start by setting aside the sets needed to build our absorbing structure. By the union bound, the conclusions of <Ref> hold simultaneously and with high probability for G', so we assume they all hold. Let V_⊆ V and _⊆ be the sets of size μ n given by <Ref>. Let V_ and _ be arbitrary subsets of V ∖ V_ and ∖_ respectively, each of size μ n. Let X:={x_i:i ∈ [r]}, Y:={y_i:i ∈ [r]} and W be pairwise disjoint subsets of V ∖ (V_∪ V_) with |W|=(2μ-ζ)n. Letbe a subset of ∖ (_∪_) with ||=(2μ-ζ)n. 
Our absorber will be able to absorb a small set of vertices and a small set of colours while connecting, for each i ∈ [r], the vertex x_i to the vertex y_i through a rainbow path. The vertex set W and the colour setwill only be used to make our approach work and do not play a special role.Let H be the (μ n, ζ/μ)-template graph on (R,S_1 ∪ S_2) given by <Ref>, with |R|=(2μ-ζ) n and |S_1|=|S_2|=μ n, and notice e(H)=40 · |R|. Let π_v:S_1 ∪ S_2 → V_∪ V_ be a bijection such that π_v(S_1)=V_ (and thus π_v(S_2)=V_). Similarly, let π_c:S_1 ∪ S_2 →_∪_ be a bijection such that π_c(S_1)=_ (and thus π_c(S_2)=_). Observe that we have |W|=||=|R|=r · k (recalling that r = ζ n / 24 and k = 24(2μ-ζ)/ζ), and thus we can write W={w_x :x ∈ R} and ={d_x: x ∈ R}.With high probability, G' contains a (π_v(y),d_x)-gadget and a (w_x,π_c(y))-gadget, for each xy ∈ E(H) with x ∈ R and y ∈ S_1 ∪ S_2, with the following property. The internal vertices of any two of them are pairwise disjoint and disjoint of V_∪ V_∪ X ∪ Y ∪ W; similarly, the internal colours of any two of them are pairwise disjoint and disjoint of _∪_∪.Let 𝒜 be a maximal collection of gadgets as in the statement of the claim and suppose for contradiction |𝒜| < 2 |E(H)| = 2 · 40 · |R| = 80 (2μ - ζ)n. Let V_0 (resp. _0) be the union of the vertices (resp. colours) spanned by the gadgets in 𝒜 and those in V_∪ V_∪ X ∪ Y ∪ W (resp. _∪_∪). Then |V_0|,|C_0| = O(μ n) < ν n, where we used μ≪ν. Hence, by the conclusion of <Ref> for G', we can add another gadget to 𝒜, contradicting its maximality.Partition R into r sets R_1,…,R_r each of size R/r = k and let A_i^1,…,A_i^80kbe the gadgets for the edges incident to R_i. Note that there are precisely 80k of them since d_H(x)=40 for each x ∈ R, and each edge incident to x has two associated gadgets. We will now connect x_i to y_i via short rainbow paths and the gadgets associated to R_i. With high probability, G' contains a collection {P_i^1, …, P_i^80k+1: i ∈ [r]} of (80k+1) · r rainbow paths of length three such that the following holds. All their interior vertices (resp. colours) are distinct, and disjoint from V_∪ V_∪ X ∪ Y ∪ W (resp. _∪_∪) and the set of vertices (resp. colours) spanned by the gadgets given by <Ref>. Moreover, for each i ∈ [r], the path P_i^1 starts with x_i and ends with the first vertex of A_i^1; the path P_i^j starts with the last vertex of A_i^j-1 and ends with the first vertex of A_i^j, for each 2 ≤ j ≤ 80k; the path P_i^80k+1 starts with the last vertex of A_i^80k and ends with y_i. Letbe a maximal collection of connecting paths as in the statement of the claim. Suppose for contradiction || < (80k+1) · r and let u and v be a pair of vertices as in the statement of the claim not connected by any path in . Let V_0 (resp. _0) be the union of the vertices (resp. colours) spanned by the connecting paths of , the gadgets given by <Ref> and the vertices in V_∪ V_∪ X ∪ Y ∪ W (resp. the colours in _∪_∪). Observe that |V_0|,|_0| = O(μ n) < ν n, using μ≪ν. By the conclusion of <Ref> for G', there exists a rainbow path of length three with endpoints u and v, that avoids V_0 and _0. Therefore we can add another connecting path to , contradicting its maximality.Recall that for each pair (x_i,y_i) we built 80k gadgets and 80k+1 connecting paths. As we will see shortly, when using the absorbing structure, for each pair we will use the absorbing paths (each of length 10, c.f. <Ref>) in precisely 2k gadgets, and the avoiding path (each of length 9, c.f. <Ref>) in every other gadget. 
Together with the connecting paths (each of length 3), we will get a path of lengthℓ := 3(80k+1) + 9 (80k-2k) + 10· 2k = 962 k + 3between x_i and y_i. Letℓ' := 28 and let F be the forest resulting from removing the internal vertices of r bare paths of length ℓ + ℓ' from T. Observe that T does indeed have r bare paths of length ℓ+ℓ', since ℓ + ℓ' = 962 k + 31 ≤ 1000 k= 1000 · 24(2μ-ζ) ζ^-1≤ 6ζ^-1. We removed the internal vertices of paths of length ℓ+ℓ' as opposed to length ℓ, as this will allow us to cover, using <Ref>, the leftover vertices and colours after the embedding of F in G via <Ref>. In fact, ℓ' has been chosen so that the path length we get from <Ref> matches the length of the paths removed from T, and only this choice of ℓ' works.Let Ṽ (resp. ) be the set of all vertices (resp. colours) used to build the gadgets of <Ref> and the connecting paths of <Ref>. Then|Ṽ| = 2 e(H) · 10 +++ W + X + Y + (80k+1)· r · 2= (962k + 28)rand similarly||= 2 e(H) · 9 +++ D + (80k+1)· r · 3= (962 k + 27)r.Moreover, since we removed (ℓ + ℓ' -1) · r vertices from T, we haveV(F) = n - r(ℓ+ℓ' - 1) = n- (962k+30)r. With high probability, G[V∖Ṽ] contains a rainbow copy of F, withedges in E(_2) and colours disjoint from . Let _0 be a subset of ∖ of size |V ∖Ṽ| (such a set exists as |∖| ≥ |V ∖Ṽ|), and write V_0:= V ∖Ṽ. Let H := G”[V_0]. By <ref>, H is pseudorandom. Let H' be the subgraph of H, consisting of edges that are in _2 and are coloured in _0. This means that each edge of H is included in H' with probability p = (C'/n) · |_0|/||. Now reveal the colouring of H', and observe that it is distributed uniformly at random in _0. Moreover we have |V(F)| = |V_0| - 2r = |V_0| - ζ n / 12 ≤ (1 - ζ/12)|V_0| and |_0| = |V_0|. Hence, we can apply <Ref> to H' and get that, with high probability, H' contains a rainbow copy ofF. Fix an embedding of F in G”. LetV' and ' be the vertices and colours not in Ṽ andand not used for the embedding of F. Then V' = n-|Ṽ|-V(F)=2r and ' = (n-1)-||-(V(F)-r-1) = 4r, where we used that F is a forest with r+1 components. Thus we can write V'={v_i^1, v_i^2: i ∈ [r]} and ' = { c_i^11, c_i^12, c_i^21, c_i^22: i ∈ [r]}. Let x_i' and y_i' be the embedded endpoints of the i-th bare path of T.With high probability, there is a collection of pairwise vertex- and colour-disjoint rainbow paths { Q_i^11, Q_i^12,Q_i^21,Q_i^22: i∈ [r]} such that * the ends of Q_i^11 are x_i',v_i^1, of Q_i^12 are v_i^1, x_i, of Q_i^21 are y_i',v_i^2, and of Q_i^22 are v_i^2,y_i;* each path has length 7 and all its 6 internal vertices in V_;* for each s,t ∈{1,2}, the path Q_i^st has colours in _∪{c_i^st} and uses the colour c_i^st.The collection can be found greedily using <Ref>. Indeed suppose we are not done yet and we still need to make a connection, say from x_i' to v_i^1. Let V_0 ⊆, _0⊆ be the vertices and colours already used. Then, using r = ζ n /24, we have V_0 = _0 <24r=ζ n. Therefore, we can apply the conclusion of <Ref> and find a rainbow path Q_i^ii of length 7 with endpoints x_i' and v_i^1, internal vertices in V_∖ V_0 and colours in (_∖ C_0) ∪{c_i^11}, that uses the colour c_i^11. Therefore the claim holds. Let ', ' be the vertices inand colours inused in the paths in <Ref>. Then ' = ' = 24r = ζ n.Let H_1 = H - π_v^-1( ')and H_2 = H - π_v^-1( ' ). Then by choice of H according to <Ref>, H_1 and H_2 have perfect matchings M_1 and M_2 respectively. Recall that each pair (x_i, y_i) is associated with the gadgets A_i^1, , A_i^80k, two for every edge incident to each x∈ R_i. 
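For the reader's convenience, the arithmetic behind ℓ and the bound ℓ + ℓ' ≤ 6/ζ used above can be expanded as follows (this merely restates the computation already performed):

\begin{align*}
\ell &= 3(80k+1) + 9(80k-2k) + 10\cdot 2k = (240k+3) + 702k + 20k = 962k+3,\\
\ell + \ell' &= 962k + 31 \le 1000k = 1000\cdot\frac{24(2\mu-\zeta)}{\zeta} \le \frac{6}{\zeta},
\end{align*}

where the second inequality uses k ≥ 1 and the last one holds because μ is chosen sufficiently small in the hierarchy of constants.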
For xy ∈ E(H_1) with x ∈ R and y ∈ S_1 ∪ S_2, let P_M_1(xy) be the absorbing path of the (π_v(y), d_x)-gadget if xy ∈ E(M_1), and the avoiding path otherwise. Similarly, for xy ∈ E(H_2) with x ∈ R and y ∈ S_1 ∪ S_2, let P_M_2(xy) be the absorbing path of the (w_x, π_c(y)-gadget if xy ∈ E(M_2), and the avoiding path otherwise. Then defineP_i:= ⋃_j∈ [80k+1]P_i^j∪⋃_xy ∈ E(H_1):x∈ R_i P_M_1(xy) ∪⋃_xy ∈ E(H_2):x∈ R_i P_M_2(xy),and observe that P_i is a path with endpoints x_i and y_i.Each x∈ R_i lies in exactly one edge of M_1 and exactly one edge of M_2, so P_i consists of 2 |R_i| = 2k absorbing paths (of length 10), 80k -2k avoiding paths (of length 9), and 80k+1 connecting paths (of length 3). Hence, P_i has length ℓ (c.f. <Ref>). Finally defineP_i' := (Q_i^11∪ Q_i^12) ∪ P_i ∪ (Q_i^21∪ Q_i^22),and observe that P_i' is path with endpoints x_i' and y_i' and has length ℓ + 4· 7 = ℓ +ℓ'. By construction, the paths P_i' are rainbow, pairwise vertex- and colour-disjoint, and use no vertex or colour in the embedding of F.Recall that we obtained F from T by removing the internal vertices of r bare paths of length ℓ+ℓ' and that x_i' and y_i' were the images of the endpoints of the i-th bare path. Therefore the image of F together with ⋃_i=1^r P_i' gives a rainbow embedding of T in G, as desired.§ CONCLUDING REMARKS In this paper we studied the problem of finding rainbow subgraphs of uniformly coloured randomly perturbed graphs. First, we gave a general result (<Ref>) applicable when the number of colours is asymptotically optimal. Then, we gave a result concerning rainbow bounded-degree spanning trees when the number of colours is exactly optimal (<Ref>). We showed that any given bounded-degree spanning tree typically appears when a linear number of random edges are added to any dense graph, and all the edges are uniformly coloured with colours in [n-1]. It would be interesting to improve our result to a universality statement: namely, is it true that a uniformly coloured randomly perturbed graph with colours in [n-1] typically contains a rainbow copy of every bounded-degree spanning tree at once? The uncoloured universality question was already considered by Böttcher, Han, Kohayakawa, Montgomery, Parczyk and Person <cit.>, who proved that for every α∈ (0,1) and d ∈ℕ there exists C=C(α,d)>0 such that, if G_0 is an n-vertex graph with δ(G_0) ≥α n, then with high probability G_0 ∪(n,C/n) contains every n-vertex tree T with Δ(T) ≤ d. We remark that the edge density is optimal (up to a constant factor).amsplain§ APPENDIX - PROOF OF <REF> The proof of <Ref> uses the following result of Alon, Krivelevich and Sudakov <cit.>, for which we need the following definition. Given two positive numbers c and α<1, a graph G is called an (α,c)-expander if every subset of vertices X ⊆ V(G) with |X|≤α |V(G)| satisfies |N_G(X)| ≥ c |X|. Let d ≥ 2 be an integer and 0 <<1/2. Then for n large enough the following holds. Let G be a graph on n vertices of minimum degree δ and maximum degree Δ such that, with K := 20d^2 log(2/)/, we have *Δ^2 ≤1/Kexp(δ/8K-1), and *every induced subgraph G_0 of G with minimum degree at least δ/2K is a (1/2d+2,d+1)-expander. Then G contains a copy of every tree T on at most (1-)n vertices of maximum degree d. Using <Ref>, the proof of <Ref> reduces to verifying that the conditions of <Ref> hold with high probability for a C/n-random subgraph of a pseudorandom graph G. 
The argument and calculations follow very closely those for the proof of Theorem 1.1 in <cit.>, where this was verified for nC/n, which can be seen as the C/n-random subgraph of the complete graph on n vertices. We first show that a random subgraph of a pseudorandom graph with high probability contains a nearly spanning subgraph with good local expansion property. Let 1/C ≪, 1/d and set θ := 0.01 and D := C/10. Let G be a pseudorandom graph on n vertices and p:=C/n. Then, with high probability, G_p contains a subgraph G^∗ that satisfies the following. * V(G^∗)≥ (1-θ)n; * D ≤_G^∗(v) ≤ 25 D, for all v ∈ V(G^∗); * Every induced subgraphG_0 of G^∗ with δ(G_0) ≥ 100d log D is a (1/2d+2, d+1)-expander. Then we state easy facts about random subgraphs of pseudorandom graphs. Let G be a pseudorandom graph on n vertices and p =p(n) ∈ (0,1) such that np>20. With high probability the following holds for G_p. * For any disjoint vertex subsets A and B with A=a, B=b and a b p ≥ 250 n, the number of edges between them is at least abp/4 and at most 3 abp/2. * Every subset of vertices A of size a ≤ n/4 spans at most apn/2 edges. For the first part of the claim note that, because G is pseudorandom and ab ≥ 250 n, we have e_G(A,B) ≥ a b /3, so [e_G_p(A,B)] ≥ p a b /3, and then the lower bound follows from a standard application of Chernoff's bound and the union bound over the at most 2^2n choices for A,B. For the upper bound, the trivial upper bound e_G(A,B) ≤ a b implies [e_G_p(A,B)] ≤ a b p and then again a standard application of Chernoff's bound and the union bound yield the desired result. The second part follows directly from the second part of <cit.> because we can couple G with np so that G ⊆np. We now deduce <Ref> from <Ref>. Let 1/C ≪,1/d and set θ := 0.01 and D := C/10. Let G^∗ be the subgraph of G_p given by <Ref>. We verify that G^∗ satisfies the conditions of <Ref> with parameters d and _1 :=- θ/1-θ∈ [0.99, ], and so K = K(,d) = 20 _1^-1 d^2log( 2 /_1). Using that D ≤δ(G^∗) ≤Δ(G^∗) ≤ 25 D, D=C/10 and 1/C ≪, 1/d, we have (Δ(G^∗))^2 ≤ 625 D^2 ≤1/Kexp(D/8K-1) ≤1/Kexp(δ(G^∗)/8K-1) and hence the first condition is satisfied. To check the second condition, it suffices to show that _1D/40 d^2 log(2/_1)≥ 100 d log D, which follows from the fact that1/C ≪_1, 1/d and that the function D/log D is increasing. Hence G^∗ contains a copy of every tree of maximum degree d and of up to (1-_1)(1-θ)n ≥ (1-)n vertices. Finally we prove <Ref> Assume that the conclusions of <Ref> hold. Let X be the set of θ n /2 vertices of largest degree in G_p. Part <ref> of <Ref> implies that the number of edges in G_p[X] is at most X p n/2 = 5DX. On the other hand, since p X (n-X) ≥ 2 D θ n ≥ 250 n, using D^-1 = 10 C^-1≪ = 100 θ, Part <ref> of <Ref> implies that, with high probability, e_G_p(X, V(G)∖ X ) ≤ 3 X n p/2 = 15 DX. Hence, ∑_v∈ X_G_p (v) ≤ 25 D X, which implies there is a vertex in X with degree at most 25 D. By the definition of X, it follows that G_p has at most θ n /2 vertices of degree greater than 25 D. Delete these vertices, denote the remaining graph by G' and observe |V(G')| ≥ (1-θ/2)n. We greedily remove from G' vertices of degree less than D, until none are left. Suppose that we deleted at least θ n /2 vertices. Let Y be a subset of size θ n /2 of the deleted vertices. Then _G_p(y, V(G')∖ Y) ≤ D for each y ∈ Y, so e_G_p(Y, V(G')∖ Y) ≤ D Y. 
On the other hand, p YV(G') ∖ Y≥ 5 D θ (1-θ) n ≥ 250 n (using D^-1≪θ), so Part <ref> of <Ref> implies that e_G_p(Y, V(G') ∖ Y) ≥YV(G') ∖ Y p/4 > 2 D Y, which is a contradiction. Hence, the number of vertices that we deleted is at most θ n /2. Denote the resulting graph by G^∗ and observe that it satisfies the first two properties of the lemma. Suppose G^∗ fails to satisfy the third property of the lemma. Then there exist U⊆ V(G^∗) such that G^∗[U] has minimum degree at least 100dlog D and is not a (1/2d+2, d+1)-expander. This implies there is X⊆ U, of size t≤1/2d+2U, such that for the set Y:=N_G^∗[U](X) it holds that Y≤ (d+1) t. We first consider the case t ≤log D/D n. By the minimum degree condition, we have e_G_p(X,Y) ≥ 50d t log D. Let A_t be the event that there exist vertex sets X, Y with X = t, Y≤ (d+1) t, and e_G_p(X,Y) ≥ 50d t log D. Then we can bound [A_t] as follows, where we remark that for the first inequality we use that the binomial coefficient is increasing until the middle layer and that for the penultimate inequality we use t < (d+1) t < (d+1) log D/Dn < n/2 since D^-1 = 10 C^-1≪ d, and log(3e/10) ≤ -0.1. [A_t]≤ntn(d+1) tt (d+1) t50 d t log D p^50 d t log D≤( n/t·(n/(d+1)t)^d+1·( (d + 1) t^2 p/50dt log D)^50 d log D)^t≤( ·(n/t)^2d·( /d+1)^d+1( t/n/log D /D)^50d log D·((d+1)/5d)^50d log D)^t ≤( ·( t/n/log D /D)^50d log D-2d·(D/log D)^2d·(3 /10)^50 d log D)^t ≤( ^-5d log D + 2dlog D +1 ·( t/n/log D /D)^40d log D)^t ≤( ^-2d log D·( t/n/log D /D)^40d log D)^t. For t< log n, this is at most ( log n/n/log D /D)^40d log D≤ n^-79 and for log n ≤ t ≤log D/D n it is at most ^-2d log D ·log n≤ n^-4 so in either case [A_t] = o(n^-1). Finally we consider the case t≥log D/D· n. Set Z := U ∖ (X ∪ Y), and notice that e_G_p(X, Z) = 0, since G^∗[U] is an induced subgraph of G_p. From U≥ (2d + 2)t, |X|=t and Y≤ (d+1) t it follows that Z≥ dt. Let B_t be the event that there exist vertex subsets of size t and dt, with no edge between them in G_p. Then [B_t]≤ntndt (1-p)^dt^2≤(n/t·( n/dt)^d ·^-pdt)^t≤( (n/t)^2d· ^-pdt)^t = ( (n/t)^2 ^-pt)^dt≤( (n/n log D /D)^2^-10 D/n·log D/D n)^dt≤( D^2 D^-10)^dt = o(n^-1). Hence G^∗ fails to satisfy the third condition of the lemma with probability at most ∑_t=1^n ([A_t] + [B_t]) =o(1). Therefore, by the union bound, we conclude that with high probability G^∗ satisfies all conditions of the Lemma. | http://arxiv.org/abs/2310.18284v1 | {
"authors": [
"Kyriakos Katsamaktsis",
"Shoham Letzter",
"Amedeo Sgueglia"
],
"categories": [
"math.CO"
],
"primary_category": "math.CO",
"published": "20231027172152",
"title": "Rainbow subgraphs of uniformly coloured randomly perturbed graphs"
} |
A Chebyshev Confidence Guided Source-Free Domain Adaptation Framework for Medical Image Segmentation Jiesi Hu, Yanwu Yang, Xutao Guo, Jinghua Wang*, Ting Ma* This work was supported in part by grants from the National Natural Science Foundation of P.R. China (62276081, 62106113), Innovation Team and Talents Cultivation Program of National Administration of Traditional Chinese Medicine (NO:ZYYCXTD-C-202004), Basic Research Foundation of Shenzhen Science and Technology Stable Support Program (GXWD20201230155427003-20200822115709001) and The Major Key Project of PCL (PCL2021A06). (Corresponding author: Jinghua Wang, Ting Ma.) Jiesi Hu , Yanwu Yang, and Xutao Guo are with School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, and The Peng Cheng Laboratory.(e-mail: [email protected], [email protected], [email protected]) Jinghua Wang is with School of Computer Science and Technology, Harbin Institute of Technology at Shenzhen. (e-mail: [email protected]) Ting Ma is with School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, The Peng Cheng Laboratory, Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology, Shenzhen, and International Research Institute for Artifcial Intelligence, Harbin Institute of Technology, Shenzhen. (e-mail: [email protected])Received XXX; accepted YYY ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ Successful data-driven science requires complex data engineering pipelines to clean, transform and alter data in preparation for machine learning, and robust results can only be achieved when each step in the pipeline can be justified, and its effect on the dataexplained.In this framework, our aim is to provide data scientists with facilities to gain an in-depth understanding of how each step in the pipeline affects the data, from the raw input to training sets ready to be used for learning.Starting from an extensible set of data preparation operators commonly used within a data science setting, in this work we present a provenance management infrastructure for generating, storing, and querying very granular accounts of data transformations, at the level of individual elements 
within datasets whenever possible. Then, from the formal definition of a core set of data science preprocessing operators, we derive a provenance semantics embodied by a collection of templates expressed in PROV, a standard model for data provenance. Using those templates as a reference, our provenance generation algorithm generalises to any operator with observable input/output pairs. We providea prototype implementation of an application-level provenance capture library to produce, in a semi-automatic way, complete provenance documents that account for the entire pipeline. We report on that reference implementations ability to capture provenance in real ML benchmark pipelines and over TCP-DI synthetic data. We finally show how the collected provenance can be used to answer a suite of provenance benchmark queries that underpin some common pipeline inspection questions, as expressed on the DataScienceStackExchange.§ INTRODUCTIONDataset selection and data wrangling pipelines are integral to applied Data Science workflows.These typicallyculminate in the generation of predictive models for a broad range of data types and application domains through training. A number of critical choices are made when these pipelines are designed, starting with the choice of which datasets to include or exclude, how these should be merged <cit.>, and which transformations are required to produce a viable training set, given a choice of target learning algorithms. The main intended consequence of these transformation pipelines is to optimise the predictive performance and generalisation characteristics of the models that are derived from the ground data.There are however also unintended consequences, as these transformations alter the representation of the domain thatthe learning algorithms generalise from, and they may remove or inadvertently introduce new bias in the data <cit.>.In turn, this may reflect on non-performance properties of the models, such as their fairness. The term, formally defined in terms of statistical properties of the model's predictions <cit.>, broadly refers to the capability of a model to ensure that its predictions are not affected by an individual belonging to one of the groups defined by some sensitive attribute(s), such as sex, ethnicity, income band, etc.Motivation. Models that are provably fair are also perceived as more trustworthy, an important feature at a time when machine learning models are increasingly used to support and complement human expert judgment,in areas where decisions have consequences on individuals as well as on businesses. Substantial recent research has produced techniques for explanation using: counterfactuals <cit.>, local explanations <cit.>, data <cit.>and meta-models <cit.>.While these techniques focus primarily on the model itself, relatively little work has been done into trying to explain models in terms of the transformations that occur before the data is used for learning.The ultimate goal of this work is to enable explanations of the effect of each transformation in a pre-processing pipeline on the data that is ultimately fed into a model <cit.>. As an initial step in this direction, we have developed a formal model and practical techniques for recording data derivations at the level of the atomic elements in the dataset, for a general class of data transformation operators. These derivations are a form of data provenance and are expressed using the PROV data model <cit.>, a standard and widely adopted ontology. 
Data derivations form a corpus of graph-structured metadata that can be queried as a preliminary step to support user questions about model properties.Problem scope. [id=pm]In this paper we focus on data transformations that are commonly found in data science processing pipelines and across application domains, and we further limit the scope to structured tabular data.[However, we are not going to consider more specialised data pre-processing steps that may apply to data types such as video, audio, images, etc.]These steps have been systematically enumerated in multiple reviews (see e.g. <cit.>)and include, among others: feature selection, engineering of new features; imputationof missing values, or listwise deletion (excluding an entire record if data is missing on any variable for that record); downsampling or upsampling of data subsets in order to achieve better balance, typically on the class labels (for classification tasks) or on the distribution of the outcome variable (for regression tasks); outlier detection and removal; smoothing and normalisation; de-duplication, as well as steps that preserve the original information but are required by some algorithms, such as “one-hot” encoding of categorical variables.A complex pipeline may include some or all of these steps, and different techniques, algorithms, and choice of algorithm-specific parameters may be available for each of them. These are often grounded in established literature but variations can be created by data scientists to suit specific needs. In this work we consider the space of all configured pipelines that can potentially be composed out of these operators.[id=pm]Regarding the data that these operate on, we focus on structured two-dimensional tabular data, namely dataframes, which are commonly supported by R and python as well as by a dedicated Spark API, and excluding tensors and multidimensional matrices. While this is done to simplify our proof-of-concept implementation, we observe that considering higher-dimension tabular structures has practical implications as it increases the complexity of the derivations from input to output elements, however the underpinning provenance templates are fundamentally the same.Overview of the approach.Firstly, we providea formalisation and categorisation of a core set of these operators. Then, with each class of those operators, we associate a provenance template that describes the effect on the data of each operator in the class at the appropriate level of detail, i.e., on individual data elements, columns, rows, or collections of those. By mapping operators to these fundamental templates, we are then able to identify the transformation type based on observation of the operator's input and outputs alone. By abstracting to this level, we can automatically create the appropriate provenance for an operator in a data science pipeline if it follows the pre-identified input-output patterns, even if the operator itself has never been seen before. Contributions. Our contributions can be summarised as follows. 
* A formalisation and categorisation of a core set of operators for data reduction, augmentation, transformation, and fusion that move beyond the relational algebra (Section <ref>), showing how common data pre-processing pipelines can be expressed as a composition of these operators.* The semantics of the provenance that is generated for white-box transformations, as reduced to the core set of operators (Section <ref>).* A method for capturing the provenance of a pipeline, based on observing the changes to the data, not the operator that was applied (Section <ref>). * An application-level provenance capture facility for Python, underpinned by the formal model, that (i) identifies the operation under execution to capture its provenance and (ii) is backed by a Neo4J database used as a provenance store (Section <ref>). This new approach almost entirely removes the older requirement for pipeline designers to programmatically “drive” provenance generation, making most of the process transparent; * [id=ac]Using a reference implementation, we report on: (i) the impact of adding provenance capture to real-world pipelines (Section <ref>), (ii) the ability to capture provenance in real ML benchmark pipelines and over TCP-DI synthetic data<cit.> (Section <ref>), (iii) a use case analysis showing that provenance queries can provide support to data scientists in the development of real-world machine-learning pipelines (Section <ref>) (iv) how data provenance collected with our approach can be inspected through user-friendly interfaces (Section <ref>), and (v) a comparison to other similar provenance capture systems (Section <ref>). [id=ac]A scalability analysis showing that, while the overall provenance document can be arbitrarily large, it is created incrementally in a persistent data store, making the entire process scalablein the number of operators (Section <ref>). Werun extensive experiments on a synthetic TPC-DI dataset at multiple scales <cit.>, and report on the time and space overhead of using the provenance functions.§ MODELS AND PROBLEM STATEMENT§.§ Data modelThe data collected for ML tasks are usually represented as tables or statistical data matrices in which columns represent specific features of a phenomenon being observed, and rows are records of data for those features describing observations of the phenomenon. To capture both formats, we will refer to these generically as datasets, similar in spirit to notions of ordered relations <cit.> and dataframes <cit.>.A (dataset) schema S is an array of distinct names called features (or attributes) S= [ _1, …, _n ]. Each feature is associated with a domain of atomic values (such as numbers, strings, and timestamps). With a little abuse of notation, hereinafter we will compare schemas using set containment over their features.A dataset D over a schema S=[_1, …, _n] is an ordered collection of rows (or records) of the form: i:(d_i1, …, d_in) where i is the unique index of the row and each element d_ij(for 1≤ j≤ n) is either a value in the domain of the feature _j or the special symbol , denoting a missing value. Row indexes can be implemented in different ways (e.g., with RID annotations <cit.>). We only assume here that a row of any dataset can be uniquely identified.Given a dataset D over a schemaS we denote by D_i the value for the featureof S occurring in the i-th row of D. We also denote by D_i∗ the i-th row of D, and by D_∗ the column of D associated with the featureof S. 
A possible datasetD over the schema S=[CId, Gender, Age, Zip ] is as follows:[ 1|c|CId GenderAgeZip;1113F 2498567;2241M 28 ;3375C 32768;4578F 4432768;]D_∗Age and D_2∗ denote the third column and the second row of D, respectively. [id=pm]Note that, as mentioned in the introduction, in this work we focus on dataframes, which are described by a schema. Extensions to tensors and multidimensional matrices are left for future work.§.§ Data manipulation modelA general classification.As part of this work, we analyzed several packages that allow users to build data pre-processing pipelines. Table <ref> contains an example overview of the available operators from the ML pipeline building tool Orange <cit.> and the popular SciKit packages <cit.>. As indicated on the left-hand side of the table, all of them can be classified into four main classes, according to the type of manipulation done on the input dataset(s) over a schema S: 0pt 0pt 0pt* Data reductions: operations that take as input a dataset D on a schema S and reduce the size of D by eliminating rows (without changing S) or columns (changing S to S'⊂ S) from D;* Data augmentations: operations that take as input a dataset D on a schema S and increase the size of D by adding rows (without changing S) or columns (changing S to S'⊃ S) to D;* Data transformations: operations that take as input a dataset D on a schema S and, by applying suitable functions, transform (some of) the elements in D without changing its size or its schema (up to possible changes to the domain of the involved features of S)* Data fusions: operations that take as input two datasets D_1 and D_2 on schema S_1 andS_2 respectively and combine them into a new dataset D on a schema S involving the features of S_1 andS_2.We now introduce a number of basic operators of data manipulation over datasets belonging to one of the above classes of data manipulations, as indicated in the right-hand side of Table <ref>. This approach is in line with the observation that most of the operations of current data exploration packages rely on a rather small subset of operators <cit.>. Data reductions. Two basic data reduction operators are defined over datasets. They are simple extensions of two well-known relational operators. π_C: the (conditional) projection of D on a set of features of S that satisfy a boolean condition C over S, denoted by π_C(D), is the dataset obtained from D by including only the columns D_∗ of D such thatis a feature of S that satisfy C;σ_C: the selection of D with respect to a boolean condition C over S, denoted by σ_C(D), is the dataset obtained from D by including the rows D_i∗ of D satisfying C.The condition of both the projection and the selection operators can refer to the values in D, as shown in the following example that uses an intuitive syntax for the condition.Consider the dataset D in Example <ref>. The result of the expression π_{features without nulls}(σ_Age<30(D))is the following dataset: [ 1|c|CId GenderAge;1113F 24;2241M 28;]Data augmentations. Two basic data augmentation operators are defined over datasets. They allow the addition of columns and rows to a dataset, respectively. 
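Before turning to the augmentation operators, it may help to see how the data model and the two reduction operators map onto an off-the-shelf dataframe library. The sketch below is purely illustrative and is not the reference implementation described in this paper: np.nan stands in for the null symbol, and the helper names select and project are our own.

# A minimal pandas sketch of the data model and of the two reduction operators;
# np.nan plays the role of the null symbol, and `select`/`project` are illustrative names.
import pandas as pd
import numpy as np

D = pd.DataFrame({"CId": [113, 241, 375, 578],
                  "Gender": ["F", "M", "C", "F"],
                  "Age": [24, 28, np.nan, 44],
                  "Zip": [98567, np.nan, 32768, 32768]},
                 index=[1, 2, 3, 4])      # explicit row indexes, as in the data model

d_2_age = D.loc[2, "Age"]                 # the element D_{2,Age}
row_2 = D.loc[2]                          # the row D_{2*}
col_age = D["Age"]                        # the column D_{*Age}

def select(D, cond):
    """sigma_C: keep only the rows satisfying the boolean condition C."""
    return D[cond(D)]

def project(D, col_cond):
    """pi_C: keep only the columns (features) satisfying the condition C."""
    return D[[c for c in D.columns if col_cond(D[c])]]

# the expression  pi_{features without nulls}( sigma_{Age<30}(D) )  of the example above
out = project(select(D, lambda d: d["Age"] < 30),
              lambda col: col.notna().all())
# out contains rows 1 and 2, restricted to CId, Gender and Age

Note that the selection is applied before the projection, mirroring the composition of the two operators in the example.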
_f(X): Y: the vertical augmentation of D to Y using a function f over a set X=[_1 …_k] ⊆ S of features, is obtained by adding to D a new set of features Y=['_1 …'_l] whose new values d_i'_1… d_i'_l for the i-th row areobtained by applying f to d_i_1… d_i_k; _X:f(Y): the horizontal augmentation of D using an aggregative function f is obtained by adding one or more new rows to D obtained by first grouping over the features in X and then, for each group, by applying f to π_Y(D) (extending the result to S with nulls if needed). [id=pm]Note that horizontal augmentation generates new rows based on grouping, i.e., by X, followed by an aggregationf(Y) applied to the values for Y in each group.Consider again the dataset D in Example <ref> and the following functions: (i) f_1, which associates the string young when age is less than 25 and the string adult otherwise, and (ii) f_2, which computes the average of a set of numbers. Then, the expression _f_1(Age): ageRange(D)produces the following dataset: [ 1|c|CId GenderAgeZip ageRange;1113F 2498567young;2241M 28 adult;3375C 32768 ;4578F 4432768adult;][id=pm]In expression E_2=_Gender:𝑎𝑣𝑔(Age)(D)first group by Gender is computed, yielding two groups (for M and F), then 𝑎𝑣𝑔(Age) is executed on each group, resulting in the new rows 5,6 in the dataframe below: [ 1|c|CId GenderAgeZip;1113F 2498567;2241M 28 ;3375C 32768;4578F 4432768;5 F 34 ;6 M 28 ;] Note that new data can be added to a dataset using a horizontal augmentation where X=∅, Y=S, and f denote the procedure for adding records (e.g., by asking them to the user). Note also that horizontal augmentation allows us to combine, in the same dataset, entities at different levels of granularity, a feature that can be very useful to a data scientist (e.g., to compute, in the example above, the mean deviation).Data transformation. One basic data transformation operator is defined over datasets:τ_f(X): the transformation of a set of features X of D using a function f is obtained by substituting each value d_i with f(d_∗), for each featureoccurring in X.Let D be the dataset in Example <ref> and f be an imputation function that associates to the 's occurring in a featurethe most frequent value occurring in D_∗. Then, the result of the expressionτ_f(Zip)(D)is the following dataset: [ 1|c|CId GenderAgeZip;1113F 2498567;2241M 2832768;3375C 32768;4578F 4432768;]Data fusion.Given D^L and D^R on schemas S^L and S^R respectively, the two basic data fusion operators join and append allow the combination of a pair of datasets. * the join D^L ^t_C D^R of D^L and D^R based on a boolean condition C is the dataset over S^L∪ S^R obtained by applying standard join operation of type t (where t can be equal to , () ) based on the condition C; * the append D^LD^R of D^L to D^R is the dataset over S^L∪ S^R obtained by appendingD^L to D^R and possibly extending the result with nulls on the mismatching columns (S^L ∪ S^R) ∖ (S^L ∩ S^R). Let D^L be the dataset in Example <ref> (which we report here for convenience) and D^R the dataset that follows. 
D^L: [ CId GenderAgeZip;1113F 2498567;2241M 28 ;3375C 32768;4578F 4432768;]D^R:[ CId name;1241Jim;2578 Mary;] Then, the result of the expressionD^L^ inner_D^L.CId=D^R.CId D^Ris the following dataset: [ 1|c|CId GenderAgeZip Name;1241M 28 Jim;2578F 4432768 Mary;]On the other hand, the result of the expressionD^L D^Ris the following dataset: [ 1|c|CId GenderAgeZip Name;1113F 2498567 ;2241M 28;3375C 32768 ;4578F 4432768 ;5241 Jim;6578Mary;]We note that the data manipulation model presented here has some similarities with the Dataframe algebra <cit.>. The main difference is that we have focused on a restricted set of core operators (with some of that in <cit.> missing and others combined in one) with the specific goal of providing a solid basis to an effective technique for capturing data provenance of classical preprocessing operators. We point out that our algebra can be easily extended to include operators implementing other ETL/ELT-like transformations whose fine-grained provenance capture has been described elsewhere <cit.>. §.§ Data provenance modelThe purpose of data provenance, in this setting, is to support the generation ofsimple explanations for the existence (or the absence) of some piece of data in the result of complex data manipulations. Along this line, we adopt as the provenance model a subset of the PROV model <cit.> from the W3C, a widely adopted ontology that formalises the notion of provenance document and which admits RDF and other serialisation formats to facilitate interoperability. The minimal elements of the model are graphically describedas shown in Figure <ref>.In PROV an entity represents an element d of a dataset D and is uniquely identified by D and the coordinates of d in D (i.e., the corresponding row index and feature). An activity represents any pre-processing data manipulation that operates over datasets. For each element d in a dataset D' generated by an operation o over a dataset D we represent the facts that:(i) dwasGeneratedBy o, and (ii)dwasDerivedFrom a set of elements in D. In addition, we represent: (iii) all the elements d of D such that d was used by o and (iv) all the elements d of D such that dwasInvalidatedBy (i.e., deleted by) o (if any).Note that in PROV derivation implies usage, but the inverse is not true and this is why this notation is not redundant.Let E be the first expression in Example <ref> and D'=E(D). A fragment of the data provenance generated by this operation, for two of the dataset elements, is reported in Figure <ref>. §.§ [id=rt]Limitations and possible extensions [id=rt] The models for data representation, manipulation, and provenance generation introduced in the previous sections cover a large body of data preparation pipelines, but they are clearly not exhaustive. In particular, we have assumed that the input data is in a bi-dimensional, tabular format (e.g., csv files), with rows representing observations of some phenomena and columns representing interesting features of the observations. 
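Under this bi-dimensional assumption, the augmentation, transformation, and fusion operators illustrated in the examples above also admit direct pandas counterparts. The following sketch is illustrative only (it is not the paper's library): helper and column names are ours, and np.nan again stands in for the null symbol.

# Illustrative pandas counterparts of the augmentation, transformation and fusion
# operators in the examples above; helper and column names are ours.
import pandas as pd
import numpy as np

D = pd.DataFrame({"CId": [113, 241, 375, 578],
                  "Gender": ["F", "M", "C", "F"],
                  "Age": [24, 28, np.nan, 44],
                  "Zip": [98567, np.nan, 32768, 32768]},
                 index=[1, 2, 3, 4])

# vertical augmentation: derive the new feature ageRange from Age
def age_range(a):
    if pd.isna(a):
        return np.nan
    return "young" if a < 25 else "adult"
D_va = D.assign(ageRange=D["Age"].map(age_range))

# horizontal augmentation: group by Gender, aggregate avg(Age), append the new rows
groups = (D.groupby("Gender", as_index=False)["Age"].mean()
            .dropna(subset=["Age"])      # the group whose average is null is omitted, as in the example
            .set_axis([5, 6]))
D_ha = pd.concat([D, groups])            # mismatching features are padded with nulls

# transformation: impute missing Zip values with the most frequent Zip
D_tr = D.assign(Zip=D["Zip"].fillna(D["Zip"].mode()[0]))

# data fusion: join and append with a second dataset
DR = pd.DataFrame({"CId": [241, 578], "name": ["Jim", "Mary"]})
D_join = D.merge(DR, on="CId", how="inner")          # inner join on CId
D_app = pd.concat([D, DR], ignore_index=True)        # append, padding mismatching columns with nulls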
However, multi-dimensional data,including tensors and matrices, are common in many machine learning applications.Our model can be extended by assuming that each value is indeed a measure for a combination of features, possibly at different levels of aggregation, similar to logical multidimensional data models that have been proposed in the literature for data warehousing and OLAP (e.g., <cit.>).This would also make it possible to include multi-level aggregation operations (roll-up, drill-down, slicing, and dicing) by extending the data model as described e.g. in <cit.>. These extensions add complexity to the resulting provenance graphs, as the derivations must be traced through multiple aggregations, however, this would not add to our conceptual framework, as these simply extend the fundamental provenance patterns used for standard one-level aggregations (see Sec.<ref>). Supporting such extensions is therefore currently beyond the scope of our proof-of-concept implementation. §.§ Problem StatementWe consider compositions of the operators introduced in Section <ref> into pipelines thattake as input a collection of datasets D_1,…,D_n and produce a dataset D', denoted D' = p(D_1,…,D_n), by applying a (partially ordered) sequence of operators. Note that E can be represented as a tree, where the internal nodes are operators and leaves are datasets. Note also that although in principle any combination is possible, in practice there are constraints on the ordering of the operators, because some operators may alter the dataset schema.The performance of themodel learned fromp(D) is dependent upon theoperators involved in p. As the data scientist iterates over versions of the models, they may wish to inspect and understand exactly what happened at each step within the pipeline. This can be a complex manual task for any realistic pipeline.The provenance collected by the system presented here is intended to allow the data scientist to review, understand, and debug what happened in past runs of any given pipeline. Depending on the granularity of the provenance, this can be as coarse-grained as a dataset, p was used with transformations T_1, T_2, ... to a very fine-grained version which allows users to track individual data items as described previously. Classic provenance queries include: Why <cit.>, How <cit.> and Why Not <cit.>. Instances of each of these queries are shown in Table <ref> as Queries 2, 3, and 7-9 respectively.In addition to these classic provenance queries, we have analyzed questions posed to the Data Science Stack Exchange (DSSE) about problems posed by users, encountered when trying to understand and debug the pipelines. An explanation of the use cases and the provenance queries in Table <ref> that they relate to can be found in Table <ref>. Through this analysis, we have identified an additional 6 provenance queries based on the use cases from DSSE: All Transformations (1); Dataset-level Feature Operation (4); Record Operation (5); Item-level Feature Operation (6); Impact on Feature Spread (12) and Impact on Dataset Spread (13). Queries 1, 4, and 5 are similar to How-provenance but are focused only on the transformations. The difference between them is the granularity of focus - dataset, feature, record, or individual value. Queries 10 and 11 have been implemented to emphasize how provenance can help when developing a pipeline. They show what an item was and what it will be, highlighting potential errors or imperfections. 
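Queries such as these can be evaluated over the provenance graph kept in the Neo4J provenance store mentioned among the contributions. As a purely hypothetical illustration (the node labels, relationship types, and property names below are our own assumptions, not the prototype's actual schema), a why-provenance query (Query 2) and an all-transformations query (Query 1) could be issued through the Neo4j Python driver as follows:

# Hypothetical sketch only: the graph schema (labels :Entity/:Activity, relationship
# types WAS_DERIVED_FROM / WAS_GENERATED_BY, property names) is assumed here for
# illustration and is not the schema of the paper's prototype.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

WHY_QUERY = """
MATCH (out:Entity {dataset: $ds, row: $row, feature: $feat})
      -[:WAS_DERIVED_FROM*1..]->(src:Entity)
WHERE NOT (src)-[:WAS_DERIVED_FROM]->()
RETURN DISTINCT src.dataset AS dataset, src.row AS row,
                src.feature AS feature, src.value AS value
"""

ALL_TRANSFORMATIONS = """
MATCH (a:Activity)
RETURN a.operation AS operation, a.features AS features
"""

with driver.session() as session:
    # Query 2: the source items from which a given output element was derived
    sources = session.run(WHY_QUERY, ds="D_out", row=2, feat="ageRange").data()
    # Query 1: all transformations recorded for the pipeline
    steps = session.run(ALL_TRANSFORMATIONS).data()

driver.close()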
Queries 12 and 13, however, present a new usage of provenance, and thus a distinctly new provenance query type. In the DSSE use cases, it became clear that a question being asked was “what operations were performed on the data and how did those change the data profile”. This is a reasonable question: these transformations may entail unintended consequences, as they alter the representation of the domain that the learning algorithms generalize from, and they may remove or inadvertently introduce new bias in the data <cit.>. In turn, this may reflect on non-performance properties of the models, such as their fairness. Fairness, formally defined in terms of statistical properties of the model's predictions <cit.>, broadly refers to the capability of a model to ensure that its predictions are not affected by an individual belonging to one of the groups defined by some sensitive attribute(s), such as sex, ethnicity, income band, etc. Queries 12 and 13 provide a mechanism that computes the statistical properties of the data before and after an operation, to identify when there are major shifts in distributions. Thus, the problem within this work is to: a) define the set of operations for data manipulation available within a pipeline; b) establish a set of provenance templates that can be used to reason over and capture the provenance of these operations over the data; c) show that our approach can support typical provenance queries in an effective and scalable way. § DATA PROCESSING OPERATORS In this section, we illustrate a number of common data pre-processing operators that are often used in data preparation workflows, showing how they can be suitably expressed as a composition of the basic operators introduced in Section <ref>. §.§ Data Reduction Feature Selection. This operation consists of selecting a set of relevant features from a given dataset and dropping the others, which are either redundant or irrelevant to the goal of the learning process. Feature selection over a dataset D with a schema S can be expressed by means of a simple pipeline involving only the projection operator with a condition that selects the set of features I⊂ S of interest: FS(D)=π_C(D) where C={∈ I}. A special case of feature selection is an operation that drops columns with a rate of missing values higher than a threshold t. In this case, the condition of the projection operator is more involved as it requires introspection of the dataset: C={∈ S |(D_i=, 1≤ i≤ n) < t }. Instance Selection. The aim of this operation is to reduce the original dataset to a manageable volume by removing certain records, with the goal of improving the accuracy (and efficiency) of classification problems. Also in this case, instance selection over a dataset D with a schema S can be expressed by means of a simple pipeline involving only the selection operator with a condition that identifies the set of relevant rows of D by means of a predicate p: IS(D)=σ_C(D) where C={ D_i∗∈ D | p(D_i∗)}. Similar to feature selection, a relevant case of instance selection drops rows with a rate of missing values higher than a threshold t.
In this case, C={ D_i∗∈ D |(D_ij=, 1≤ j≤ m) < t }§.§ Data Transformations By data transformation, we mean any operation on a given dataset that modifies its valueswith the goal of improving the quality of D and/or making the process of information extraction from D more effective.The following operator is meant to capture data transformation (𝐷𝑇 in its generality:𝐷𝑇(D)=τ_f(X)(D)where f is any scalar function that generates a new value f(x) from values of feature set X of S. Several cases of transformations are common in pre-processing pipelines, as illustrated in the following. Data repair. It is the process of replacing inconsistent data items with new values. In this case, f is a simple function that converts values and the data transformation possibly operates on the whole dataset. Binarization. It is the process of converting numerical features to binary features. For instance, if a value for a given feature is greater than a threshold it is changed a 1, if not to 0.Normalization. It is a scaling technique that transforms all the values of a feature so that they fall in a smaller range, such as from 0 to 1. There are many normalization techniques, such as Min-Max normalization, Z-score normalization, and Decimal scaling normalization. This operation operates on a single feature at a timeDiscretization. It consists of converting or partitioning continuous features into discrete or nominal features. It performs a value transformation from categorical to numerical data.Imputation. It is the process of replacing missing data (nulls in our data model) with valid data using a variety ofstatistical approaches that aim at identifying the values with the maximum likelihood. §.§ Data augmentationSpace Transformation. This operation takes a set of features of an existing dataset and generates from them a new set of features by combining the corresponding values. Usually, the goal is to represent (a subset of) the original set of features in terms of others in order to increase the quality of learning.The application of this operation to a dataset D over a schema S can be expressed by means of an expression involving a vertical augmentation that operates on a subset X of the features in S and produces a new set of features Y, followed by a projection operator that eliminates the features in X,thus maintaining those in Z=(S∪ Y)-X: ST(D)=π_{ features in Z}(_f(X): Y(D)) Instance Generation: [id=pm]These operators include grouping and aggregation, and their effect is to fill regions in the domain of the problem, which does not have representative examples in original data, or to summarize large amounts of instances in fewer examples. These are also denoted prototype generation methods, as the artificial examples created tend to act as a representative of a region or of asubset of the original instances.The application of this operation to a dataset D over a schema S can be expressed by means of an expression involving a horizontal augmentationthat, if needed, groups over a subset X of the features in S andthen apply a summary function f over another subset of S:IG(D)=_X:f(Y)(D). This operation can be preceded by a data reduction operator (a projection or a selection) to isolate the portion of the original dataset on which we intend to operate.String Indexer. This operator encodes a feature involving strings into a feature of string indices. 
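Many of the special cases above, together with the missing-value-driven reductions of the previous subsection, reduce to one-line dataframe operations. The sketch below is illustrative only and is not the reference implementation: the threshold t, the bin boundaries, and the column names are placeholders chosen for the running example, and it includes a string-indexing step of the kind just introduced.

# Illustrative pandas sketches of the special-case transformations listed above and of
# the missing-value-driven reductions of the previous subsection.
import pandas as pd
import numpy as np

D = pd.DataFrame({"CId": [113, 241, 375, 578],
                  "Gender": ["F", "M", "C", "F"],
                  "Age": [24, 28, np.nan, 44],
                  "Zip": [98567, np.nan, 32768, 32768]})
t = 0.5                                             # tolerated rate of missing values

# feature/instance selection by missing-value rate (cf. Data Reduction)
D_cols = D.loc[:, D.isna().mean() < t]              # drop columns with too many nulls
D_rows = D.loc[D.isna().mean(axis=1) < t]           # drop rows with too many nulls

# min-max normalization of a numeric feature
D_norm = D.assign(Age=(D["Age"] - D["Age"].min()) / (D["Age"].max() - D["Age"].min()))

# binarization against a threshold (nulls are mapped to 0 in this naive version)
D_bin = D.assign(Age=(D["Age"] > 40).astype(int))

# imputation of nulls with the most frequent value of the feature
D_imp = D.assign(Zip=D["Zip"].fillna(D["Zip"].mode()[0]))

# discretization of a continuous feature into nominal bins
D_disc = D.assign(Age=pd.cut(D["Age"], bins=[0, 25, 40, 120],
                             labels=["young", "adult", "senior"]))

# string indexing: encode a categorical feature as integer codes in [0, numLabels)
codes, labels = pd.factorize(D["Gender"])
D_idx = D.assign(GenderIdx=codes)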
The indices are in [0, numLabels). It is a special case of space transformation.

One-Hot Encoder. This operation maps a feature involving strings to a set of boolean features. Specifically, it creates one column for each possible value occurring in the feature. Each new feature gets a 1 if the row contained that value and a 0 if not. It is a special case of space transformation.

§.§ Data fusion

Data preparation pipelines often require combining datasets coming from different data sources. For this reason, packages for data pre-processing are usually equipped with facilities for combining datasets in two main ways, as follows.

Data integration. It is the process of combining rows of two datasets on the basis of some common property. This can be useful when, for instance, we need to extend the features of observations of a phenomenon or objects of interest stored in a dataset D_1 (e.g., the technical information of smartphones on sale) with further features of the same observations or of the same objects gathered elsewhere and stored in a dataset D_2 (e.g., the ratings of the same smartphones available on a review site). This activity can be supported by an expression involving the join operator over the datasets under consideration, possibly preceded by a data reduction operator to isolate the portion X of the original dataset on which we intend to operate, as follows: π_X(D_1 ^left_C D_2), where C specifies the condition that rows of different datasets must satisfy to be combined (e.g., they share the same standardized product identifier). The join operator can also be equipped with some sophisticated techniques for joining rows, such as one based on entity resolution.

Data expansion. It is the process of putting together rows of two datasets that contain data referring to different observations of the same phenomenon or to different objects of the same type. This can be useful when, for instance, a training set is built by accumulating data coming from diverse data sources, say D_1, D_2 and D_3 (e.g., experimental data of a medical treatment produced by three different laboratories). The append operator can be used in such scenarios, possibly preceded by some data reduction operators, for example as follows: D_1  π_C_1(D_2)  σ_C_2(D_3). As shown in Example <ref>, this operator also accounts for situations in which we need to merge datasets that involve different features of the same phenomena.

§ ABSTRACT ANALYSIS OF PROVENANCE CAPTURE

In order to capture the provenance of a pipeline p consisting of a combination of pre-processing operations o_1, …, o_n forming a tree, we introduce an abstract provenance-generating function (prov-gen) and associate it with each operation o_k occurring in p. In accordance with the provenance model presented in Section <ref>, each element d_ij of a dataset D produced during the execution of p is represented by a PROV entity in the provenance document.
The properties of this entity include the row index i and the feature j in D, and an identifier k denoting the fact that d_ij is in the result of the operation o_k in p. Similarly, each operation o_k in p is represented by a PROV activity in the provenance document, whose properties specify the operator(s) illustrated in Section <ref> that implement(s) o_k, and the list of the features on which o_k operates.

§.§ Provenance templates

We now present example instances of provenance-generating (prov-gen) functions for the main types of operations observed in data science pipelines, discussed in Section <ref>. To recall, these are: (i) data reduction: D' = π_C(D), D' = σ_C(D); (ii) data augmentation: _f(X):Y, _X:f(Y); (iii) data transformation: τ_f(X); and (iv) data fusion. A prov-gen function takes as inputs the sets of input and output values D, D' for the operator, and produces a PROV document that describes the transformation produced by the operator on each element of D, as reflected in D'. Note that for binary operators, namely join and append, D includes inputs from both operands.

Take for example the case of Vertical Augmentation (VA): _f_1(Age):ageRange(D), which we used in Example <ref>, where attribute Age is binarised into {young, adult} based on a pre-defined cutoff, defined as part of f(). The prov-gen function for VA will have to produce a collection of small PROV documents, one for each input-output pair ⟨D_i,Age, D'_i,AgeRange⟩, as shown in the example. As these documents all share the same structure, we define a common PROV template, which is then instantiated multiple times, once for each input/output pair. A template is simply a PROV document that may contain variables, indicated by the namespace var:, which are used as placeholders for values. Here templates are designed to capture the transformation at the level of individual elements of D, or its rows or columns, as appropriate. Thus a template will have a used set of entities, which refer to the subset of data items in D which have been used by o, and a generated set of new entities, corresponding to new elements in D' (for projection and selection, it will have an invalidated set of entities instead, as these operators remove data from D).

The PROV template for VA is shown in Figure <ref>, where we use the generic attribute names X, Y to indicate the old and new feature names. One or more binding generators are associated with each template: they determine how values found in D, D' upon execution of the operator are substituted for the variables. Each variable substitution results in a small PROV document, which represents all derivations through a single operator. In the following, we refer informally to these documents as provlets, to indicate that a complete PROV document representing a complex derivation chain can be produced by joining multiple such provlets on their data identifiers, as described in Sec. <ref> below. In the VA example, the transformation between D and D' is 1:1, and thus a new provlet is created from each value D_*,Age of column Age and the corresponding value in AgeRange. Using a list comprehension notation, the binding generator for the variables used in the template in Figure <ref> is defined as:

[⟨F = Age, I = i, V = D_i,Age, F' = AgeRange, J = i, V' = f(D_i,Age)⟩ | i: 1 … n]

These bindings produce the new entities for the newly created data elements in the new column D_*,AgeRange ∈ {young, adult}. Two of the n PROV documents for this specific example are shown in Figure <ref>.
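To make the binding-generation step more concrete, the following minimal Python sketch shows how the bindings of the VA template could be computed from a pandas dataframe. It is an illustration only: the function names, the binding dictionary keys, and the cutoff value are assumptions introduced for this example and do not correspond to the actual API of our implementation.

import pandas as pd

def age_to_range(age, cutoff=18):
    # Illustrative instance of f_1: binarise Age into {young, adult}.
    return "young" if age < cutoff else "adult"

def va_bindings(df, used_col, new_col, f):
    # One binding (and hence one provlet) per row: F, I, V describe the
    # used entity, F', J, V' the generated entity, as in the VA template.
    return [
        {"F": used_col, "I": i, "V": row[used_col],
         "F'": new_col, "J": i, "V'": f(row[used_col])}
        for i, row in df.iterrows()
    ]

D = pd.DataFrame({"Age": [12, 34, 61]})
for b in va_bindings(D, "Age", "AgeRange", age_to_range):
    print(b)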
§.§ Template binding rules

We define templates for each of the five core operators, shown in Figure <ref>, and the corresponding binding generators for the used, generated, and invalidated sets of entities. Note that we do not need to create complete provlets for all entities in any given output dataset. If f(D) does not change d_ij, then no provenance record needs to be generated. However, if f(D) discards elements of D, then a provlet containing an invalidation relationship is required. Whenever a new entity is generated, i.e., when f(D) creates a new or updated value in d_ij, a complete provlet is also required. In other words, we only require provenance statements that capture different versions of elements in the dataset.

§.§.§ Data reduction, selection

Data reduction invalidates existing entities. For selection, D' = σ_C(D), the bindings specify that an entire row i is invalidated whenever condition C is False when evaluated on that row. This affects all features X ∈ S: [⟨F = X, I = i⟩ | X ∈ S, i: 1 … n, C(D_i,*) = False]. A wasInvalidatedBy relationship is established between each of these entities and a single Activity, representing the selection.

§.§.§ Data reduction, projection

Conditional projection D' = π_C(D) invalidates all elements in column X ∈ S whenever C returns True when evaluated on the elements of X: [⟨F = X, I = i⟩ | X ∈ S, i: 1 … n, C(D_*,X) = True]. Similar to the selection, here too a wasInvalidatedBy relationship is established between each of these entities and a single Activity, representing the projection.

§.§.§ Vertical augmentation

_f(X):Y takes a set X ⊂ S of features and adds a new set Y of features, Y ∩ S = ∅, to D', as shown in Ex. <ref>. The provenance consists of n PROV documents, one for each row i of D, and in each such document the entities for D_i,X_m, X_m ∈ X, are used to generate entities for the new features Y_h ∈ Y. Thus, the bindings are defined as follows. For i: 1 … n: used entities: [⟨F = X_m, I = i, V = D_i,X_m⟩ | X_m ∈ X]; generated entities: [⟨F' = Y_h, J = i, V' = f(D_i,X)⟩ | Y_h ∈ Y]. These entities are then connected to a single Activity, as shown in Figure <ref> and in the examples (Figs. <ref>, <ref>), using Used and wasGeneratedBy relationships. For each pair of used and generated entities having the same index on each side (i.e., where var:I = var:J after template instantiation), a wasDerivedFrom relationship is also added, to assert a stronger relationship (derivation occurs through the Activity that connects the entities).

§.§.§ Horizontal augmentation, grouping and aggregation

The _X:f(Y) operator groups records according to columns X ⊂ S, producing a list G = [g_1 … g_h] of h groups. Then, for each g_i ∈ G, it computes f(Y) from the records in the group, producing a new record containing the aggregated value in column A, the values that define the group in each column X_m ∈ X, which we denote val(X_m, g_i), and null in all other columns (see Ex. <ref> in Section <ref>). Thus, the operator produces h records, and we let rows(G) = [n+1, n+2, …, n+h] denote their new row indexes in the dataset. The corresponding provenance template and binding rules are similar to those for Vertical Augmentation (Figure <ref>), but with some differences, and are best illustrated initially using an example.
Consider the following dataframe:

     X_1   A    B
  1  x_1   10   b_1
  2  x_2   30   b_2
  3  x_1   20   b_3
  4  x_2   40   b_4
  5  x_1   30
  6  x_2   70

and the grouping operator _X:f(Y), where X = [X_1] and f(Y) = ∑A, i.e., the sum of the A values is computed for each group. The set of provlets that represent the derivations of the elements in the new rows 5 and 6 is depicted in Fig. <ref>. Values x_1, x_2 in rows 5 and 6 identify the groups and are derived from the corresponding values in rows 1,3 and 2,4, respectively. Similarly, the two values in column A are obtained by adding up the corresponding A values in the same groups of rows (1,3 and 2,4). Finally, the null values in column B are generated by the operator, but their values are not derived from any inputs.

Generating these provlets requires maintaining the association between each group (group 1 in row 5, and group 2 in row 6) and the corresponding input rows (1,3 and 2,4). Implementations can achieve this in different ways. Formally, we assume that each group g_i ∈ G maps to a set of "group input" rows, i.e., ginput(g_i). In our example, we have ginput(g_1) = {1,3} and ginput(g_2) = {2,4}. Then, the binding rule for group g_i in row i and elements in each of the columns C ∈ X can be written as follows. For generated entities: ⟨F = C, I = i, V = D_i,C⟩. For used entities: [⟨F = C, I = j, V = D_j,C⟩ | j ∈ ginput(g_i)]. Similarly, for the values in columns C' ∈ Y, the rule for generated entities is ⟨F = C', I = i, V = D_i,C'⟩ and, for used entities, [⟨F = C', I = j, V = D_j,C'⟩ | j ∈ ginput(g_i)]. Finally, the rule for the null values applies to the columns C'' = S ∖ Y ∖ X, and these entities only have the generation side: ⟨F = C'', I = i, V = Null⟩.

Note that aggregation operators reduce the granularity of the derivations. Typical Value Transformation operators, for instance a normaliser, map each input element to a corresponding output element. Aggregations, on the other hand, produce a "provenance bottleneck" where n rows are mapped to m < n rows, with m the number of groups, because the provenance of any "downstream" dataframe that makes use of the groups will have to include one of the group rows. In practice, aggregations may produce a pipeline pattern as shown in Fig. <ref>, where some of the downstream operators use the aggregations, and the provenance of new dataframe elements produced by these operators will map to grouped rows and not to the upstream un-aggregated dataframes, leading to some loss of granularity. In the figure, the bottom provenance dependencies (thick dotted lines) for the elements of the downstream dataframes must include some of the group rows, and those in turn are derived from each of the inputs. Note also, however, that the loss of granularity depends on the number of groups. In the extreme case where the grouping operator produces a single group consisting of all input rows, for instance, the result is a provenance graph where all inputs contribute to the grouping, and all outputs depend on the grouping, producing a complete bottleneck.

§.§.§ Data transformation

τ_f(X) takes features X ⊂ S and computes derived values, which are used to update elements of D, but without generating new elements. The bindings reflect such an in-place update, but as the new value for each element is defined by f(), we assume for simplicity that all values are updated, although, in reality, some will stay the same, as shown for instance in Ex. <ref> (imputation).
The resulting bindings reflect this many-to-many relationship, where (potentially) all values in a column X_m ∈ X are used to update (potentially) all values in that same column (and this applies to each column). Thus, the provenance document consists of |X| provlets, one for each column, with bindings defined as follows. Used entities: [⟨F = X_m, V = D_i,X_m, I = i⟩ | i: 1 … n]. Generated entities: [⟨F' = X_m, V' = f(D_*,X_m), J = i⟩ | i: 1 … n]. Used and wasGeneratedBy relationships, mediated by an Activity, are created between each generated entity and all of the used entities having the same X_m, along with the corresponding wasDerivedFrom relationships.

It is worth clarifying one potential limitation that occurs when the data derivation operator contains parameters whose values are set by inspecting the input dataframe. In our approach, these values are not "used" by the operator, despite the fact that, in reality, the operator is input-dependent. As an example, consider a Scaling operator, which scales each value in column X using a range that is defined by the min and max values found in X. According to the template just defined, this operator produces a set of 1:1 derivations, namely from each output value in X back to its corresponding input value. However, in the current approach the fact that the Scaler depends on the input values of X, which it has inspected, is not captured.

§.§.§ Join

In Section <ref> we introduced a join operator: D' = D^L ^t_C D^R, where condition C may involve any columns F ⊂ S^L ∪ S^R. As an example, consider S^L = [A, B, C], S^R = [A, C, D, E] and C ≡ D^L.A = D^R.A and D^L.B = D^R.D, thus F = {D^L.A, D^R.A, D^L.B, D^R.D}. Let D^L_i = [x, y, c_1], D^R_j = [x, c_2, y, e] be two tuples that contribute a result tuple D'_h = D^L_i ^t_C D^R_j = [x, y, c_1, x, c_2, y, e]. Note that D^L_i, D^R_j correspond precisely to the witness tuples in the why-provenance of D'_h,f, for some attribute f ∈ F, as defined in <cit.>. The why-provenance of D'_h,f can be expressed formally in terms of the two contributing tuples, i.e., using the polynomial notation proposed in <cit.>. However, here we are interested in the more granular derivations at the level of single values, rather than of entire tuples.

To express the fine-grained provenance of a value D'_h,f in the result, we first consider the values D^L_i,f, D^R_j,f, f ∈ F, used by the join operator to evaluate C: used = {D^L_i,f ∪ D^R_j,f | f ∈ F} = [D^L_i,A = x, D^L_i,B = y, D^R_j,A = x, D^R_j,D = y]. We apply template (1) in Fig. <ref> to assert that each value in D'_h,f was generated by the join operator and that the operator used all the values in the used set. This is achieved using the following binding generator: for f ∈ F: if f ∈ S^L: ⟨F = f, NDX = i, H = h, V = D^L_i,f, V' = D'_h,f⟩; if f ∈ S^R: ⟨F = f, NDX = j, H = h, V = D^R_j,f, V' = D'_h,f⟩. Secondly, we express that each value in the result was derived from the corresponding value in one of the two operands, and that the derivation is supported by a usage/generation pair, as shown in template (2) in Fig. <ref>. Note that this template covers both the case where a feature is used as part of an equijoin condition, such as A in the example, and also the case where null values are generated as part of an outer join.
Template 2 is instantiated using the following binding generator: for f ∈ S^L: ⟨F = f, NDX = i, H = h, V = D^L_i,f, V' = D'_h,f⟩; for f ∈ S^R: ⟨F = f, NDX = j, H = h, V = D^R_j,f, V' = D'_h,f⟩. Each of the two templates generates a PROV fragment, and these are then combined by virtue of their common entities and activity (the join operator). Fig. <ref> (where the actual values of D'_h,f are only shown in the first provlet, to avoid overloading the figure) shows the provenance fragments for the values D'_h,f for a generic tuple h and for each f. Note in particular that the generation relationships in templates 1 and 2 do not result in multiple generation arcs in the final provenance, as those have identical source and sink nodes (i.e., the entity representing the value and the activity representing the join).

§.§.§ Append

Consider again Example <ref> in Section <ref>, where a dataset D^L with schema S^L = [CId, Name] is appended to D^R with schema S^R = [CId, Gender, Age, Zip, Name]: D' = D^L  D^R. Let n_1, n_2 be the number of rows in D^L and D^R, respectively. Observing that the order of the rows in the operands is preserved in the result, we identify four types of output values D'_i,f: (1) values derived from a corresponding D^L_i,f, when i < n_1 and f ∈ S^L; (2) values derived from a corresponding D^R_i,f, when i ≥ n_1 and f ∈ S^R; (3) null values when i < n_1 and f ∉ S^L; (4) null values when i ≥ n_1 and f ∉ S^R. A derivation relationship is created for cases (1) and (2), supported by a corresponding generation-usage pair of relationships, with the operator as the mediating activity; for cases (3) and (4), only a generation relationship is created. Fig. <ref> shows the PROV template for this pattern. The binding generator function for derivations, generation, and usage of copied values is defined as follows: for i: 0 … n_1 - 1: if f ∈ S^L then ⟨F = f, NDX = i, V = D'_i,f⟩ (template (1) applies); for i: n_1 … n_1 + n_2 - 1: if f ∈ S^R then ⟨F = f, NDX = i⟩ (template (2) applies).

§ PROVENANCE GENERATION

The combination of provenance templates and corresponding binding rules, embodied by the prov-gen functions just presented, provides a formal description of the provenance semantics associated with each of the core operator classes: data reduction, augmentation, transformation, and fusion. In this section, we present a concrete approach to provenance generation that is grounded in this formalisation.

§.§ The approach

Provenance generation operates by (i) observing the execution of operators that consume and generate datasets, (ii) analysing the value and structural changes between the input(s) and output datasets for that command, and (iii) based on the observed change pattern, selecting one or more of the templates described in the previous section, to capture the dependencies between the elements of the datasets that have changed. This approach ensures that the topology of the resulting provenance graph is consistent with the templates, but it also broadens the scope of the operators for which provenance is generated, namely to any operator that transforms an input into an output dataset. As a simple example, consider an imputation operation that causes some previously null values to be set to 0, in some (or all) of the columns. This value-change pattern is easily recognised and is used to trigger provlet generation using the appropriate template, in this case data transformation (cf. <ref>). More general transformation patterns can be captured using more than one template, and by composing the resulting provlets.
For example, consider a pipeline like the following, in which D_a, D_b, and D_c are the input datasets and f is an imputation function over a feature K of D_a:

  D_1 = τ_f(K)(D_a);  D_2 = D_b ^outer_K_1=K_2 D_c;  D_3 = D_1  D_2

Its execution results in a collection of three provlets, each accounting for the dependencies between the elements of the datasets (1) D_1 and D_a; (2) D_2 and D_b, D_c; and (3) D_3 and D_1, D_2, respectively. At the end of the execution, these provlets are consolidated into a single, final provenance document that accounts for all transformations across the entire pipeline. In this example, this will create a graph of dependencies where elements of D_3 are linked through derivation relationships to elements of D_a, D_b, D_c.

In the cases above, the change analysis identifies the appropriate template without the need for syntactic analysis of the source code. In particular, these examples illustrate simple provenance generation, so called because provlets are independently generated for each input/output dataset pair. More complex composite provenance generation can also be achieved, which captures the provenance of an operation implemented by a sequence of commands. We illustrate this in the next section for the case of the one-hot encoding transformation.

§.§ Change analysis algorithm

We now present the dataset change analysis algorithm that is responsible for generating each of the provlets. The algorithm considers unary and binary operators separately, with help from lightweight code instrumentation. In the following, we only discuss the case of unary operators, as a complete example of join and append provenance has been provided earlier. Implementing join provenance efficiently presents new challenges, however, and these are discussed separately below (Section <ref>). Details of the code instrumentation required to support provenance generation are provided in the next section, along with details of the Observer pattern <cit.> used to monitor changes in datasets through execution.

The algorithm looks at changes in either shape or values between the input and output datasets, denoted D and D', respectively. The cases listed below are summarised in Figures <ref> and <ref>. Shape changes are detected simply by comparing the number of rows m, m' or the number of columns n, n' in D, D'. Value changes are detected by reviewing the values within each column.

Shape changes. When m' > m or m' < m, the horizontal augmentation (cf. <ref>) or reduction by selection (cf. <ref>) templates are applied, respectively. Adding columns is interpreted similarly, i.e., n' > n triggers the application of the Vertical Augmentation template (<ref>). However, this condition also causes a list to be created which contains the added columns. If D' is then used as input to the next command, producing D'', and it is the case that n'' < n' while the list of added columns is non-empty, then this is interpreted as a sequence where first a set of columns is added, and then other columns are removed. This enables the provenance generator to infer dependencies between such columns.
Derivation relationships are thus added accordingly to the provlet that represents the provenance for D, D', and D''. This composite behavior makes it possible to detect patterns like one-hot encoding, which both adds and removes columns but does so using more than one operator, as shown in Figure <ref>. The following sequence of operations, in which an input dataset D is first extended by encoding with the function h the values occurring in the feature B, and then that feature is deleted, is routinely used to achieve the result.

[one-hot encoding] D_1 = _h(B)(D);  D_2 = π_{A and the features not occurring in D}(D_1).

After the first operation, the generator would only know that a number of columns have been added (one for each value in column B) but would be unable to determine any other dependencies. A list is created with these column names, which will only exist within the scope of the next command. Executing the second operation results in column B being removed. Rather than two connected provlets, here a single provlet is generated, which accounts for the change in dataset structure, and where derivation relationships are added between each new column and column B (which is then itself invalidated in the provlet). In Fig. <ref> we refer to this as the composite data transformation template.

[Provenance of one-hot encoding] Consider the transformation in Fig. <ref>, implemented by the operations in Ex. <ref>. After execution of the first operation, a VerticalAugmentation activity is created to account for the generation of the new features. However, at this stage we do not know which elements of the input have been used, thus we are also unable to add derivation relationships. After executing the second operation, feature B has been removed, and in the provlet this is recorded by introducing a new ConditionalProjection activity that invalidates B. Additionally, however, the composite variant of data transformation mentioned above is applied, resulting in a new usage relationship for the VerticalAugmentation activity, as well as derivations (wasDerivedFrom) between each new feature and B. The complete provlet thus includes the statements: wasInvalidatedBy ConditionalProjection, wasDerivedFrom, wasDerivedFrom.

Value changes. This analysis considers one column at a time. If some or all of the values have changed in a column C, the data transformation template is applied, with the assumption that there is a one-to-one dependency between each new value d'_ij of D' and the corresponding original value d_ij of D, and this is mediated by the function represented by the operator, as described in Section <ref>. The case of value imputation is handled separately. This is detected simply by comparing the number of null values in D to those in D'. Imputation has occurred when the nulls have been reduced. In this case, the data transformation template is used (<ref>), where by default each value in an imputed column C in D' is assumed to be derived from all values in the same column C in D. Notice that the generator does not have further information to make the derivation more granular. Also, when multiple columns have been imputed, each of those is considered independently from the others. This may miss derivations, again for lack of information. For instance, using the MICE algorithm <cit.> will impute multiple columns, where each new value is derived from values in multiple source columns. This generalisation is not captured by the algorithm.
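The decision logic just described can be summarised in the short Python sketch below. It is given only as an illustration of the change analysis: the template names returned by the function, and the order of the checks, are simplifying assumptions and do not reproduce the exact logic of our implementation.

import pandas as pd

def classify_change(df_in: pd.DataFrame, df_out: pd.DataFrame) -> str:
    # Compare shapes and null counts between input and output dataframes
    # to pick the provenance template to instantiate.
    m, n = df_in.shape
    m2, n2 = df_out.shape
    if m2 > m:
        return "horizontal_augmentation"
    if m2 < m:
        return "reduction_by_selection"
    if n2 > n:
        return "vertical_augmentation"
    if n2 < n:
        return "reduction_by_projection"
    # Same shape: look for value changes, one column at a time.
    common = [c for c in df_in.columns if c in df_out.columns]
    for c in common:
        if df_out[c].isna().sum() < df_in[c].isna().sum():
            return "imputation"           # nulls reduced in column c
        if not df_in[c].equals(df_out[c]):
            return "data_transformation"  # in-place value update
    return "no_change"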
§.§ Benefits and limitations of the change analysis approach

The approach of using dataset change as the trigger to choose the provenance template and generate the provenance information has two main advantages. Firstly, it makes it possible to capture provenance when the internal logic of the operators is not accessible to the observer. This has been referred to as the "black-box" problem by the provenance community <cit.>. Secondly, it enables capturing the provenance of operator compositions. In the presentation of this work, we mainly describe the provenance generated for a single operator execution. However, by looking only at dataset change, we can allow multiple operators to execute and generate the provenance record for this group of operators. An example is the "stateful" shape change analysis above, which keeps track of data transformations across more than one operator, in order to accurately infer derivation dependencies.

One limitation intrinsic to this approach arises in complex cases, such as when UDFs are employed. In this case, while the algorithm can detect which tuples have changed, it cannot identify which inputs caused the change, thus it must assume by default that all inputs were used.

The ability to group operators is beneficial for many reasons. Provenance is often unwieldy, capturing interactions and relationships that are meaningless for later use. Past works utilize variations of the "composite" notion to help with various tasks. ZOOM <cit.> used the concept of composite step-classes to develop a notion of user views, allowing a user to more easily view and understand a provenance graph. More recently, Ursprung <cit.> captures provenance at different composite levels based on the capture mechanism that can be deployed in a given situation. In our work, the developer can choose a composite granularity that is appropriate for their ultimate needs by having the provenance observer wait for other commands to complete and only look at the final dataset.

§ IMPLEMENTATION AND ARCHITECTURE

In this section, we provide details on (i) the data architecture used in the implementation, (ii) the code instrumentation required for the provenance generator to operate, and (iii) the efficient implementation of provenance capture and provlet composition.

§.§ System architecture

We have created a reference implementation of the approach to provenance generation illustrated in Section <ref> using the pandas/Python library, representing datasets as pandas dataframes [<https://pandas.pydata.org/>]. The overall architecture for provenance capture, storage, query, and visualisation is shown in Fig. <ref>. The Provenance Tracker automates the process of detecting and tracking the provenance of a user-defined data preparation pipeline. It includes a Prov-generator that produces the provenance of each operator in the pipeline by analyzing its effect on the underlying dataset. This is done at execution time by: (i) identifying the operator under execution on the basis of a series of comparisons between the input and the output datasets, (ii) executing the prov-gen function of the core operation that captures the identified operator, by suitably instantiating the function template, and (iii) storing the provenance data produced by the prov-gen function in an underlying repository. Since provenance data have a natural graphical representation, Neo4j, a world-leading, industry-grade, scalable graph database management system, is used for this purpose.
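To give a flavour of how the stored graph can later be interrogated, the sketch below issues a backward derivation trace with the official Neo4j Python driver. The node labels, relationship type, and property names used in the Cypher query are assumptions made for illustration and do not necessarily match the schema adopted by our implementation.

from neo4j import GraphDatabase

# Hypothetical schema: (:Entity {row, feature, operation}) nodes connected by
# [:WAS_DERIVED_FROM] relationships, following the PROV model.
QUERY = """
MATCH (e:Entity {row: $row, feature: $feature})
MATCH (e)-[:WAS_DERIVED_FROM*]->(src:Entity)
RETURN src.row AS row, src.feature AS feature, src.operation AS operation
"""

def trace_backwards(uri, user, password, row, feature):
    # Return every upstream value the given dataframe element was derived from.
    driver = GraphDatabase.driver(uri, auth=(user, password))
    try:
        with driver.session() as session:
            result = session.run(QUERY, row=row, feature=feature)
            return [record.data() for record in result]
    finally:
        driver.close()

# Example usage (assumes a running Neo4j instance):
# for rec in trace_backwards("bolt://localhost:7687", "neo4j", "password",
#                            row=5, feature="AgeRange"):
#     print(rec)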
Note that while provenance graphs are written to the database at runtime, i.e., while the script is executing, those writes can happen asynchronously, as the graph will only be queried "post mortem", after the script has finished executing. This also removes the need to consider a high-performance back end such as an in-memory database. The query interface allows the user to perform several types of analyses of the data provenance collected for a given data preparation pipeline, by translating a specific data-provenance exploration, chosen from a menu of a graphical interface, into a query expressed in Cypher, the query language of Neo4j, as will be illustrated in Section <ref>.

With the current reference implementation we have made several upgrades to the earlier version <cit.>: (1) the data representation format; (2) the storage method, as we now use the Neo4j graph database to store the final provenance graph natively, in contrast to <cit.>, where all provenance was serialized in PROV-JSON <cit.> (an interoperability format for the PROV data model); and (3) observing provenance from dataframes instead of specifically coding each pandas operator. We have chosen to represent provenance graphs using the standard PROV data model, to ensure some degree of interoperability across applications that want to use the provenance graphs. However, we are also aware that the standard PROV serialisations documented as part of the W3C specification are not concerned with space utilisation and query performance. Thus, our prototype implementation aims to strike a balance between performance and interoperability goals. In particular, all dataframe elements that have changed through an operator are materialised in the provenance (but no new entities are introduced to represent data items that have not changed). As all required entities are manifested in the graph, new provenance queries can be written simply using standard Cypher, with minimal knowledge of the internal representation. In a future implementation, one may introduce entities that represent entire tuples or columns, but with the understanding that queries must be aware of these optimisations.

§.§ Code instrumentation

A number of different approaches for capturing provenance from a running process have been documented in the literature. These range from intentionally placing capture calls within the notebook, to utilizing libraries to compare dataframes for automatic detection, to engaging interactively with the user. A key distinction concerns how much burden can be placed upon the user. Works such as <cit.> or <cit.> insist on no human involvement, while others believe that users should be invested in the process of improving their scripts; specifically, <cit.> or <cit.> allow users to enter provenance capture calls at appropriate places. Other contemporary systems, such as MLInspect <cit.>, require the development of specialised add-ons to the code (using a visitor pattern) to create observers. The level of developer involvement is still an open research question for the data science community. In this work we aim to implement the strategy described in Section <ref> while minimising user intervention.
This is achieved using an Observer-style software pattern that acts as a wrapper for dataframes and relies on a tracker object for deriving the provenance of a transformation based on the inspection of the input and output dataframes. From an operational standpoint, the Provenance Tracker is equipped with a subscribe function that allows users to subscribe to one or multiple dataframes for tracking their provenance. Basically, the invocation of this function returns the corresponding wrapped dataframes as objects that encapsulate nearly all of the methods inherited from the pandas DataFrame class, enabling provenance generation in a transparent way during dataframe transformations. The only required instrumentation is the creation of a tracker and the subscription of the dataframe(s) of interest. After this, the signature and syntax of the methods that operate on a dataframe remain unchanged; however, they now operate on the wrapped dataframes and invoke the internal provenance-capture functionality through the Provenance Tracker. Similarly, provenance generation for the join operation can be done without the need to invoke additional auxiliary functions.

The activity of the Provenance Tracker can be temporarily disabled to capture the provenance of an operation made of several basic data transformations. This is done by means of a dedicated tracking property, as in the case of the one-hot encoding sequence illustrated in Example <ref>: the changes made by the vertical augmentation on the original dataframe are produced, but taken into account only during the subsequent operation, when the property is set to `true'.

The case of join and append commands requires additional instrumentation, because the tracker needs to observe two operands. It also needs to be told which keys are used in the join operation, in order to correctly track dependencies from key values to non-key values, as described later in this section. A tracked (left) join is therefore performed by invoking the join through the tracker and passing the two operand dataframes and the join keys; the pattern for append is similar.

§.§ Composing provlets into a complete provenance document

A complete provenance document is produced by combining the collection of provlets that results from each instance of change analysis. Specifically, one provlet is generated for every transformation and every element in the dataframe that is affected by that transformation. The final document is composed of such a collection of provlets, where entity identifiers match across provlets, and never needs to be fully materialised, as explained shortly. Consider for instance the following pipeline: σ_C(_f_1(Age):ageRange(D)), where C = {AgeRange ≠ young} and D is the dataset of Example <ref>. The corresponding provenance document is represented in Figure <ref>. Applying vertical augmentation produces one provlet for each record in the input dataframe, showing the derivation from Age to AgeRange. The second step, selecting records for "not young" people, produces the new set of provlets on the right, to indicate the invalidation of the first record, as per the template at the bottom of Figure <ref>. Note that the "used" side on the left refers to existing entities, which are created either in the pipeline from the input dataset, or by an upstream data generation operator. Provlet composition requires looking up the set of entities already produced, whenever a new provlet is added to the document.
One simple way to accomplish this is by eagerly keeping the entire document in memory, along with an index of all entities, and by mapping each entity to the corresponding data element it represents. While this can be accomplished using readily available Python PROV libraries <cit.>, it does not scale well to the volume of entities required to represent large dataframes in cases where more than a handful of transformation operators are involved. Instead, we have followed a continual-append approach for provenance composition, in which each prov-gen function generates a set of provlets (in the worst case, one for each element in the dataframe) that are simply collected in a partial document and stored in the underlying repository. This allows the provenance to be collected quickly at the execution of each script and assembled later, minimizing execution dependencies and possible bottlenecks during the actual execution of the pipeline.

§.§ Efficient provenance generation

The overhead for provenance collection and composition described above can be minimised by using Python's multiprocessing library to parallelise the most expensive operations, observing that (i) dataframes can be split into chunks and provenance entities generated independently for each chunk, and (ii) the provlets generated by each parallel process can be independently written to disk, and then asynchronously inserted into Neo4j. Assuming that provenance graphs are only queried after the end of script execution, this provides a scalable back-end solution despite the potential limitations of Neo4j's centralised architecture. Using parallel processes to write provlets to disk is straightforward, as there are no dependencies amongst these processes. As an example, for one-hot encoding provenance consisting of about 2M entities, we observe a stable 70% improvement in writing times using 12 processes, relative to a sequential baseline. In practice, at most one chunk is created for each available CPU thread and allocated to one process. One slight complication is that assigning each generated entity to its corresponding dataframe element requires keeping track of the relative order of the chunks in the dataframe. This is accomplished using a queue (further details omitted). Unlike for write operations, here performance gains depend on the complexity of the specific operator, i.e., of the template used. Empirical results indicate an average of 60% improvement relative to the sequential baseline. Performance figures from our comprehensive evaluation are reported in the next section.

Joins present an interesting implementation twist to the provenance generation mechanism. A naive implementation of join provenance that creates instances similar to the template in Section <ref> would simply link each row of the output dataframe to the two input dataframes, using rules to infer the derivations for every single item in a row. Unfortunately, joins expose one of the problems of our approach, which looks at the input/output datasets and not at the operator itself. Because we are not linked directly to the join operator, which may or may not have the standard guarantees of a database system, reconstructing which rows in the input dataframes contributed to which output row takes effort. Consider the naive implementation of creating join provenance records using our data-observation approach. For every row in the output dataframe, the join key(s) must be identified within the data, and the actual data values in the remaining features noted.
Then, the input dataframes must be scanned to locate the key(s), and each row examined to determine whether it contains the appropriate data values to match the output dataframe row. Initial experiments indicate that this scan takes 0.07s per row. To overcome this problem, a more efficient implementation makes use of hash tables. Specifically, two hash tables with the same structure are generated, one for each input dataframe, having as key a hash obtained from each row and as value the original index of the row (Fig. <ref>). To derive the provenance, the output dataframe D is then decomposed into two dataframes obtained by projecting D on the columns of the input ones. Then, the two dataframes so obtained are hashed using the same function above (Fig. <ref>). This allows us to easily derive the provenance of each row of the join, as shown in Fig. <ref>.

§ EVALUATION

All experiments illustrated in this section were performed on a MacBook Pro with a 2.6 GHz 6-core Intel Core i7 and 32 GB of 2400 MHz RAM. We focus our evaluation on pandas operators for data cleaning and pre-processing; the approach can theoretically accommodate ML libraries such as scikit-learn, as shown in Table <ref>, although our reference implementation does not explore their use. Given the reference prototype nature of the implementation, the evaluation does not address the scalability and performance requirements of a production-grade system.

§.§ Analysis with real world pipelines

Datasets. In Table <ref> we have shown classic provenance queries in terms of data input and output. In order to evaluate whether we can answer those queries, we have captured data provenance in three real-world pipelines involving different types of preprocessing steps. The datasets are described in Table <ref>. The goal of the German Credit pipeline is to predict whether an individual is a good lending candidate. The Compas Score pipeline is aimed at predicting the recidivism risk of an individual, whereas the goal of the Census pipeline is to predict whether the annual income of an individual exceeds $50K. Table <ref> shows the preprocessing steps for each of these machine learning pipelines.

Capturing provenance. Our work focuses on fine-grained provenance and, as such, it turned out that all provenance queries in Table <ref> were answerable. Figure <ref> shows the impact of adding provenance capture to a pipeline. The percentage overhead of capturing provenance is large compared to executing the pipeline without any provenance at all. However, the actual time to capture the provenance itself is rather low: 1.8s for German Credit, 1.4s for COMPAS, and 28s for Census. These results are 10x faster than the times required for the generation of the same provenance in <cit.>. This improvement is mainly due to the new format used to represent the provenance, and to the backend used to store the data, as described in Section <ref>. As expected, provenance capture adds computational time to any pipeline execution. However, we note that certain complex operations have a larger impact than others. For instance, in the Census pipeline, the generation of the provenance for operation C2 (one-hot encoding of 7 different columns) requires 22ms. However, this operation introduces 90 new features while the number of records remains unchanged (32,561). Therefore, it generates 32,561 × 90 new provenance entities.
Similarly, operation A3 in the German Credit pipeline is a one-hot encoding that operates over 11 different columns and creates 38 new features. It follows that this operation creates 1,000 × 38 new provenance records. Operation B0 in the Compas Score pipeline, which selects 9 columns of data and removes 44 features, is also costly, as it generates 7,214 × 44 provenance records. All the other operations generate a limited amount of data provenance and, for this reason, they introduce a limited overhead. The size of the provenance generated for the various pipelines is as follows: German Credit 20 MB; Compas Score 71 MB; Census 1.04 GB.

For the join operations, we compared an implementation that makes use of the NumPy library of Python [<https://numpy.org/>] with our optimization based on hash tables, as described in Section <ref>. It turns out that, differently from the basic implementation, our technique scales very well with the size of the input dataset.

Querying Provenance. Provenance would be useless without the ability to query it efficiently. For this, we ran all the types of queries reported in Table <ref> over the Census dataset, expressing them in Cypher, the query language of Neo4j. Each query was run three times and the resulting time is the average of the three runs. Queries 2 through 6 operate over a single item, a single record, or a single feature, while the others operate over the entire dataset. For the former type of query, data items, records, and features have been chosen randomly from the output dataset each time the query is run. As shown in Figure <ref>, the basic provenance queries, particularly those that find or trace paths, are fast. The high-cost queries, such as Query 11, require additional processing beyond graph traversal. Recall from Section <ref> that queries 10 and 11 traverse the provenance graph and search for all future and past derivations of an element. Obviously, depending on the complexity of the operations, such queries can require a longer time.

§.§ A closer look at operators

The previous experiments look mainly at the performance of the methods for capturing provenance in real-world scenarios but do not test the overall scalability of the approach. To accomplish this, we use the TPC-DI benchmark <cit.> and DIGen, the data generator provided by TPC, to create source data and audit information at a larger scale and to characterize the behaviour of the reference implementation across a wider range of operators, including joins and appends. Specifically, we have created datasets of increasing sizes, as described in Table <ref>: datasets 1, 2, and 3 involve the trade fact table and the account dimension table, and have been used to measure the scalability of unary operators in Table <ref> (DR, FT, ST, IG, VT). Datasets 4, 5, and 6 involve the trade.txt and HoldingHistory.txt files and were used to measure the scalability of the join operator (JO in Table <ref>). Datasets 7, 8, and 9 involve the FINWIRE files and were used to measure the scalability of the append operator (AP in Table <ref>).
Figure <ref> shows how long and how much space it takes to capture and record provenance for each operation. The capture mechanism scales rather well with the size of the dataset, and it turns out that pre-processing operations that only affect a small number of data values, such as Instance Generation (IG), are fast. Value Transform (VT) and Imputation (I), in this particular evaluation setup, are also fast, as they only operate over a small number of items. On the other hand, the operations that generate more provenance, such as Feature Transformation (FT), Space Transformation (ST), and Dimensionality Reduction (DR), take more time. In particular, ST needs to create provenance data for every new value in the new column. Join (JO) and Append (AP) operations require more time, as they need to generate a quite large quantity of provenance. In addition, JO is more costly, as it operates over two input tables. The fact that provenance capture scales gracefully with the dataset size is confirmed by another experiment, whose results are reported in Figure <ref>, in which we have executed the same types of operations as in Table <ref> by just varying the number of records of a fixed input (the Census dataset). Since the evaluation setup here is different, the various operators exhibit a different behavior in terms of relative performance, but their computational time remains quite low and grows linearly with the dataset size.

§.§ Use Case Analysis

Table <ref> contains a collection of real-world scenarios in which data scientists try to understand what is happening within a machine-learning pipeline. These use cases have been gathered from the Data Science Stack Exchange [<https://datascience.stackexchange.com/>] (DSSE) by selecting questions about the construction of a data preparation pipeline using the Orange framework. The provenance queries that can provide support to these issues refer to those in Table <ref>, in which, for each query, the input data and the expected output that can help the developer debug the pipeline are reported. To highlight how the fine-grained provenance captured with our approach can be used to answer one of these questions, consider, for instance, the UC8 use case. In this scenario, the user is struggling with a suspiciously high accuracy of a model. Ultimately, this is caused by an imbalanced input dataset. Using the provenance query Impact on Feature Spread from Table <ref> on the input dataset, it is possible to identify the change of feature spread after a pre-processing operator that rebalances the dataset.

§.§ Provenance Exploration

Unlike many provenance systems, which focus on the presentation and navigation of the provenance graph, we have developed a tool (whose code is publicly available on GitHub) in which the provenance graph is mainly used as a backbone to explore and identify problems within the pipeline through a user-friendly interface, which does not require the specification of complex queries over the data provenance.
This is done by automatically extracting, from the provenance and other metadata, useful information on the changes operated by the individual operations on the input dataset(s). An example of this kind of interaction is shown in the GUIs of our tool reported in Figures <ref> and <ref>, respectively: depending on the type of operator that was applied, the data scientist can "zoom in" on a transformation of interest (bottom of the figures) and inspect the "before/after" effect of its execution, either at the level of the values within a column, in the case of a local transformation, or at the level of the entire dataset, in the case of a global transformation. A local type of transformation is illustrated in Figure <ref>, where a data transformation operation, which modifies the values of a column, is represented in the provenance fragment at the bottom. The user is then able to navigate through the retrieved provenance, identify the pre- and post-states of the affected column, and visualise their differences in terms of summary statistics (top right) and value distributions (center); optionally, each value can be inspected (top middle). In contrast, Figure <ref> shows the effect of an imputation step that operates globally. As this may change more than one column at a time (for instance, using multiple imputation), here the GUI displays salient differences at the dataset level. We can see, for instance, that the operation has not changed the number of rows and columns of the dataset (top left), but the imputation has updated the content of several columns (col2, col4, col5, col6, see top middle) and has altered the percentage of null values (bar chart). We can also see the changes in the correlation between each pair of columns in the dataset before and after the operation is performed.

§.§ Comparison to other provenance collection systems

There are many provenance systems that can be deployed to capture the provenance of workflow-like executions. In this section, we look at some of the main players and compare them to the provenance captured in this work. Because the implementation in this work was a reference implementation for exploration, not production deployment, we feel that an execution benchmark between the systems would be uninformative.

Perm <cit.>. Perm uses query rewrites to add and propagate provenance attributes to the output of the original query. It can capture and propagate provenance for ASPJ queries and set operations. It is implemented on PostgreSQL and tested against TPC-H, with an overhead on TPC-H queries of 3-4x. The queries outside of this norm include very complex queries with aggregation, such as an aggregation over a join on 8 tables with a grouping on a functional expression. Perm allows lazy and eager computation of provenance, SQL query facilities, and support for external provenance, and it outperformed previous approaches by a factor of 30. While the execution of Perm is impressive, it fundamentally relies on a technology that is not appropriate for the notebook-based problem addressed in this work.

MLInspect <cit.>. The MLInspect system uses Python's inspect module or monkey patching to identify function calls within Python scripts and build a DAG of relationships and interactions in the pre-processing pipeline. This run-time representation is updated as the developer changes the scripts based on the standard dataframe operators. A user can annotate tuples and specify the inspections that need to occur (e.g., inspect for statistical parity of a protected group).
As the script is executed, this DAG is stepped through, and each operator is passed for inspection. The focus of MLInspect is to analyze data distributions after operators, based on the pre-specified inspections. Provenance at the tuple level is supported through the adaptation of user annotations, by recording the pre-assigned tuple id and the operator applied. Our work has a very different focus and, as such, the provenance requirements are different. In MLInspect, provenance can be added at the tuple level and used to support data distribution change analysis; our work provides a much finer-grained provenance, at the attribute level, allowing for the debugging of specific value changes.

Vamsa <cit.>. The Vamsa system uses static analysis to build a syntax tree and identify inputs, parameters, and libraries. It then uses a knowledge base to provide semantic meaning to these items and log it in the provenance. However, Vamsa relies upon a pre-populated knowledge base which maps the set of functions identified in the code to semantic "operators". This approach restricts provenance generation to known operators. The Vamsa experimentation shows that its coverage ranges between 74.48% and 97.08%. Our approach relies on the inspection of the dataframe before and after an operation to identify the type of transformation that occurred (e.g., horizontal reduction), instead of relying upon a pre-created knowledge base. In addition to this collection difference, Vamsa identifies and stores provenance information at the dataset level. For instance, it will identify entire rows retained or dropped, and it can identify whether a column within a dataframe is used. However, it does not track individual changes to attributes. While these can be later derived by understanding what operators were applied to which rows or columns, this information is not innately stored. A combination of Vamsa and this work would be interesting future work, in which Vamsa is used to identify the operators known to its knowledge base and, for the remainder, DPDS identifies what is happening via dataframe changes and provenance templates.

§ RELATED WORK

This paper substantially advances previous work <cit.> by: (i) extending the set of core operations, including methods for combining different datasets, to any operator that modifies a dataframe; (ii) replacing the manual instrumentation at the script level required of the analysts with a method that identifies the provenance of most of the operators through dataset change; (iii) adopting a graph-based data management system for storing and querying the collected provenance in an effective and efficient way; (iv) increasing the efficiency and scalability of the approach by an order of magnitude through multiple optimizations; and (v) performing experiments for empirical validation and a qualitative comparison to previous work.

Established techniques and tools are available to generate provenance, and provenance polynomials, through query instrumentation. However, these operate in a relational database setting and assume that queries use relational operators <cit.>. While we show how some of the pipeline operators considered in this work map to relational algebra, this is not true for all of them, so we prefer to avoid techniques that are tightly linked to SQL or to first-order queries <cit.>, as these would preclude other types of operators from being included in the future. We therefore consider this an unwise strategy in an "open world" of data pre-processing operators; consider, e.g.,
one-hot and other kinds of categorical data encodings. We also note that tools for provenance capture that operate on a database back-end, like GProm <cit.>, Smoke <cit.> and older ones like Post-it <cit.>, cannot be used in our setting. Interestingly, extensions to the polynomials approach have been proposed to describe the provenance of certain linear algebra operations, such as matrix decomposition and tensor-product construction <cit.>. While these can potentially be useful, it is a partially developed theory with limited and specialised applicability. Moving beyond relational data provenance, capturing provenance within scripts is also not new, but efforts have mostly focused on the provenance of script definition, deployment, and execution <cit.>. Specifically, a number of tools are available to help developers build machine learning pipelines <cit.> or debug them <cit.>, but these lack the ability to explain the provenance of a certain data item in the processed dataset. Others link provenance to explainability in a distributed machine learning setting <cit.>, but without offering specific tools. Amazon identifies that there are common and reusable components to a machine learning pipeline, but that there is no way to track the exploration of pipeline construction effectively, and calls for metadata capture to support reasoning over pipeline design <cit.>. Vamsa <cit.> attempts to tackle some of these problems by gathering the provenance of pipeline design. However, the resulting provenance documents contain information such as the invocation of specific ML libraries, obtained by way of automated script analysis, rather than data derivations. Some systems are designed to help debug ML pipelines. BugDoc <cit.> looks at changes in a pre-processing pipeline that cause the models to fail, where high-level scripts and execution orders are used to identify bad configurations. Others provide quality assurance frameworks <cit.> or embedded simulators to estimate the fairness impacts of a particular pipeline <cit.>. Again, however, these are not geared for deep data introspection. Priu <cit.> helps users understand data changes, particularly deletions, that are used in regression models. Unfortunately, this work only tracks deletions and not additions or updates to data. Recently, <cit.> have utilized provenance to understand the changes in data distribution in the ML pipeline using predefined “inspections” that look at the data at specific operators within the pipeline, which supports the reason for undertaking this work and which we expand by unobtrusively capturing provenance from any operator. Meanwhile, <cit.> combines system-level provenance information with application-level log files to recreate the provenance of data science pipelines without impacting the pipeline developer. Other tools, like NoWorkflow <cit.>, record the execution of generic (Python) scripts but fail to capture detailed data provenance. NoWorkflow has been combined with YesWorkflow <cit.>, which provides a workflow-like description of scripts, but again without a focus on data derivations. A further class of tools instrument scripts that are specifically designed for Big Data processing frameworks: <cit.> (Hadoop), <cit.> (Spark). They provide detailed information, mostly for debugging purposes, but are restricted in their scope of applicability. Recently, a method for fine-grained provenance capture that is application-agnostic has been proposed <cit.>.
Here, provenance from the low-level OS through to high-level application-specific logs is merged to create a provenance record that contains the maximum information available for the minimum impact on developers. However, it is not obvious what fine-grained provenance can be extracted from such an approach, while our work provides a firm basis for the provenance information that should be captured. Interesting future work includes determining how much of the provenance we specify can be collected by <cit.>. Finally, the method proposed within Section <ref>, in which the change of the data is observed instead of the operator, is similar to techniques discussed in <cit.>. While Blount describes the general setup of inferring the provenance record based on identified changes in the data, our work provides a functioning implementation for a large class of operators.§ CONCLUSIONS AND FUTURE WORK In this work, we focus on fine-grained data provenance for machine learning pipelines irrespective of the pipeline tool used. Because a substantial effort goes into selecting and preparing data for use in modelling, and because changes made during preparation can affect the ultimate model, it is important to be able to trace what is happening to the data at a fine-grained level. We highlight several real use cases from the Data Science Stack Exchange (DSSE)[1] to motivate the need for fine-grained provenance. We identify the classic provenance queries that are needed to provide information to answer these use cases. We then identify a set of provenance templates that can be deployed across a set of machine learning pipeline operators and implement them. We depart significantly in this work from previous implementations within Python and ML environments by using observed changes in the data to determine the provenance. Based on observations of the changes between dataframes, we choose the appropriate template for provenance generation. We have tested our implementation for utility and basic performance with both classic real-world ML benchmark pipelines and TPC-DI. In order to investigate scalability issues with our design, we also use the TPC-DI generator and apply several operators over that data at scale. Our results indicate that we can collect fine-grained provenance that is both useful and performant. Future investigation is required into optimization techniques that aim at reducing the provenance data, using composite generation, to the minimum that is needed to support given provenance queries, as well as into methods for taking advantage of collected provenance data to support the design of new pipelines, in order to continue making provenance more efficient and useful. This work looks expressly at the pre-processing tools leading up to the machine learning black box; thus it does not track provenance models for the trained data, e.g., between predictions and training data. However, this work has been used by <cit.> to create an entire tracking of data from pre-processing through deep learning. Future work in this area includes understanding the granularity of provenance required for users of deep learning systems. | http://arxiv.org/abs/2310.18079v1 | {
"authors": [
"Adriane Chapman",
"Luca Lauro",
"Paolo Missier",
"Riccardo Torlone"
],
"categories": [
"cs.DB",
"68",
"H.1; H.2"
],
"primary_category": "cs.DB",
"published": "20231027120022",
"title": "Supporting Better Insights of Data Science Pipelines with Fine-grained Provenance"
} |
Signs of the rates in the Lindblad master equations can always be arbitrarily determined and Andrew N. Jordan January 14, 2024 ======================================================================================== Accurate and efficient localization with a conveniently established map is a fundamental requirement for mobile robot operation in warehouse environments. An accurate AprilTag map can be conveniently established with the help of LiDAR-based SLAM. It is true that a LiDAR-based system is usually not commercially competitive in contrast with a vision-based system, yet fortunately for warehouse applications, only a single LiDAR-based SLAM system is needed to establish an accurate AprilTag map, whereas a large number of visual localization systems can share this established AprilTag map for their own operations. Therefore, the cost of a LiDAR-based SLAM system is actually shared by the large number of visual localization systems, and turns out to be acceptable and even negligible for practical warehouse applications. Once an accurate AprilTag map is available, visual localization is realized as recursive estimation that fuses AprilTag measurements (i.e. AprilTag detection results) and robot motion data. AprilTag measurements may be nonlinear partial measurements; this can be handled by the well-known extended Kalman filter (EKF) in the spirit of local linearization. AprilTag measurements tend to have temporal correlation as well; however, this cannot be reasonably handled by the EKF. The split covariance intersection filter (Split CIF) is adopted to handle temporal correlation among AprilTag measurements. The Split CIF (in the spirit of local linearization) can also handle AprilTag nonlinear partial measurements. The Split CIF based visual localization system incorporates a measurement adaptive mechanism to handle outliers in AprilTag measurements and adopts a dynamic initialization mechanism to address the kidnapping problem. A comparative study in real warehouse environments demonstrates the potential and advantage of the Split CIF based visual localization solution. § INTRODUCTION Robot localization is fundamental for mobile robotics and is involved in a large variety of practical applications <cit.>. Visual localization plays an important role in mobile robotics thanks to its commercial competitiveness. It is true that a visual localization system may be susceptible to light conditions and has a comparatively short perception range in contrast with LiDAR-based localization, which also plays an important role in mobile robotics. However, these limitations of visual localization are naturally overcome in the context of indoor mobile robotics such as warehouse applications, because in indoor environments artificial light conditions are usually stable and the need for long-range perception (such as that in outdoor applications) is rare. Visual simultaneous localization and mapping (SLAM) <cit.> is a special form of visual localization, usually without an a priori map. Despite its popularity and its merits for flexible exploration in an unknown environment, visual SLAM without an a priori map is unlikely to be a proper solution for a known environment of which an accurate map can be established. After all, an accurate map can largely facilitate visual localization. How to conveniently establish an accurate map is a concern for many researchers. Some researchers rely on artificial markers, such as binary BCH codes <cit.>, fiducial markers <cit.>, and AprilTags <cit.>.
The AprilTag, which is illustrated in the blue comment boxes of Fig. <ref>(a) and (b), is a popular and commonly used artificial marker, thanks to the ease of its deployment and to the richness of the information that it can convey. We also adopt the AprilTag and base the intended visual localization on an accurate AprilTag map established a priori. An AprilTag map may be established in the visual SLAM way <cit.>. However, considering the natural advantage of LiDAR-based SLAM systems over visual SLAM systems in terms of accuracy and robustness, we choose to establish the intended AprilTag map of the warehouse environment with the help of LiDAR-based SLAM. It is true that a LiDAR-based system is usually not commercially competitive in contrast with a vision-based system, yet fortunately for warehouse applications, only a single LiDAR-based SLAM system is needed to establish an accurate AprilTag map, whereas a large number of visual localization systems can share this established AprilTag map for their own operations. Therefore, the cost of a LiDAR-based SLAM system is actually shared by the large number of visual localization systems, and turns out to be acceptable and even negligible for practical warehouse applications. Once an accurate AprilTag map is available, visual localization is realized as recursive estimation that fuses AprilTag measurements (i.e. AprilTag detection results) and robot motion data <cit.>. AprilTag measurements may be nonlinear partial measurements. For example, the performance of the AprilTag detection module deteriorates as the view distance and the view angle increase <cit.>. Sometimes the AprilTag detection module may output distance measurements only, instead of complete pose measurements <cit.>. This is likely to happen when its sub-step of homography computation encounters the view singularity problem due to visual data errors and noises. Nonlinear partial measurements can be handled by the well-known extended Kalman filter (EKF) in the spirit of local linearization <cit.> <cit.>. AprilTag measurements tend to have (unknown) temporal correlation; however, this cannot be reasonably handled by the EKF. The split covariance intersection filter (Split CIF) <cit.> <cit.> is adopted to handle temporal correlation among AprilTag measurements. The Split CIF, which may be regarded as a generalization of both the (extended) Kalman filter and the covariance intersection filter, can reasonably handle both known independent information and unknown correlated information in the source data. It has been applied in a number of intelligent vehicle applications <cit.>. AprilTag measurements may have outliers as well. Discarding abnormal data whenever found is an easy choice, yet this may result in the loss of some “not so bad” and exploitable information <cit.>. Inspired by Gauss-Newton iterative methods <cit.> and innovation-based adaptive estimation methods <cit.>, a measurement adaptive mechanism is incorporated to handle outliers in AprilTag measurements. Besides, the robot may accidentally encounter the kidnapping problem, and this is handled by a dynamic initialization mechanism. The solution of split covariance intersection filter based visual localization with an accurate AprilTag map is proposed for warehouse robot navigation. It is realized as recursive estimation that fuses AprilTag measurements and robot motion data. It can naturally handle temporal correlation among AprilTag measurements and can handle AprilTag nonlinear partial measurements as well.
It incorporates a measurement adaptive mechanism to handle outliers in AprilTag measurements and adopts a dynamic initialization mechanism to address the kidnapping problem. A comparative study in real warehouse environments is presented to demonstrate the potential and advantage of the proposed solution of Split CIF based visual localization with an accurate AprilTag map.§ APRILTAG MAP ESTABLISHMENT Visual simultaneous localization and mapping (SLAM) <cit.> is nowadays popular and has its merits for flexible exploration in an unknown environment. However, visual SLAM without an a priori map is unlikely to be a proper solution for a known environment of which an accurate map can be established. After all, an accurate map can largely facilitate visual localization. LiDAR-based SLAM systems <cit.> possess a natural advantage over visual SLAM systems <cit.> in terms of accuracy and robustness, especially for outdoor application environments such as traffic environments and for large indoor application environments such as warehouse environments. So we choose to establish the intended AprilTag map of the warehouse environment with the help of LiDAR-based SLAM. More specifically, we rely on LiDAR-based SLAM to obtain accurate estimates of robot ego-poses during AprilTag map establishment, such that the a priori registration of AprilTags has “good anchors” for visual mapping. The process of AprilTag map establishment using LiDAR-based SLAM is illustrated in Fig. <ref>. For the LiDAR-equipped robot deployed for the sake of AprilTag mapping (we call it the mapping robot), camera-LiDAR co-calibration can be done a priori <cit.>. Equipped with the LiDAR and camera, the mapping robot can drive within the AprilTags' visible area and record LiDAR point cloud and camera image data while driving. We adapt the LiDAR SLAM method to acquire a trajectory of the robot, which can be used as the ground truth. At the same time, we know the calibration parameters of the camera and robot, as well as the AprilTag image data at every moment by AprilTag detection. Then, we can solve the pose of each AprilTag relative to the robot at each moment. Finally, the graph optimization module solves the relative poses among the AprilTags and transforms the AprilTag poses to the ground plane coordinate system to generate the map. Specifically, the LiDAR SLAM module can be any representative LiDAR SLAM framework, and here we use the lightweight and ground-optimized LOAM (LeGO-LOAM) <cit.> for its superior performance with limited resources. The AprilTag3 [https://github.com/AprilRobotics/apriltag][http://wiki.ros.org/apriltag_ros] algorithm is adapted for the AprilTag detection module due to its improved performance and detection efficiency compared with the previous version. It can also solve the relative pose of the AprilTag in the camera coordinate system. For the graph optimization module, the pose of the robot at each moment ^R_0 T _R_k is known and fixed. ^R_0 T _tag_m is the item to be optimized, which denotes the pose of the observed AprilTag {tag_m, m ∈ M_k}, where M_k is the ID set of the AprilTags observed at time k, relative to the robot coordinate system R. We use the GTSAM <cit.> optimizer to solve this pose graph optimization problem, i.e., to find the optimal poses of all AprilTags relative to the robot coordinate system at the starting moment, {^R_0 T _tag_m, m ∈ M}.
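The pose-graph step just described can be sketched with GTSAM's Python bindings. The snippet below is only an illustrative sketch under assumptions, not the exact mapping pipeline of this paper: because the LiDAR SLAM robot poses are treated as fixed, each detection is folded into a prior factor on the corresponding tag pose; the noise sigmas, the variable naming, and the input structures (robot_poses, detections) are hypothetical, and the factor/noise-model names assume a recent GTSAM Python release.

```python
import numpy as np
import gtsam

def build_tag_map(robot_poses, detections):
    """robot_poses[k]: ^{R_0}T_{R_k} as gtsam.Pose3 (from LiDAR SLAM, treated as fixed).
    detections[k]: list of (tag_id, ^{R_k}T_{tag}) gtsam.Pose3 pairs from the AprilTag detector."""
    graph = gtsam.NonlinearFactorGraph()
    initial = gtsam.Values()
    # Assumed measurement noise: 3 rotation sigmas (rad) followed by 3 translation sigmas (m).
    noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02, 0.02, 0.02, 0.05, 0.05, 0.05]))
    seen = set()
    for k, frame in enumerate(detections):
        for tag_id, T_rk_tag in frame:
            key = gtsam.symbol('t', tag_id)
            # ^{R_0}T_{tag} = ^{R_0}T_{R_k} * ^{R_k}T_{tag}; with fixed robot poses this
            # observation becomes a prior on the tag pose, one factor per detection.
            T_r0_tag = robot_poses[k].compose(T_rk_tag)
            graph.add(gtsam.PriorFactorPose3(key, T_r0_tag, noise))
            if tag_id not in seen:
                initial.insert(key, T_r0_tag)
                seen.add(tag_id)
    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    return {tid: result.atPose3(gtsam.symbol('t', tid)) for tid in seen}
```

The optimized tag poses can then be projected onto the ground plane to produce the 2D map used at run time, as described in the text.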
The nodes, factors and constraints of the graph optimization process are the ^R_0 T _tag_m, the pose of the robot coordinate system at each time R_k relative to that at the starting moment ^R_0 T _R_k, and the pose of the AprilTags relative to the robot coordinate system at each moment ^R_k T _tag_m, respectively, which can also be seen in the graph optimization module in Fig. <ref>. It is true that a LiDAR-based system is usually not commercially competitive in contrast with a vision-based system, yet fortunately for warehouse applications, only a single LiDAR-based SLAM system (namely the unique mapping robot) is needed to establish an accurate AprilTag map, whereas a large number of visual localization systems (namely dozens and even hundreds of warehouse robots that actually operate) can share this established AprilTag map for their own operations. Therefore, the cost of a LiDAR-based SLAM system is actually shared by the large number of visual localization systems, and turns out to be acceptable and even negligible for practical warehouse applications.§ SPLIT COVARIANCE INTERSECTION FILTER BASED VISUAL LOCALIZATION §.§ Split Covariance Intersection Filter The detailed derivation of the split covariance intersection filter (Split CIF) theory has been given in <cit.>. In real implementations, consider two estimates to be fused, {X_1, P_1} and {X_2, P_2}, where X and P are the estimated state and its covariance, respectively. X_1 is supposed to be a complete observation of the true state X_true, i.e., X_1 = X_true in general, whereas X_2 is a complete or partial observation, generally denoted as X_2 = H X_true, where H denotes the measurement matrix. The Split CIF decomposes each covariance into a correlated component (subscript d) and an independent component (subscript i). Therefore, the two data sources {X_1, P_1i + P_1d} and {X_2, P_2i + P_2d} can be fused by the Split CIF formulas as follows: P_1 = P_1d/ω_opt + P_1i, P_2 = P_2d/(1 - ω_opt) + P_2i, K = P_1 H^T (H P_1 H^T + P_2)^-1, X = X_1 + K(X_2 - H X_1), P = (I - KH) P_1, P_i = (I - KH) P_1i (I - KH)^T + K P_2i K^T, P_d = P - P_i, where ω_opt ∈ [0,1] is determined by solving a convex optimization problem (see <cit.> for details). In addition, the Split CIF can be regarded as a generalization of the Kalman filter: in the special case where P_1d and P_2d are zero, the above equations reduce to the Kalman filter. Concerning the instability of the AprilTag detection module, the quality of the measurements varies, and there exist many low-accuracy observations that can be regarded as outliers. We incorporate a measurement adaptive mechanism to handle outliers in AprilTag measurements. Observations of different quality are processed in different ways, which can also be called a soft-abandon mechanism. It is assumed that the predicted value is X_k+1/k and the current measurement is Z_k+1. When ||Z_k+1 - X_k+1/k|| is greater than a screening threshold, Z_k+1 is discarded.
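To make the fusion rule above concrete before detailing the adaptive mechanism, the following is a minimal numpy sketch of one Split CIF fusion step. It is not the paper's implementation: ω_opt is chosen here by a simple grid search minimizing the trace of the fused covariance, which only stands in for the convex optimization referenced above.

```python
import numpy as np

def split_cif_fuse(X1, P1i, P1d, X2, P2i, P2d, H, n_grid=99):
    """Fuse {X1, P1i+P1d} with {X2, P2i+P2d}, where X2 observes H @ X_true."""
    I = np.eye(len(X1))
    best = None
    for w in np.linspace(0.01, 0.99, n_grid):      # grid search stands in for the convex solver
        P1 = P1d / w + P1i
        P2 = P2d / (1.0 - w) + P2i
        K = P1 @ H.T @ np.linalg.inv(H @ P1 @ H.T + P2)
        X = X1 + K @ (X2 - H @ X1)
        P = (I - K @ H) @ P1
        if best is None or np.trace(P) < best[0]:
            Pi = (I - K @ H) @ P1i @ (I - K @ H).T + K @ P2i @ K.T
            best = (np.trace(P), X, Pi, P - Pi)
    _, X, Pi, Pd = best
    return X, Pi, Pd                               # fused state and its split covariance
```

With P1d and P2d set to zero matrices the loop collapses to an ordinary Kalman update, matching the special case noted above.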
When observations are not discarded but are of lower quality, we derive an adaptive observation noise model, which fits the relationship between the observation error and the view distance (denoted as L) and view angle (denoted as α) according to experimental error statistics, to dynamically evaluate the observation uncertainty through its noise covariance as follows: R_k+1 = 0.25 (L/α^2) ||X_k+1/k - Z_k+1||. Herein, the greater the deviation between the observation and the predicted state, the smaller its weight in the fusion, and its final impact on the fusion result is always limited. Compared with Gauss-Newton iterative methods or innovation-based methods, such an adaptive noise model is more flexible and can fundamentally suppress the influence of outliers to a greater extent. Also, lower-quality measurements are not discarded directly and may still contribute useful information without being completely wasted. Of course, if an observation is very unreasonable, we directly discard it in order to avoid unreasonable errors. Implementation of the Split CIF that incorporates the measurement adaptive mechanism in the context of AprilTag-based visual localization is given as pseudo code in Alg. <ref>. In this algorithm, X, P_i, P_d denote the state vector and its independent and dependent covariance matrices in the different estimation processes about time k. The Split CIF implementation uses the state evolution model, i.e., the system model, to obtain the predicted state with its split covariance {X_k+1/k, P_i,k+1/k + P_d,k+1/k}, as shown in lines 1-3 of the prediction part of the algorithm. u_k is the input control vector. G_x_k and G_u_k are the Jacobian matrices of the state evolution model g(.) with respect to the state vector X_k and the control variable u_k, respectively. Q_k is the known covariance of the process noise w_k. P_pre,i,k denotes the prediction model error, because the predictive model is a simplified vehicle model. For the pose of the AprilTag with identity information (ID) relative to the camera obtained from the AprilTag detection module, we use the ID to perform map matching to obtain the global pose of the AprilTag, and then use the calibration information of the camera and robot to obtain the 6-DoF global pose of the robot at each moment. From this 6-DoF pose provided by the AprilTag detection module, we use x_msr, y_msr, θ_msr as the new measurements Z_k+1, with the covariance of the observation noise v_k+1 in split form {R_i,k+1 + R_d,k+1}, to update the current state obtained from the prediction process. H_k+1 is the Jacobian matrix of the observation model h(.) in Z_k+1 = h(X_k+1/k) + v_k+1 with respect to the state vector X_k+1/k.§.§ Recursive Robot State Estimation With Accurate AprilTag Map Once an accurate AprilTag map is available (see details in Section II(A)), visual localization is realized as recursive estimation that fuses AprilTag measurements (i.e. AprilTag detection results) and robot motion data; the overall system is illustrated in Fig. <ref>. The Split CIF that incorporates the measurement adaptive mechanism is used for the fusion of AprilTag measurements and robot motion data. The installation positions of the IMU (three-axis accelerometer and three-axis gyroscope), the encoder (linear encoder and rotary encoder) and the vision sensor are shown in Fig. <ref>.
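Before walking through the individual steps of the recursive estimation, the gating and adaptive noise model of Eq. (2) above can be sketched as follows; the numeric screening threshold is an assumption, and the formula follows Eq. (2) literally.

```python
import numpy as np

def adaptive_measurement_noise(X_pred, Z, view_dist, view_angle, gate=1.5):
    """Soft-abandon gating plus the adaptive noise covariance of Eq. (2).

    Returns None when the observation is rejected as unreasonable; otherwise a
    diagonal matrix scaled by 0.25 * (L / alpha^2) * ||X_pred - Z||.
    """
    innovation = np.linalg.norm(X_pred - Z)
    if innovation > gate:                                   # clearly unreasonable: discard
        return None
    r = 0.25 * (view_dist / (view_angle ** 2)) * innovation
    return r * np.eye(len(Z))
```

The returned matrix plays the role of the independent part R_i,k+1 in the update of Alg. 1, so a measurement that deviates more from the prediction automatically receives a smaller fusion weight.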
The initialization can be realized by the dynamic initialization mechanism, which can also re-initialize the pose during pose estimation to resolve kidnapping. The prediction process is based on the derived forklift kinematic model with the motion data. The update process with complete AprilTag detection observations prevents the damage caused by low-quality observations via the presented observation noise model, and the extra partial-measurement update process can use the hard-to-use observations to avoid information loss. Moreover, for the latency that exists in the image processing process, we integrate back propagation into the recursive robot state estimation, which can effectively deal with the delay and guarantee the localization accuracy.§.§.§ Robot State Dynamic Initialization Before the recursive robot state estimation, the robot can obtain its initial global pose X_0 by dynamic initialization, which is calculated according to the first frame of AprilTag detection data matched with the map. The state covariance P_0 and the process noise covariance Q_0 can be set according to the motion and visual sensors or to the real application. Specifically, Q can be set according to the product parameters of the IMU and odometer that provide the control variables. In addition, the initial independent covariance P_i,0 can be the same as P_0, and the dependent covariance P_d,0 can be set to 0. The independent part of the observation noise covariance R_i,k+1 can be replaced by the adaptive observation noise model in Equation (2), and the initial dependent part can be regarded as zero in the complete-measurement update process. Meanwhile, in the partial-measurement update process, the initial independent part of the split observation noise covariance can be set according to statistics of the distance-detection error of the visual sensor in real applications, and the initial dependent part can be regarded as zero. Moreover, when the kidnapping problem occurs, that is, when at a certain moment in the navigation a wrong pose with a large deviation occurs because of an unexpected sudden collision, movement or human push, resulting in several consecutive estimated poses being discarded because they are inconsistent with this pose, the dynamic initialization mechanism is enabled. Herein, the measurement that follows the discarded measurements is used as the initial pose, and the state estimation process is restarted to ensure that the robot pose can return to normal.§.§.§ Robot State Evolution (Prediction) The robot state evolution model g(.) in Alg. <ref> reflects the relationship between the previous state X_k and the current state X_k+1, which can be expressed in discrete form according to the forklift kinematic model: x_k+1 = x_k + Δd cos(β + θ_k + Δθ/2) + ε_x, y_k+1 = y_k + Δd sin(β + θ_k + Δθ/2) + ε_y, θ_k+1 = θ_k + Δθ + ε_θ, where x, y, θ denote the robot pose, β is the steering angle of the front wheel, and the relationship between θ_k and β satisfies sin(θ_k)/L = sin(β)/(v·ΔT). Δθ = ω·ΔT and Δd = v·ΔT, where v and ω denote the robot velocity and yaw rate, ΔT denotes the time step, and ε_x, ε_y, ε_θ are the process noise terms that arise from the encoder or from wheel slipping.§.§.§ Robot State Update with AprilTag Complete Measurements The measurement adaptive mechanism that is incorporated into the Split CIF based visual localization solution consists of the adaptive AprilTag measurement noise model, details of which are formalized in Equation (2).
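Referring back to the forklift state evolution model given above, a minimal numpy sketch of the prediction step follows. It is a sketch under assumptions rather than the exact lines 1-3 of Alg. 1: the control Jacobian G_u is folded into a pre-computed process noise Q, and the additional model-error term P_pre is an assumed constant matrix added to the independent part.

```python
import numpy as np

def predict(X, Pi, Pd, v, omega, beta, dT, Q, P_pre):
    """Propagate the state and split covariance with the forklift kinematic model above."""
    x, y, theta = X
    dd, dth = v * dT, omega * dT
    heading = beta + theta + dth / 2.0
    X_pred = np.array([x + dd * np.cos(heading),
                       y + dd * np.sin(heading),
                       theta + dth])
    # Jacobian G_x of g(.) w.r.t. the state: only theta enters nonlinearly.
    Gx = np.array([[1.0, 0.0, -dd * np.sin(heading)],
                   [0.0, 1.0,  dd * np.cos(heading)],
                   [0.0, 0.0,  1.0]])
    Pi_pred = Gx @ Pi @ Gx.T + Q + P_pre   # independent part grows with process and model noise
    Pd_pred = Gx @ Pd @ Gx.T               # dependent part is only propagated
    return X_pred, Pi_pred, Pd_pred
```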
The predicted robot state is updated with the complete pose measurement Z_k+1 provided by the AprilTag detection module; refer to lines 1 to 7 of the part “Update incorporating the measurement adaptive mechanism” in Alg. <ref>.§.§.§ Robot State Update with AprilTag Nonlinear Partial Measurements The measurement matrix H_k+1 in the above-mentioned measurement model is easily obtained. However, because of extreme viewing distances and angles during image collection, the pixel detection error is magnified, leading to pose-solving singularity problems, especially for the estimated angle. As a consequence, AprilTag complete measurements cannot be obtained, and only AprilTag nonlinear partial measurements (namely AprilTag distance measurements) can be obtained in such cases. Directly discarding these AprilTag nonlinear partial measurements causes “information waste”, because they can still contribute to the robot state update. In the spirit of local linearization, on which the extended Kalman filter (EKF) relies <cit.>, the Split CIF also enables updates with nonlinear partial measurements. More specifically, suppose the warehouse robot has only a measurement of its distance to an AprilTag that has the globally registered position {x_g, y_g} in the accurate AprilTag map established a priori. The nonlinear partial measurement model is Z_k+1 = h(X_k+1) = √((x_k+1 - x_g)^2 + (y_k+1 - y_g)^2), and we can locally linearize it about the predicted state {x̃_k+1/k, ỹ_k+1/k} for the current period as Z_k+1 ≈ D_k+1 + C_k+1 (x_k+1 - x̃_k+1/k) + S_k+1 (y_k+1 - ỹ_k+1/k), where D_k+1 = √((x̃_k+1/k - x_g)^2 + (ỹ_k+1/k - y_g)^2), C_k+1 = (x̃_k+1/k - x_g)/D_k+1, and S_k+1 = (ỹ_k+1/k - y_g)/D_k+1. Equivalently, defining the pseudo-measurement Z̃_k+1 = Z_k+1 - D_k+1 + C_k+1 x̃_k+1/k + S_k+1 ỹ_k+1/k, we have Z̃_k+1 ≈ C_k+1 x_k+1 + S_k+1 y_k+1 = [C_k+1, S_k+1, 0] [x_k+1; y_k+1; θ_k+1], so that the linearized measurement matrix is H_k+1 = [C_k+1, S_k+1, 0]. Using this linearized distance measurement Z̃_k+1 instead of Z_k+1, as shown in Alg. <ref>, we can still update the predicted robot state X_k+1/k with AprilTag nonlinear partial measurements via the proposed framework.§.§.§ Back Propagation In addition, our solution also considers the situation in which the measurements received by the recursive estimation process are delayed, since the image transmission and processing of the AprilTag may be delayed by a few seconds, which is common in practical applications and needs to be resolved. The general solution that regards the error caused by the delay as random measurement error is not suitable for such a large delay. Therefore, we apply the back propagation (BP) using motion data technique <cit.> to compensate the delayed state estimate. Specifically, suppose the robot pose estimates (either via prediction or via update) at the current and previous time periods are X_k and X_k-1, X_k-2, ..., X_k-m, .... For example, we can treat the motion data period as the estimation time period. Besides, we store the previous motion data u_k, u_k-1, ... as well.
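Returning to the distance-only update derived above, the linearization reduces to a few lines of code. The sketch below builds the pseudo-measurement Z̃_k+1 and the measurement matrix H_k+1 = [C, S, 0] from the predicted state and the tag position registered in the map; variable names are illustrative only.

```python
import numpy as np

def linearize_distance_measurement(X_pred, tag_xy, Z_dist):
    """X_pred: predicted state [x, y, theta]; tag_xy: registered (x_g, y_g); Z_dist: measured range."""
    x_p, y_p, _ = X_pred
    x_g, y_g = tag_xy
    D = np.hypot(x_p - x_g, y_p - y_g)          # predicted distance D_{k+1}
    C = (x_p - x_g) / D                         # C_{k+1}
    S = (y_p - y_g) / D                         # S_{k+1}
    H = np.array([[C, S, 0.0]])                 # linearized measurement matrix H_{k+1}
    Z_tilde = np.array([Z_dist - D + C * x_p + S * y_p])   # pseudo-measurement
    return Z_tilde, H
```

The pair (Z̃, H) can then be passed to the same Split CIF update that is used for complete pose measurements.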
We can store previous pose estimates and motion data in dynamically adjusted queue (e.g. array) structures. Herein, the system prediction model is denoted as g(·), with {X_k+1, P_k+1} = g(X_k, P_k, u_k+1). Then, imagine that at the current time k we receive an AprilTag measurement result whose timestamp is at (or near) time period k-m; in other words, we have an AprilTag measurement Z_k-m. What the back propagation using motion data does is as follows: a) Fuse (i.e. update) X_k-m with Z_k-m via the Split CIF as if Z_k-m had been available at time k-m. Suppose the fused new result is {X_k-m,new, P_k-m,new}. b) Back project to update all pose estimates (together with their covariances) from time k-m to k in an iterative way as follows: {X_k-m+1,new, P_k-m+1,new} = g(X_k-m,new, P_k-m,new, u_k-m+1), {X_k-m+2,new, P_k-m+2,new} = g(X_k-m+1,new, P_k-m+1,new, u_k-m+2), ..., {X_k-1,new, P_k-1,new} = g(X_k-2,new, P_k-2,new, u_k-1), {X_k,new, P_k,new} = g(X_k-1,new, P_k-1,new, u_k).§ EXPERIMENTAL EVALUATION §.§ Experiment Conditions To verify the performance of the proposed solution of split covariance intersection filter based visual localization with an accurate AprilTag map, a comparative study of various experiments in real warehouse environments is performed. Here we implement an overall performance experiment using several comparative methods to evaluate our proposed system in terms of accuracy and robustness. In addition, we also provide more experimental results and analysis for the cases of kidnapping and image-processing delay to demonstrate the performance of the proposed mechanisms. The methods involved in the comparative study are as follows: Pure AprilTag based visual localization (TagSLAM): the robot achieves localization relying only on the AprilTag detection module, without fusing the motion data. Extended Kalman filter based visual localization that incorporates the measurement adaptive mechanism (EKF-Full): the robot resorts to the EKF for the robot state update with AprilTag measurements, whereas the implementation of the other parts is the same as that of the proposed method. Split covariance intersection filter based visual localization without incorporating the measurement adaptive mechanism (SCIF-nonMA): the measurement adaptive mechanism is removed, whereas the implementation of the other parts is the same as that of the proposed method. Split covariance intersection filter based visual localization without updating AprilTag nonlinear partial measurements (SCIF-nonP): AprilTag nonlinear partial measurements are simply discarded without being used for the robot state update, whereas the implementation of the other parts is the same as that of the proposed method. Proposed solution of split covariance intersection filter based visual localization (SCIF-Full): the measurement adaptive mechanism is incorporated, and AprilTag nonlinear partial measurements as well as AprilTag complete measurements are used for the update. The hardware (IMU, encoder, camera) is installed on the warehouse forklift robot, as shown in Fig. <ref>, and has accurate calibration and hardware timestamp synchronization. AprilTags are installed on warehouse walls in such a way that they do not influence and are not influenced by warehouse operations. The forklift robot drives safely according to the navigation destination in the factory. In order to ensure the synchronization of each sensor measurement, the system time is added into the measured data as the time axis. All the experiments are implemented on a laptop with an Intel Core i5-8265U CPU (1.60 GHz, 8 logical cores).
§.§ Performance of the overall system In the experiments testing the overall performance of the proposed strategy, we compare the performance of forklift robot visual localization with the listed methods executed simultaneously in two representative operation scenarios. The two test trajectories are shown in Fig. <ref>. Root mean square error (RMSE), mean error (Mean) and standard error (STD) are used for localization error statistics and analysis, as shown in Table <ref>. It can be seen that the performance of the Split CIF based fusion localization methods (including SCIF-nonMA, SCIF-nonP and SCIF-Full) is better than that of the TagSLAM strategy in terms of accuracy. This verifies the advantage of fusion localization and shows that the proposed method can handle the correlation and outlier problems efficiently for accurate localization. The statistical errors of the SCIF-Full visual localization method are smaller than those of the SCIF-nonMA visual localization method, which demonstrates that the measurement adaptive mechanism can improve the accuracy because it adaptively adjusts the weight of AprilTag measurements for the robot state update. Besides, SCIF-Full outperforms SCIF-nonP, because it takes advantage of the “reasonable information” contained in AprilTag nonlinear partial measurements instead of discarding all of that information directly. Moreover, the results in Table <ref> demonstrate the robustness of SCIF-Full in contrast with the pure AprilTag based visual localization method, i.e., TagSLAM. Furthermore, in order to further verify that the SCIF-Full method solves the correlation problem, we compare the proportional error reduction of the EKF-Full method and the SCIF-Full method relative to the error of the TagSLAM method, as shown in Table <ref>. It can be seen that SCIF-Full can significantly reduce the error and achieves better performance compared with EKF-Full, which ignores temporal correlation among AprilTag measurements. This reflects that SCIF-Full can better handle potential temporal correlation among AprilTag measurements. Meanwhile, the experimental results on several different paths (the forklift robot is also tested on shorter and longer paths) demonstrate that the SCIF-Full visual localization method also has good and stable localization performance when applied to different scenarios. §.§ Case 1: Occurrence of kidnapping As discussed before, the pose estimation results may be kidnapped due to a wrong pose with a large deviation, which results in several consecutive estimated poses being discarded because they are inconsistent with this pose. Therefore, we present the dynamic initialization mechanism, which can not only obtain the initial pose estimate but also solve this problem. To highlight the performance of the dynamic initialization mechanism under kidnapping more clearly, we design two kidnapping situations. When the forklift dynamically initializes the pose, we add 2 meters to one of the translations and subtract 2 meters from the other in the initial position, and we add 1.5 meters to the translations of the measurement at a certain moment during the robot's normal navigation to simulate sudden movements of the robot leading to a wrong pose and a potential kidnapping problem. Fig. <ref> (a) and (b) show the partial localization trajectory using the proposed solution with the dynamic initialization mechanism and the error analysis, respectively (we only show a short path here; the complete path shows similar results).
We observe that the wrong pose makes the localization results unreliable for a short time, but they soon return to normal; see the red circle marks in (a) of the figure and the corresponding error statistics in (b). With the dynamic initialization mechanism, kidnapping does not lead to several consecutive estimated poses being discarded due to inconsistency with the wrong pose, and good performance is maintained in terms of robustness and accuracy.§.§ Case 2: Occurrence of delay Regarding the occurrence of delay in the image transmission and processing process, it results in an accuracy loss of the visual detection module, thereby reducing the fusion localization accuracy. Therefore, we present the derived back propagation process in the recursive pose estimation (see details in Section II(D)), which can effectively deal with the latency and guarantee the localization accuracy. Specifically, the comparative study on two test paths is shown in Table <ref>. It can be seen that the statistical errors of the proposed method are smaller than those of the proposed method without adopting the back propagation technique, which demonstrates that not using back propagation to deal with the delay problem causes a decrease in accuracy. The experimental comparison of handling versus not handling such delay in this way proves that the latency can be compensated. Meanwhile, the above technique for handling the delay has also been implemented in other practical applications. §.§ Discussion In addition, for the proposed solution of split covariance intersection filter based visual localization, i.e., SCIF-Full, the measurement noise model involved in the measurement adaptive mechanism is established according to AprilTag measurement statistics. Specifically, we record the error of each measurement, the view angle and the view distance, and fit the relationship among them to establish the model. When discussing the effect of the number of installed AprilTags on the results, we also explore appropriately reducing the number of AprilTags in straight or turning sections of the test path. The results show that this has little overall impact on the proposed fusion localization method.§ CONCLUSIONS Split covariance intersection filter based visual localization with an accurate AprilTag map has been proposed, aiming at providing a reliable and commercially competitive solution for warehouse applications. As highlights of the proposed solution, first, an accurate AprilTag map is established with the help of a LiDAR-based SLAM system (namely the unique mapping robot). There is no cost concern, because the cost of the mapping robot is shared by a large number of operating robots that can all benefit from the established accurate AprilTag map. Second, once the accurate AprilTag map is available, for each of the large number of operating robots, visual localization is realized as recursive estimation that fuses AprilTag measurements and robot motion data, taking advantage of the split covariance intersection filter, which can handle temporal correlation among AprilTag measurements and can handle AprilTag nonlinear partial measurements in the spirit of local linearization as well. Besides, each operating robot incorporates a measurement adaptive mechanism to handle outliers in AprilTag measurements and adopts a dynamic initialization mechanism to address the kidnapping problem.
A comparative study of various experiments in real warehouse environments demonstrates the potential and advantage of split covariance intersection filter based visual localization with an accurate AprilTag map. As communication devices with highly qualified performance can be commercially deployed more and more nowadays, for future extensions in practice, multiple warehouse robots that operate in the spirit of cooperative visual localization may be studied. | http://arxiv.org/abs/2310.17879v1 | {
"authors": [
"Susu Fang",
"Yanhao Li",
"Hao Li"
],
"categories": [
"cs.RO"
],
"primary_category": "cs.RO",
"published": "20231027034212",
"title": "Split Covariance Intersection Filter Based Visual Localization With Accurate AprilTag Map For Warehouse Robot Navigation"
} |
Rydberg atomtronic devices Luigi Amico ==================================================== Long-tail learning has received significant attention in recent years due to the challenge it poses with extremely imbalanced datasets. In these datasets, only a few classes (known as the head classes) have an adequate number of training samples, while the rest of the classes (known as the tail classes) are infrequent in the training data. Re-sampling is a classical and widely used approach for addressing class imbalance issues. Unfortunately, recent studies claim that re-sampling brings negligible performance improvements in modern long-tail learning tasks. This paper aims to investigate this phenomenon systematically. Our research shows that re-sampling can considerably improve generalization when the training images do not contain semantically irrelevant contexts. In other scenarios, however, it can learn unexpected spurious correlations between irrelevant contexts and target labels. We design experiments on two homogeneous datasets, one containing irrelevant context and the other not, to confirm our findings. To prevent the learning of spurious correlations, we propose a new context shift augmentation module that generates diverse training images for the tail class by maintaining a context bank extracted from the head-class images. Experiments demonstrate that our proposed module can boost the generalization and outperform other approaches, including class-balanced re-sampling, decoupled classifier re-training, and data augmentation methods. The source code is available at <https://www.lamda.nju.edu.cn/code_CSA.ashx>. § INTRODUCTION Deep neural networks have achieved great success by applying well-designed models on large-scale elaborated datasets <cit.>. However, real-world data often exhibit a long-tail class distribution <cit.>. Learning from long-tail datasets poses two main challenges: one is the class-imbalance problem, which causes the model to be biased towards the dominant head classes, and the other is the data scarcity problem, which leads to poor generalization on the rare tail classes <cit.>. One simple and intuitive approach to deal with the class-imbalance problem is re-sampling <cit.>, i.e., creating a replicate of the dataset to estimate the model parameters. Unfortunately, it has been reported that re-sampling methods achieve limited effects when applied to most long-tail datasets <cit.>. Currently, there are still few concrete and comprehensive explanations for this observation. Existing works mainly conclude that re-sampling will lead to the overfitting problem and thus will be harmful to long-tail representation learning <cit.>. Recently, many two-stage approaches have been proposed to improve the tail-class performance by adopting re-sampling in the second training stage. For instance, DRS <cit.> adopts a re-sampling schedule at the last several episodes of the training process. cRT <cit.> first trains a preliminary model using the uniform sampler, then fixes the representations and re-trains the linear classifier using a class-balanced sampler. Last but not least, BBN <cit.> adjusts the whole model to first learn from the conventional learning branch and dynamically move to the re-balancing branch. Overall, the two-stage method has attracted widespread attention due to its basic hypothesis that uniform sampling is beneficial to representation learning, and that class-balanced sampling can be used to fine-tune the linear classifier.
In light of the success of the two-stage method, a natural question is: Can re-sampling benefit long-tail learning in the single-stage framework? (Figure: Performance of re-sampling on two long-tail datasets. The sampling weight for a data point (x, y) is defined as n_y^-γ, where n_y denotes the class frequency of class y.) To answer this question, this paper empirically studies the re-sampling strategies and finds that re-sampling leads to opposite effects on different long-tail datasets. <Ref> gives a brief view of this phenomenon. Moreover, we deduce that if the training samples are highly semantically related to their target labels, class-balanced re-sampling can learn discriminative feature representations; otherwise, uniform sampling is even better than class-balanced re-sampling, as the latter suffers from oversampling redundant unrelated contexts. To verify this, we design a pair of synthetic benchmarks with the same content but different contexts, one containing irrelevant context in the training samples and the other not. Experiments confirm that re-sampling achieves conspicuously different performances on these two benchmarks. In particular, when irrelevant context exists, class-balanced re-sampling learns poorer representations compared to uniform sampling; thus the irrelevant context negatively affects re-sampling methods. We believe that re-sampling can benefit long-tail learning in the single-stage framework. It fails on some long-tail datasets mainly because it overfits the oversampled irrelevant contexts and learns unexpected spurious correlations. If such spurious correlations are avoided, re-sampling can be helpful for long-tail learning. Motivated by this, we propose a new context-shift augmentation module, which transfers well-separated context from head-class data to tail-class data. Specifically, it extracts the unrelated contexts (e.g. the backgrounds or the unrelated foreground objects) from head-class images and pastes them onto tail-class images to generate diverse novel samples. In this way, it encourages the model to learn more discriminative results for the tail classes. We conduct experiments on three long-tail datasets, CIFAR10-LT, CIFAR100-LT, and ImageNet-LT. The results show that the proposed module achieves competitive performance compared to the baseline methods. In summary, our main contributions are: * We conduct empirical analyses on different datasets and discover that re-sampling does not necessarily work or fail in long-tail learning. * We deduce that the failure of re-sampling may be attributed to overfitting on irrelevant contexts, and our empirical studies confirm our hypothesis. * We propose a new context-shift augmentation module to prevent re-sampling from overfitting to irrelevant contexts in a single-stage framework. * Extensive experiments verify the effectiveness of the proposed module against class-balanced re-sampling, decoupled classifier re-training, and data augmentation methods. The rest of the paper is organized as follows. Section 2 studies the effects of re-sampling approaches. Section 3 presents the proposed context-shift augmentation module. Section 4 briefly reviews related works. Section 5 concludes the paper. § A CLOSER LOOK AT RE-SAMPLING §.§ Preliminaries Given a training dataset 𝒟 = {x_i, y_i}_i=1^N, where x_i is a training sample and y_i ∈ 𝒞 = [K] = {1, …, K} is the class label assigned to it.
We assume that the training data follow a long-tail class distribution where the class prior distribution ℙ(y) is highly skewed, so that many underrepresented classes have a very low probability of occurrence. Specifically, we define the imbalance ratio as ρ = max_y ℙ(y) / min_y ℙ(y) to indicate the skewness of the data. Classes with high ℙ(y) are referred to as head classes, while the others are referred to as tail classes. In practice, since the data distribution is unknown, Empirical Risk Minimization (ERM) uses the training data to achieve an empirical estimate of the underlying data distribution. Typically, one minimizes the softmax cross-entropy as follows: ℓ(y, f(x)) = -log [ exp(f_y(x)) / ∑_y'∈[K] exp(f_y'(x)) ], where f_y(x) denotes the predictive logit of model f on class y. However, this ubiquitous approach neglects the issue of class imbalance and makes the model biased toward head classes <cit.>. To deal with the class-imbalance problem, the re-sampling strategy assigns a probability of being selected to each training sample according to its class frequency <cit.>. The probability of sampling a data point from class k can be written as: p_k = n_k^q / ∑_k'∈[K] n_k'^q, where n_k denotes the frequency of class k and q ∈ [0, 1]. When q=1, <Ref> denotes uniform sampling, where each training sample has an equal probability of being selected. When q=0, <Ref> denotes class-balanced re-sampling, which selects samples from every class k with the identical probability 1/K. §.§ Exploring the Effect of Re-sampling §.§.§ Re-sampling can learn discriminative representations To better explore the effect of the re-sampling strategy, we conduct experiments on multiple long-tail datasets, including MNIST-LT, Fashion-LT, CIFAR100-LT <cit.>, and ImageNet-LT <cit.>. We compare three different learning methods: 1) Cross-Entropy (CE) with uniform sampling; 2) Classifier Re-Training (cRT), which uses uniform sampling to learn the representation and class-balanced re-sampling to fine-tune the classifier; 3) Class-Balanced Re-Sampling (CB-RS) for the whole training process. We report the experimental results in <Ref>. According to the results, cRT performs best on CIFAR100-LT and ImageNet-LT, which is consistent with previous works <cit.>. CE and cRT use the same representation, but cRT achieves higher performance, which indicates that re-sampling can help classifier learning. However, on MNIST-LT and Fashion-LT, CB-RS surprisingly achieves the highest performance and outperforms CE and cRT by a large margin. Since cRT and CB-RS both use class-balanced re-sampling for classifier learning, the results indicate that CB-RS learns better representations than uniform sampling on MNIST-LT and Fashion-LT. To further understand the effect of re-sampling, we visualize the learned representation on MNIST-LT in <Ref>. The figures show that with uniform sampling, the representation space is dominated by head classes, and the representations of tail classes are hard to distinguish. By applying class-balanced re-sampling, the representations of both head and tail classes are discriminative.§.§.§ Re-sampling is sensitive to irrelevant contexts We have demonstrated the generalization ability of the re-sampling strategy on MNIST-LT and Fashion-LT. Nevertheless, re-sampling performs unsatisfactorily on the other two datasets.
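As a concrete illustration of the sampling probability defined above, the per-sample weights that realize this scheme (e.g., for PyTorch's WeightedRandomSampler) can be built as sketched below; this is an illustrative sketch rather than the authors' released code.

```python
import numpy as np
from collections import Counter
from torch.utils.data import WeightedRandomSampler

def make_sampler(labels, q=0.0):
    """q = 1.0 reproduces uniform (instance-balanced) sampling; q = 0.0 gives class-balanced re-sampling."""
    labels = np.asarray(labels)
    counts = Counter(labels.tolist())
    class_prob = {k: counts[k] ** q for k in counts}        # unnormalized p_k proportional to n_k^q
    norm = float(sum(class_prob.values()))
    # each sample inherits its class probability, split evenly among that class's samples
    weights = np.array([class_prob[y] / (norm * counts[y]) for y in labels])
    return WeightedRandomSampler(weights.tolist(), num_samples=len(labels), replacement=True)
```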
Since the training samples and target labels on MNIST and Fashion are highly semantically correlated <cit.>, while samples on CIFAR and ImageNet contain complex contexts <cit.>, we hypothesize that re-sampling is sensitive to the contexts in training samples. To support our hypothesis, we visualize the Grad-CAM of the representation learned by different sampling strategies in <Ref>. When training with a uniform sampler, models can distinguish the contexts in samples of head classes. However, when adopting a class-balanced re-sampler, the model tends to overfit the irrelevant context from the over-sampled tail data, which unexpectedly affects the representation of head classes. For example, when classifying different kinds of animals, re-sampling might focus on the posture rather than the appearance. Also, when classifying different vehicles, re-sampling is easily influenced by the human in tail-class images, and thus mistakenly focuses on the human in head-class images. To further validate our point, we design a homogeneous benchmark of MNIST-LT termed Colored-MNIST-LT (CMNIST-LT). We inject colors into MNIST-LT to artificially construct irrelevant contexts. Specifically, we design CMNIST-LT based on two considerations. First, head classes are prone to have rich contexts, so we inject different colors into the samples of each head class. Second, tail classes have limited contexts, so we inject an identical color into the samples of each tail class. We conduct uniform sampling, classifier re-training, and class-balanced re-sampling on MNIST-LT and CMNIST-LT. The experimental results are illustrated in <Ref>. The results show that when applied to MNIST-LT, CB-RS can boost the tail-class performance without degradation on head classes. However, for CMNIST-LT, CB-RS performs worse than uniform sampling and cRT on both head and tail classes, thus validating the negative impact of irrelevant contexts on re-sampling methods. Since re-sampling succeeds on MNIST-LT and fails on CMNIST-LT, we conclude that re-sampling does not always fail: it can help long-tail learning if the irrelevant contexts are avoided.§.§.§ Proposed benchmark datasets We follow previous works <cit.> to construct MNIST-LT, Fashion-LT and CIFAR100-LT and set the imbalance ratio to 100. ImageNet-LT is proposed by <cit.>. For MNIST-LT and Fashion-LT, we use LeNet <cit.> as the backbone network and add a linear embedding layer before the fully connected layer to project the representation into a 2-dimensional space for better presentation. We use standard SGD with a mini-batch size of 128, an initial learning rate of 0.1 and a cosine annealing schedule to train the model for 8 epochs. When applying cRT, we retrain the last fully connected layer for 4 epochs while fixing the other layers. For CIFAR100-LT and ImageNet-LT, more details are given in <Ref>. To construct CMNIST-LT, we follow the idea of CMNIST <cit.> to first randomly flip the labels and then inject colors into the training samples. However, different from CMNIST, which converts MNIST into a binary classification dataset, we keep the ten classes to better simulate a long-tail class distribution. To generate flipped labels on the long-tail dataset MNIST-LT, we follow the method in <cit.> and set the flipping probability to 1/4. We generate ten different colors using the seaborn[<http://seaborn.pydata.org/generated/seaborn.color_palette.html>] package. For the five head classes, we randomly inject one of these ten colors into each sample with equal probability.
For the other five tail classes, we inject the samples of each class with a single color.§ A SIMPLE APPROACH TO MAKE RE-SAMPLING ROBUST TO CONTEXT-SHIFT §.§ Extracting Rich Contexts from Head-class Data By studying the effects of re-sampling methods in different scenarios, we can draw a conclusion: when the training samples contain irrelevant contexts, simply over-sampling the tail-class samples might cause the model to unexpectedly focus on these redundant contexts, thereby resulting in the overfitting problem. However, the head classes have rich data from which a model with good generalization ability can be learned. We naturally raise a question: can we utilize the rich contexts from head data to augment the over-sampled tail data to alleviate the negative impact of irrelevant contexts? Inspired by this motivation, we design a context-shift augmentation module that extracts the rich contexts from head-class data to enrich the over-sampled tail-class data. To leverage the rich contexts in head-class data, we utilize a model f^u trained with uniform sampling for context extraction. First, we select well-learned samples with fitting probability larger than a threshold δ to improve the extraction quality. The fitting probability for sample x_i can be calculated by the Softmax function as p(y | x_i, f^u) = exp(z^u_i,y) / ∑_y'∈[K] exp(z^u_i,y'), where z^u_i denotes the logits predicted by f^u, i.e., z^u_i = [z^u_i,1, …, z^u_i,K] = f^u(x_i). Then, we use off-the-shelf methods such as Grad-CAM <cit.> to extract the image contexts, which is also done in previous works on open-set learning and adversarial learning <cit.>. Specifically, given an image x_i, we calculate its class activation map A(x_i | f^u). Then, we invert the map to get the background mask M_i, i.e., M_i = 1 - A(x_i | f^u). Here M_i is a matrix of the same size as x_i, with values between 0 and 1. A higher value indicates that the corresponding pixel is more likely to belong to the background. Different from previous works that discretize the mask matrix to binary values <cit.>, we keep the floating values in the matrix to preserve more information. After calculating the background mask M_i of image x_i, we apply the mask to the original image by M_i ⊙ x_i to obtain a background image. In this way, we separate the semantically related contents from the images and keep the remaining contexts for further augmentation. Finally, the extracted contexts are pushed into a memory bank Q for the augmentation of re-sampled data. For the training of the uniform module, we apply the conventional ERM algorithm. For each training sample x_i, we calculate its loss by z^u_i = f^u(x_i) and L^u_i = ℓ^u(z^u_i, y_i), where ℓ^u can be any loss function; generally, we use the standard cross-entropy loss. §.§ Balanced Re-sampling with Context-shift Augmentation Simply adopting balanced sampling might generate many repeated samples from the tail classes and lead to the overfitting problem. Therefore, we query background images from the context memory bank Q and paste them onto the re-sampled images. In this way, we generate more diverse novel samples by simulating each tail-class image within various contexts. Specifically, for a re-sampled training image x_i, we query another image x̂_i together with its mask M̂_i from Q, fuse it with x_i to generate a novel sample, and calculate its training loss as follows: λ ∼ Uniform(0, 1), x̃_i = λ M̂_i ⊙ x̂_i + (1 - λ M̂_i) ⊙ x_i, z^b_i = f^b(x̃_i), L^b_i = ℓ^b(z^b_i, y_i). Here λ is randomly generated in [0, 1] to increase the diversity.
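A minimal PyTorch-style sketch of the extraction and pasting steps above is given below. The Grad-CAM call is abstracted as a user-supplied compute_cam function, and the threshold value and tensor layouts are assumptions; the label of the tail-class image is kept unchanged, as discussed next.

```python
import torch

def extract_context(x, logits, y, compute_cam, delta=0.9):
    """Return (image, background_mask) for a well-fitted sample, else None.

    x: image tensor (C, H, W); logits: f^u(x) of shape (K,);
    compute_cam: any Grad-CAM routine returning an (H, W) map in [0, 1].
    """
    if torch.softmax(logits, dim=-1)[y] < delta:   # keep only well-learned samples
        return None
    cam = compute_cam(x, y)                        # class activation map A(x | f^u)
    return x, 1.0 - cam                            # float-valued background mask M

def context_shift(x_tail, x_head, mask_head):
    """Paste a head-class background onto a re-sampled tail-class image (Section 3.2)."""
    lam = torch.rand(1).item()                     # lambda ~ Uniform(0, 1)
    m = mask_head.unsqueeze(0)                     # broadcast the (H, W) mask over channels
    return lam * m * x_head + (1.0 - lam * m) * x_tail
```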
Different from previous mixup-based methods<cit.>, our method does not change the target label, because the pasted background is not related to the semantics of any class labels. To reduce the computational complexity, the uniform module and the balanced re-sampling module can be trained simultaneously by sharing the same feature extractor ϕ, and training their own linear classifiers ψ^u and ψ^b, i.e., f^u(·)=ψ^u(ϕ(·)) and f^b(·)=ψ^b(ϕ(·)). Since the classifier is lightweight, it does not add much additional computational overhead.Moreover, the memory bank Q is designed as a first-in-first-out queue with a maximum volume of V for a convenient query. After extracting the context from the uniform module, we append the _i and _i pair into the context bank Q. When the size of Q reaches its maximized volume, the oldest contexts are pushed out. In practice, the volume size is set equal to the mini-batch size in order to minimize the overhead as well as ensure the querying requirements. Finally, the summarized loss function isŁ=Ł^u+Ł^b=1/N∑_i=1^NŁ^u_i+1/N∑_i=1^NŁ^b_i<Ref> gives a brief overview of the proposed module. The detailed training procedure is given in the supplementary material due to the page limit.In the inference phase, only the balanced re-sampling module is used. In other words, the uniform module only serves as an assistant to provide more rich contexts for the re-sampling module during the training phase. Formally, for a test data point , we obtain the prediction by =f^b(), and then employ the Softmax function to obtain the predictive probabilities.§.§ Empirical ResultsWe demonstrate the efficacy of the proposed module context shift augmentation by comparing it with different kinds of long-tail learning methods, including:* Re-sampling or re-weighting methods, such as Focal Loss <cit.>, CB-Focal <cit.>, CE-DRS <cit.>, CE-DRW <cit.>, LDAM-DRW <cit.>, cRT <cit.>, LWS <cit.>, and BBN <cit.>,* Head-to-tail knowledge transfer methods, such as M2m <cit.>, OLTR <cit.>, and FSA <cit.>,* Data augmentation methods, such as Mixup <cit.>, Remix <cit.>, CAM-BS <cit.>, and CMO <cit.>. We conduct experiments on three long-tail datasets, including CIFAR10-LT <cit.>, CIFAR100-LT <cit.>, and ImageNet-LT <cit.>. CIFAR10-LT and CIFAR100-LT are the long-tail versions of CIFAR datasets by sampling from the raw dataset with an imbalance ratio ρ. Following previous works <cit.>, we conduct experiments with ρ∈{100,50,10}. ImageNet-LT is a long-tail version of the ImageNet <cit.>, which contains 1000 classes, each with a number of samples ranging from 5 to 1280. For each long-tail training dataset, we evaluate the model on another corresponding class-balanced test set by calculating the overall prediction accuracy.The results for CIFAR10-LT and CIFAR100-LT are summarized in <Ref>. We report the results under imbalance ratio ρ=100,50,10. As shown in the table, our method achieves superior performance compared with the baseline methods in all settings. Specifically, the methods based on re-sampling or re-weighting such as DRS, DRW, cRT, and LWS can ease the class-imbalanced problem to some degree but the performance gain is limited due to the neglect of the representation learning. The methods based on knowledge transfer and data augmentations, such as M2m and CMO, achieve higher performance.Moreover, by combining cRT and LWS with mixup, the performance achieves an obvious improvement. However, such two-stage training methods are not end-to-end approaches. 
In contrast, the proposed context shift augmentation module enhances representation learning by enriching the contexts of samples and adopting the class-balanced re-sampling to ensure a balanced classifier. In this manner, it achieves more improvement with an end-to-end framework. Since cRT and the proposed module both use class-balanced sampling for classifier learning, the results indicate that the proposed context shift augmentation can achieve better representations.We further conduct the experiments on a larger scale dataset ImageNet-LT. We calculate the accuracy of the overall test set and the average accuracy of the many-shot classes (more than 100 images in the training set), the medium-shot (20∼100 images), and the few-shot classes (less than 20 images). We report the results in <Ref>. It shows that the proposed module is superior to most other baseline methods. The performance is similar to another data augmentation method CMO, but our method achieves higher accuracy on few-shot classes, which demonstrates the generalization ability of context-shift augmentation for tail-class data. We provide more detailed studies to analyze the effect of each component in the proposed module. Due to the page limit, we report the results in the supplementary material.§ RELATED WORK §.§ Re-sampling and Re-weighting Re-sampling is a widely used strategy in class-imbalanced learning <cit.>. There are two main ideas of re-sampling: 1) Over-sampling by repeating data from the rare classes. 2) Under-sampling by abandoning a proportion of data from the frequent classes. However, when the class distribution is highly skewed, re-sampling methods often fail. Previous works point out that under-sampling may discard precious information which inevitably degrades the model performance, and over-sampling tends to cause the overfitting problem on the tail classes <cit.>.Recent work <cit.> develops class-balanced re-sampling where samples from each class have an identical probability of being sampled. Class-balance re-sampling can bring performance gain for classifier learning but hurts representation learning. Therefore, two-stage approaches <cit.> adopt it at the late stage of the whole training process in order not to impact the representation.Re-weighting aims to generate more balanced predictions by adjusting the losses for different classes <cit.>. The most intuitive way is to weight each training sample by the inverse of its class frequency <cit.>. Similar to class-balanced re-sampling, re-weighting can achieve better results for tail classes but usually deteriorates its performance for head classes <cit.>.In contrast, our empirical study reveals that class-balanced re-sampling can be an effective method as long as there exist no irrelevant contexts. It fails in some cases mainly due to the unexpected overfitting towards the over-sampled redundant contexts. When applied with context-shift augmentation, class-balanced re-sampling can achieve competitive performance on long-tail datasets. §.§ Head-to-tail Knowledge Transfer As the head classes have adequate training data while the tail classes have limited data, recent works aim to leverage the knowledge gained from head classes to enhance the generalization of tail classes. Feature transfer learning <cit.> utilizes the intra-class variance from head classes to guide the feature augmentation for tail classes. 
MetaModelNet <cit.> uses the head data to train a meta-network to predict many-shot model parameters from few-shot model parameters, then transfers the meta-knowledge to the tail classes. Major-to-minor translation (M2m) <cit.> uses the over-sampling method and translates the head-class samples to replace the duplicated tail-class samples via adversarial perturbations. OLTR <cit.> maintains a dynamic meta-embedding between head and tail classes to transfer the semantic deep features from head to tail classes.These methods assume that the head classes and the tail classes share some common knowledge such as the same intra-class variances, the same model parameters, or the same semantic features. In this work, we regard the contexts as such knowledge and transfer the contexts from head-class data to enrich the tail-class data. §.§ Data Augmentation Several data augmentation approaches have been proposed to improve the model generalization ability. In contrastive learning <cit.>, curriculum learning <cit.>, meta-learning methods <cit.> and instance segmentation tasks <cit.>, data augmentation strategies have been shown to effectively improve the generalization of tail classes. MiSLAS <cit.> studies the mixup <cit.> technology in long-tail learning and finds that mixup can have a positive effect on representation learning but a negative or negligible effect on classifier learning. Remix <cit.> adapts the mixup method to a re-balanced version. It assigns the mixed label in favor of the tail class by designing a disproportionately higher weight for the tail class. CMO <cit.> applies CutMix <cit.> by cutting out random regions of a sample from head classes and filling the removed regions with another sample from tail classes. By this means, it enriches the contexts of the tail data; but the random cutout operation does not necessarily separate the contents and contexts.The attention or CAM-based methods have been proposed to improve long-tail learning via feature decomposition and augmentation. CAM-BS <cit.> separates the foreground and background of each sample, then augments the foreground part by flipping, translating, rotating, or scaling. Feature Space Augmentation (FSA) <cit.> uses CAM to decompose the features of each class into a class-generic component and a class-specific component, and generates novel samples in the feature space by combining the class-specific components from the tail classes and the class-generic components from the head classes. Attentive Feature Augmentation (AFA) <cit.> adopts feature decomposition and augmentation via the attention mechanism. Note that FSA and AFA can also be seen as head-to-tail knowledge transfer approaches. Nevertheless, these methods neglect that the learned model has limited generalization ability on tail classes, and most foregrounds (or class-specific components) of samples from tail classes are incredible. In comparison, our method applies CAM to separate related contents and unrelated contexts for samples mainly from head classes. It then pastes the contexts extracted from the head-class data onto the over-sampled tail-class data to enrich the contexts.§ CONCLUSION In this work, we study the re-sampling strategy for long-tail learning. Our empirical investigations reveal that the impact of re-sampling is highly dependent on the existence of irrelevant contexts and is not always harmful to long-tail learning. 
To reduce the influence of irrelevant contexts, we propose a new context-shift augmentation module that leverages the well-separated contexts from the head-class images to augment the over-sampled tail-class images. We demonstrate the superiority of the proposed module by conducting experiments on several long-tail datasets and comparing it against class-balanced re-sampling, decoupled classifier re-training, and data augmentation methods.§ BROADER IMPACT AND LIMITATIONS This paper investigates the reasons behind the success/failure of re-sampling approaches in long-tail learning. In critical and high-stakes applications, such as medical image diagnosis and autonomous driving, the presence of imbalanced data poses the risk of producing biased predictions. By shedding light on this problem, we aim to inspire more research on safe and robust re-sampling approaches.One may be concerned about combining the proposed module with other methods such as self-supervised learning <cit.>, logit adjustment <cit.>. We conduct additional experiments and report the results in the supplementary material due to the page limit. Nevertheless, the proposed module can not achieve comparable performance with the well-designed models <cit.>, since our intention is not to achieve performance that is on par with state-of-the-art methods. Instead, we hope that our findings will inspire future research regarding re-sampling methods. § DATA AVAILABILITY STATEMENTThe source code of our method is available at <https://www.lamda.nju.edu.cn/code_CSA.ashx> or <https://github.com/shijxcs/CSA>. This research was supported by the National Key R&D Program of China (2022ZD0114803), the National Science Foundation of China (62176118, 61921006).unsrt § TRAINING PROCEDURE The training procedure of context-shift augmentation is summarized in <Ref>.§ IMPLEMENTATION DETAILS FOR CONTEXT-SHIFT AUGMENTATION For experiments on CIFAR10-LT and CIFAR100-LT, we use ResNet-32 as the backbone network and train it using standard SGD with a momentum of 0.9, a weight decay of 2× 10^-4, a batch size of 128. The model is trained for 200 epochs. The initial learning rate is set to 0.2 and is annealed by a factor of 10 at 160 and 180 epochs. We train each model with 1 NVIDIA GeForce RTX 3090.For experiments on ImageNet-LT, we implement the proposed method on ResNet-10 and ResNet-50. We use standard SGD with a momentum of 0.9, a weight decay of 5× 10^-4, and a batch size of 256 to train the whole model for a total of 90 epochs. We use the cosine learning rate decay with an initial learning rate of 0.2. We train each model with 2 NVIDIA Tesla V100 GPUs.In all experiments, we first warm up the uniform module for 10 epochs and then train the uniform module and the balanced re-sampling module simultaneously for the rest epochs. For the uniform module, we follow the simple data augmentation used in <cit.> with only random crop and horizontal flips. For the re-sampling module, we use the proposed context-shift augmentation method. We apply the trick proposed by <cit.> to disable the augmentation in the balanced re-sampling module at the last 3 epochs to obtain further improvements, which is also applied in other baseline methods <cit.>. We set the threshold δ to 0.8.§ ADDITIONAL ILLUSTRATIONS For convenience in understanding context and content, we give an example in <Ref>. Context refers to the semantically unrelated parts in the images, and content refers to the semantically related parts. 
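Before turning to the CMNIST-LT illustration, the construction of that dataset (label flipping with probability 1/4, ten seaborn colors, a random color for each head-class sample, and one fixed color per tail class) can be sketched as follows. How a flipped label is drawn and how a color is physically injected into a grayscale digit are not fully specified in the text, so both are assumptions of this sketch.

import random
import numpy as np
import seaborn as sns

PALETTE = sns.color_palette(n_colors=10)              # ten colors, as in the paper
HEAD_CLASSES = set(range(5))                           # five head classes
TAIL_COLOR = {c: PALETTE[c] for c in range(5, 10)}     # one fixed color per tail class

def colorize(gray_img, rgb):
    # tint a grayscale digit (H, W) in [0, 1] with an RGB color -> (H, W, 3); assumed injection
    return np.stack([gray_img * channel for channel in rgb], axis=-1)

def make_cmnist_lt_sample(gray_img, label, flip_prob=0.25):
    if random.random() < flip_prob:                    # label flipping, following the CMNIST idea
        label = random.randrange(10)                   # assumption: flip to a uniformly random class
    rgb = random.choice(PALETTE) if label in HEAD_CLASSES else TAIL_COLOR[label]
    return colorize(gray_img, rgb), label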
Moreover, we give a brief illustration of the generated dataset CMNIST-LT in <Ref>.

§ ADDITIONAL EXPERIMENTAL RESULTS

§.§ Effects of the context bank

The context bank Q is a novel component of context-shift augmentation which receives diverse contexts from the uniform module and provides them to augment the data in the re-sampling module. To verify the effectiveness of the context bank, we remove it from the framework and train the model on CIFAR100-LT with an imbalance ratio of 100. The results are reported in <Ref>. The results show that without the context bank Q, the performance decreases by a large margin. The performance degradation mostly comes from the medium-shot classes and the few-shot classes, which indicates that the context bank can significantly improve the generalization of tail classes.

Moreover, we study the effect of different variants in the context bank. First, we study the threshold δ for sample selection and report the results in <Ref>. On the one hand, if δ is too high, very few samples will be selected. On the other hand, if δ is too small, the selected samples might not be well learned. Nevertheless, as the training process progresses, most samples fit well, so our method is not sensitive to δ. We set δ=0.8 as it gives the best performance.

Second, we study the influence of the volume size V of the context bank Q and report the results in <Ref>. Since the bank Q is a first-in-first-out queue, the latest incoming contexts are the most reliable. When the volume size is too large, the bank may retain more outdated contexts. Besides, a larger V brings more memory overhead. We therefore set the volume size V equal to the mini-batch size B in our method.

§.§ Influence of augmentation variants

We use a variant λ∼Uniform(0, 1) for generating novel samples. The value of λ determines the proportions of foreground and background in the novel sample. Also, the size of the sampling space affects the diversity of the novel images. To explore the effect of λ, we try λ∼Uniform(a,b) and λ∼Beta(a,b) to train context-shift augmentation on CIFAR100-LT with imbalance ratio 100 and report the results in <Ref>.

First, when λ is close to 0, the background barely takes effect, and the performance decreases a lot. Second, when λ=1, the background image might cover the important content in the foreground image. Also, the diversity of new samples is limited. Although the performance is better than that of λ=0, it is still unsatisfactory. Overall, choosing λ∼Uniform(0, 1) or λ∼Beta(1, 1) leads to the best performance.

§.§ Comparison between different modules

In our framework, the uniform sampling module is enabled only in the training phase. In the inference phase, we use the balanced re-sampling module to predict unseen instances. To verify the superiority of the re-sampling module, we compare the performance of these two modules as well as their ensemble. We report the results in <Ref>. The results show that the re-sampling module is superior to the uniform module, and even achieves higher accuracy than the ensembled results. This also indicates the superiority of the proposed context-shift re-sampling method.

Moreover, we study the influence of different balance ratios on our re-sampling module and compare it with the vanilla re-sampling method, as sketched below. We report the results in <Ref>. For vanilla re-sampling, adopting a more balanced re-sampling yields more severe performance degradation. In contrast, our method achieves higher performance through class-balanced re-sampling.
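To make the balance-ratio study above concrete, the sampler we vary can be sketched as follows. The power-law interpolation between instance-balanced and class-balanced sampling is one common parameterization and is our illustration here; the exact parameterization used to produce the table is not restated in this sketch.

import numpy as np
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_sampler(labels, q=0.0):
    # q = 1 recovers instance-balanced (uniform) sampling, q = 0 is fully class-balanced
    labels = np.asarray(labels)
    counts = np.bincount(labels)
    class_prob = counts.astype(float) ** q                  # p_j proportional to n_j^q
    class_prob = class_prob / class_prob.sum()
    sample_weights = class_prob[labels] / counts[labels]    # spread p_j evenly over its n_j samples
    return WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)

# e.g., loader = DataLoader(train_set, batch_size=128, sampler=make_sampler(train_labels, q=0.0))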
§.§ Comparison of the learned representation

We visualize the representation of CIFAR100-LT in <Ref>. Each color represents a class, with darker colors representing head classes and lighter colors representing tail classes. Although the fine-grained colors make it difficult to distinguish some classes, it can still be seen that CB-RS learns a worse representation than vanilla CE. Moreover, our proposed method learns a more distinguishable representation. In addition, we visualize the Grad-CAM for examples with our proposed context-shift augmentation. We choose the same examples as in <Ref> of the main paper and report the results in <Ref>. The results show that our method can alleviate the negative impact on head-class samples caused by the overfitting problem.

§.§ The influence of Grad-CAM

Grad-CAM <cit.> has been utilized to extract unrelated contexts in previous works such as open-set learning and adversarial learning <cit.>. Likewise, we use Grad-CAM in the context-shift augmentation to generate diverse contexts for tail-class data. We compare Grad-CAM with CAM <cit.>. The results shown in <ref> demonstrate that Grad-CAM is superior to CAM when applied to our method. One may be concerned about the quality of the activation maps generated for tail-class images. We visualize some tail-class samples with predicted probabilities higher than δ in <Ref>, which shows that the model can still capture accurate activation maps for tail-class samples.

§.§ Combination with self-supervised learning

It is interesting to combine self-supervised methods with context-shift augmentation. Inspired by this, we follow the self-supervised + fine-tune method, i.e., SimSiam+rwSAM in <cit.>, and conduct extensive experiments on the CIFAR10-LT dataset. The results are shown in <ref>. Note that in <cit.>, the models are pre-trained with the long-tailed dataset but fine-tuned with a balanced in-domain dataset. However, it is hard to obtain a balanced in-domain version of a long-tailed dataset in real-world scenarios. Therefore, we fine-tune on the long-tailed dataset with a balancing method, namely class-balanced re-sampling (CB-RS) or re-sampling with context-shift augmentation. The results show that our method is superior to CB-RS.

§.§ Combination with the logit adjustment

Since our work aims to study the effectiveness of re-sampling in long-tail learning, we use Class-Balanced Re-Sampling (CB-RS) in our method. We also consider combining our method with other re-balancing methods, such as Logit Adjustment (LA) <cit.>. Specifically, we change the class-balanced re-sampling to uniform sampling while adopting context-shift augmentation. Moreover, we consider combining class-balanced re-sampling and LA simultaneously. The comparison results are shown in <ref>. The results show that our method can be combined with logit adjustment to yield higher performance. However, by applying the balanced loss and the balanced sampling at the same time, the model focuses too heavily on tail classes, which degrades overall accuracy.

§.§ Combination with supervised contrastive learning

Our proposed context-shift augmentation (abbreviated as CSA) can be integrated with the supervised contrastive learning method BCL <cit.> to further improve generalization. In <Ref>, we report the experimental results. We conduct experiments on CIFAR100-LT with varying imbalance ratios, showing that CSA consistently boosts the performance of BCL.
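For reference, the logit-adjustment variant discussed above can be written as a small change to the training loss. The additive form with class priors and a temperature τ shown below is the standard formulation; the exact τ used in our comparison table is not restated here, so treat the snippet as an assumed illustration rather than the precise configuration.

import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
    # shift each logit by tau * log(prior of that class); this removes the head-class bias
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)   # broadcast over the batch dimension
    return F.cross_entropy(adjusted, targets)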
§.§ Computational cost analysis of context-shift augmentation

The proposed context-shift augmentation is based on the widely-used dual-branch network. Moreover, a context-extracting module is designed to calculate Grad-CAM as well as obtain the contexts. The context-extracting module has a very small computational cost, as it only needs to apply the gradient backward once at the last layer of the network, and can be ignored compared to the global gradient backward for updating the whole model. Besides, it does not require much memory to store the extracted contexts, since the size of the context bank is set equal to the mini-batch size. In <Ref>, we report the training time cost per epoch of CIFAR100-LT using a single RTX3090, which also demonstrates that the proposed method does not lead to much additional computational overhead.

 | http://arxiv.org/abs/2310.18236v1 | {
"authors": [
"Jiang-Xin Shi",
"Tong Wei",
"Yuke Xiang",
"Yu-Feng Li"
],
"categories": [
"cs.CV",
"cs.LG"
],
"primary_category": "cs.CV",
"published": "20231027162034",
"title": "How Re-sampling Helps for Long-Tail Learning?"
} |
A Comprehensive and Reliable Feature Attribution Method: Double-sided Remove and Reconstruct (DoRaR)

Dong Qin, George Amariucai*, Daji Qiao, Yong Guan, Shen Fu
Iowa State University, Ames, IA, USA, {dqin, daji, guan, shenfu}@iastate.edu
*Kansas State University, Manhattan, KS, USA, [email protected]

January 14, 2024
=========================================================================================

The limited transparency of the inner decision-making mechanism in deep neural networks (DNNs) and other machine learning (ML) models has hindered their application in several domains. In order to tackle this issue, feature attribution methods have been developed to identify the crucial features that heavily influence decisions made by these black box models. However, many feature attribution methods have inherent downsides. For example, one category of feature attribution methods suffers from the artifacts problem: it feeds out-of-distribution masked inputs directly through the classifier that was originally trained on natural data points. Another category of feature attribution methods finds explanations by using jointly trained feature selectors and predictors. While avoiding the artifacts problem, this new category suffers from the Encoding Prediction in the Explanation (EPITE) problem, in which the predictor's decisions rely not on the features, but on the masks that select those features. As a result, the credibility of attribution results is undermined by these downsides. In this research, we introduce the Double-sided Remove and Reconstruct (DoRaR) feature attribution method, which builds on several improvement techniques to address these issues. By conducting thorough testing on MNIST, CIFAR10 and our own synthetic dataset, we demonstrate that the DoRaR feature attribution method can effectively bypass the above issues and can aid in training a feature selector that outperforms other state-of-the-art feature attribution methods. Our code is available at https://github.com/dxq21/DoRaR.

§ INTRODUCTION

§.§ Background and Motivation

Machine learning models, powered by complex architectures such as deep neural networks (DNNs), are spreading rapidly into many crucial aspects of society. Their wide application in the real world has made interpretable machine learning consequential for trusting model decisions<cit.> and expanding knowledge<cit.>. Interpretability in machine learning is well studied in order to make machine learning models better serve human beings. On the one hand, efforts have been made to illuminate the inner mechanism of some deep learning models, making them more transparent, e.g., globally self-interpretable models or global explanations<cit.>. On the other hand, many other methods have been presented to provide an understanding of which features locally, for a given instance of data, are more important than others in the final decision making. This type of explanation is categorized as a feature attribution method. For example, a feature attribution method produces masks to explain images, where important pixels are highlighted based on their contribution to the target label prediction. The first approach helps people trust a model, and the second approach lets users trust the prediction result.
Our research focuses on feature attribution methods that make predictions trustworthy for users.

Providing a trustworthy DNN model explanation efficiently is challenging due to limitations in many aspects. For example, perturbation-based methods <cit.> and locally linear methods <cit.> are computationally inefficient, which makes them impractical in industry. Other approaches, such as gradient-based methods, are inaccurate and vulnerable to adversarial manipulation<cit.>.

In recent years, some studies, such as <cit.>, generate a mask to select features and then feed the masked input to the classifier that was trained on the complete feature dataset. While this approach is efficient, it faces the issue of unwanted artifacts <cit.>, because the masked inputs typically fall outside the natural distribution of the training dataset. Further details regarding the artifacts problem are discussed in Section <ref>.

In order to mitigate the artifacts problem, several techniques have been proposed, e.g., introducing an α-norm penalty <cit.>, applying Gaussian blur <cit.>, or utilizing low-resolution intermediate activation features of the CNN followed by up-sampling <cit.>. These approaches mitigate the problem but do not fundamentally solve it, and they may bring in other side effects, e.g., introducing extra evidence or reducing prediction accuracy.

Inspired by <cit.>, some methods <cit.> retrain a new classifier with masked inputs, and then evaluate explanations by their performance in the new classifier. However, training a model based on masked inputs could cause a new problem which we call Encoding Prediction in the Explanation (EPITE) – details of the EPITE problem are described in Section <ref>.

In this paper, we propose a reliable and comprehensive feature attribution method, which we call Double-sided Remove and Reconstruct (DoRaR), to interpret a neural network classifier. In this model, we try to explain how a pre-trained classifier works by finding the most contributing explanation units in the input sample. The number and size of such units (e.g., 4 chunks of 4×4 pixels) are predefined. A feature selector is trained to find these units. The selected features are used to train a generative model to reconstruct a sample. Then, instead of the masked sample, our reconstructed sample is used to predict the target label in the pre-trained classifier. The non-selected part is treated similarly, i.e., it is used to train another generative model to reconstruct a sample, which is then evaluated by the pre-trained classifier. Therefore, both selected features and non-selected features are evaluated, and their prediction losses are used to train the feature selector.

§.§ Contribution

The contributions of this research are as follows: 1) We present a feature attribution method (DoRaR) which can deal with both the artifacts and the EPITE problems. 2) We present a new definition of a feature selector which defines both the size and the number of required explanation units.
It limits the feature interaction within a clearly defined scenario.3) Our feature attribution method is compared with other state-of-the-art feature attribution methods including LIME, Grad, SmoothGrad, VIBI, Real-X and Guided Feature Inversion, <cit.> on MNIST, CIFAR-10 and a user-mouse interaction behavioural based datasets using an appropriate testing scheme.§ PROBLEM STATEMENT In this section, we define the goal of interpreting a DNN-based classifier and describe possible problems occurring in some feature attribution methods. In a multi-class classification problem, the classifier can be defined as y_i = P(X_i) as shown in Fig.<ref>. We aim to find the contributing factors, which we called explanation, in X_i that lead the black-box classifier P to make the prediction y_i. More specifically, given an input X_i, we want to find a discrete mask M_i^*, where M_i^*(j)∈{0,1} represents the corresponding value at dimension j, that selects the features of X_i with the highest relevance for explaining y_i = P(X_i). To achieve this goal, some algorithms have been presented in previous research <cit.>. As shown in Fig. <ref>, a typical category is the non-retraining based algorithm which evaluates the contribution of selected key features in the pre-trained black-box classifier. This category of feature attribution method always suffer from the 1) Artifacts problem, because the masked input sample is out of natural data distribution which the classifier is trained with.In order to address the artifacts problem, re-training a new classifier to evaluate the selected key features is introduced in <cit.>. Inspired by the re-training strategy, multiple feature attribution methods has been presented to find the key features <cit.>. However, this category of feature attribution method may suffer from the 2) EPITE problem, because the class information might be leaked through the mask <cit.>. The final prediction can be made by the re-trained classifier which learns the class label only by the mask shape rather than the feature values selected by the mask.These problems jeopardizes their model explanation's trustworthiness. Current feature attribution methods either don't address the artifacts problem at all <cit.> or address it but introduce the EPITE problem <cit.>. None of them essentially solves all problems at the same time.In this research, we introduce a new feature attribution method for training a feature selector, both the artifacts problem and the EPITE problem can be avoided in our feature attribution method.§.§ The Artifacts Problem Neural networks are known to be affected by surprising artifacts <cit.>. Fig. <ref> shows an example of the artifacts problem, where some obviously irrelevant pixels are selected as key features. Fig. <ref> illustrates how the artifacts problem happens, a classifier represented here as the green and orange lines is trained on a dataset, shown as red and blue data points, with two features x and y. The green part of the classifier boundary is mainly shaped by the nature of the data distribution. However, the orange part of the classifier boundary is somewhat arbitrary, and may depend on randomly initialized neural network parameters, optimizer settings and so on. When evaluating an explanation obtained by replacing part of the features by ad-hoc values, such as zeros – such an explanation, which drops the y feature and replaces it with zeros as shown in Fig. 
<ref> – the data points follow a distribution different than the original natural distribution, and they may be misclassified by the classifier. The severe decrease in classification accuracy will in this case lead to the wrong conclusion that the y-axis feature is the key factor for maintaining correct classification, so it should be chosen as the explanation. In this case, setting the y-axis feature to zero creates the unwanted artifact in the explanation that we are trying to avoid. More generally, for higher dimensional cases such as images and for different types of background, such as mean value, Gaussian blurred noise and so on, it is possible to find certain masks M_i (artifacts) for input X_i which make the image follow a different distribution than the natural one, and generate unexpected output <cit.>. Therefore, these artifacts selected by M_i could become part of our explanation, but such explanations may not be meaningful and should be avoided.§.§ The EPITE ProblemThe EPITE problem commonly occurs in retrain evaluation strategy <cit.> or feature attribution methods that rely on a retrained predictor to evaluate their feature attribution result <cit.>. These methods typically train a feature selector to perform instance-wise feature selection and feed the selected features to a predictor, which learns to predict the target label based on the selected features. Such encoder-decoder structure trains the feature selector and the predictor jointly to optimize the overall prediction accuracy. However, while this evaluation strategy overcome the artifacts problem, it comes with a problem that the encoder part of the model could encode the prediction label in the explanation. Fig. <ref> shows an example of the EPITE problem. In this example, an encoder-decoder structure based feature attribution method VIBI <cit.> achieves high prediction accuracy of 72% in the retrained predictor with a single selected pixel from irrelevant areas as input and the rest part replaced by 0. This is achievable because the feature selector encodes the label information within the index of the selected feature rather than its value.The EPITE problem is first being mentioned by <cit.>, where an algorithm called REAL-X is introduced to solve it. The claim is that the EPITE problem is solved by first training a predictor, called Eval-X, using randomly chosen features, then training the Real-X feature selector through the feedback from the predictor. But the authors admit it is difficult to reach optimality in practice by training the Eval-X through randomly chosen features, especially when the feature dimension is high. Besides, their feature selector is trained based on their Eval-X predictor, so their feature selctor Real-X is learned to find key features only for the Eval-X predictor rather than arbitrary machine learning predictors.In <cit.>, the EPITE problem is referred to as the class information leakage through masking. In such case, the retrained predictor can predict the label purely by indexes of the selected features, in other words the mask shape, rather than values of the selected features. To address the EPITE problem, the authors propose a solution that involves filling the removed missing features with weighted sums of adjacent selected features as background E. This minimizes I(X';M), where X'=X·M+E·(1-M), making it harder for the retrained model to recognize the shape of the mask and make corresponding predictions. 
However, this method is not applicable when the selected features are very few.§ THE PROPOSED DOUBLE-SIDED REMOVE AND RECONSTRUCT (DORAR) SOLUTION In this paper, we try to overcome the problems we may face in a feature attribution method. One is the EPITE problem, very common in the encoder-decoder joint training structure, and other is the artifacts problem, which often emerges when evaluating masked input on a classifier trained with dataset that follows the natural data distribution. §.§ The DoRaR Feature Attribution MethodWe combine multiple improvement techniques and propose our DoRaR feature attribution method. Table. <ref> summarizes the notations used in following sections. Let X={X_1,X_2, …, X_s} be the set of samples in a dataset, where s is the size of this set. Let X_i(j) be the j-th feature of X_i. Given a black box classifier y_i = P(X_i) of this dataset, the goal is to select the n_e most important units of features of size s_e from the input sample X_i, that can be used to reconstruct a sample that achieves high prediction accuracy in the black box classifier. Fig. <ref> shows the DoRaR training framework and Algorithm. <ref> summarizes the training procedure.The current form of feature selection is intractable because we choose top n_e explanation units with the highest importance scores between 0 and 1 out of all optional units. This operation breaks the gradient calculation. In order to solve this, we use the generalized Gumbel-softmax trick<cit.>. This is a commonly used technique to approximate a non-differentiable categorical subset sampling with differentiable Gumbel-softmax samples. By approximating the discrete mask M_i^* with M_i defined by M_i = F(X_i;θ_F),0≤M_i≤1, we have the continuous approximation to the argmax function that chooses top n_e units with the highest scores. Then, we can use standard back propagation to calculate the gradient through the argmax function and train the feature selector properly. After the feature selector is well trained, we can switch back to the discrete mask M_i^* by setting the top n_e output units of the feature selector to 1 and the rest output units to 0 during testing.In the following subsections, we will explain how the artifacts problem and the EPITE problem are solved by different improvement techniques in our DoRaR method. The evaluation and comparison of each single improvement technique is presented in <ref>.§.§ Dealing with the Artifacts ProblemFor those feature attribution methods that evaluate the selected features directly in the pre-trained classifier, unwanted artifacts could be the major problem that needs to be solved. Several previous works <cit.> have proposed methods to reduce artifacts, such as using the blurred image as background noise, resizing the mask generated from a low resolution intermediate layer, or introducing smoothness regularization terms. But none of them solve the artifacts problem fundamentally because they still need to evaluate the explanation, where the non-selected features are replaced with ad-hoc values, on the pre-trained classifier directly. In our algorithm, given the selected features, we train a generative model to reconstruct an sample from the selected features with background noise, then feed the reconstructed sample into the pre-trained classifier. The assumption is that, for a well-trained generative model, a better mask should be able to select those features that can reconstruct an sample which achieves higher accuracy in the pre-trained classifier. Fig. 
<ref> shows an example that can verify our assumption. Following the example in Fig. <ref>, it could appear that the y-axis feature is a better explanation for the classifier, as replacing the value of the y feature with zeros leads to a severe change in prediction accuracy (100% to 50%). However, with the generative model, if the x-axis value is kept, while the y-axis feature is replaced by some specific values, it is easy for the generative model to reconstruct the original data by learning to introduce a constant value of y to all data points. On the other hand, when choosing to keep the y-axis value and discard x, it is less likely to achieve high classification accuracy through a generative model – using the y values only, a generator can no longer learn to reconstruct the original distinct classes.During the training process, we also introduced a reconstruction loss term for both feature selector and generative model to encourage the generated sample to be similar to the original sample. Fig. <ref> shows an example of an image generated from only 4 4x4 pixel based chunks of the original image. By doing this, the reconstructed sample will be closer to the natural data distribution.§.§ Dealing with the EPITE Problem For the EPITE problem, instead of using the commonly suggested Gaussian filter blurred input sample or average value<cit.> to fill in the non-selected area 1-M, we replace X(j) discarded by the mask by a noise sample E(j), randomly drawn from the training dataset, to minimize I(X';M). Fig. <ref> shows examples where the area discarded by the mask is filled with pixel-wise random noise drawn from the training set, with the original discarded pixels blurred through a Gaussian filter, and with the pixel mean value over the training dataset. From the second and third images, we can easily find the position of selected explanation units. However, from the first image, it is much harder for the classifier to learn to locate the selected explanation units, then make corresponding predictions.In practice, completely eliminating I(X';M) is impossible unless we use a perfect inpainter to fill the missing area, which introduces extra class information through the filling content. Therefore, in addition to apply proper background noise to minimizing I(X';M), we also reduce the impact of the EPITE problem by evaluating I(X';C), where C represents the class information, and X'=X· (1-M)+E·M.For explanations with the EPITE problem, as depicted in Fig. <ref>, it is important to note that the non-selected features still contain vital information and can achieve high prediction accuracy. To address this, our approach involves selecting explanations that demonstrate high accuracy when using the masked input (high I(X';C)) and low accuracy when using the complementary masked input (low I(X';C)). This selection process allows us to emphasize the class information present in the selected features (I(X';C|M)), rather than relying on the class information encoded in the mask learned by the retrained predictor (I(X';M;C)). Because if the generative model G_2 can't recognize the complementary mask (1-M) through X' then achieve high accuracy a_2 using the class information encoded in the mask, the generative model G_1 is less likely to take advantage from the class information encoded in the mask M and has the EPITE problem either. For a detailed proof, please refer to <ref>.§ EVALUATION SCHEME Inappropriate feature attribution method evaluation can lead to unwanted result. 
For example, feature attribution method performance can be significantly impacted by evaluation parameters such as the order in which features are added or removed, specifically prioritizing the most relevant feature first (MoRF) or the least relevant feature first (LeRF). Different choices for these parameters may lead to conflicting outcomes. This inconsistency is unavoidable in some cases. For example, one single feature can independently improves classification accuracy a lot, like one pixel in a flag that indicates an existing color in certain area. It can be easily replaced by other features, like the surrounding pixels with the same color. Such feature can have high priority in adding the most relevant feature order, but there is also chances that it has high priority in removing the least relevant feature order when similar features are already exist. Further details on the limitations of choosing either the MoRF or LeRF order are described in Section <ref>. Therefore, an appropriate evaluation metric is essential for feature attribution methods.Similarly, the addition or removal of features one by one, based on a fixed feature attribution ranking, followed by the calculation of the corresponding accuracy, the inclusion area under curve (iAUC) and the exclusion area under curve (eAUC) can lead to unreliable results. This is because the contribution of a particular feature can be heavily influenced by the presence of other features. However, when calculating iAUC or eAUC, the contribution of a feature is only assessed in the presence of either more relevant or, respectively, less relevant features. We elaborate on this issue in Section <ref>.§.§ Limitation with MoRF/LeRF order The MoRF and LeRF are widely used orders for assessing the performance of feature attribution results. In LeRF, higher accuracy is preferred when removing least relevant features in order, while in MoRF, lower accuracy is preferred. However, <cit.> identifies inconsistencies in the ranking of different attribution methods when different removal orders, such as MoRF/LeRF, are considered. The paper also proposes solutions to address this inconsistency. Nevertheless, we argue that these inconsistencies are inherent due to the mutual information between features.The left figure in Fig. <ref> illustrates a scenario where the selected features (marked in red) exhibit a scattered distribution with high attribution scores. In this case, regardless of whether we remove the most relevant or least relevant features first, the remaining features still contain valuable information and can yield high accuracy. Conversely, the right figure in Fig. <ref> presents a situation where the selected features cover important areas entirely, causing the remaining parts of the image to lack crucial information and resulting in low accuracy. In this case, if we remove features in the MoRF order, e.g. removing all selected features, it is possible for significant portions, such as the ship's outline, to be missed, leading to low accuracy and higher ranking for the right feature attribution result. On the contrary, if we remove features in the LeRF order, e.g. remove all non-selected features. The right figure will lose the whole sky area but the left figure will not miss any part entirely, which leads to higher accuracy and performance ranking for the left feature attribution result. Therefore, the inconsistent ranking of these two feature attribution results in the MoRF and LeRF orders is reasonably justified. 
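To make the removal orders concrete before discussing their AUC counterparts, the accuracy curves behind MoRF and LeRF are typically computed as sketched below. The baseline value used to fill removed features and the scikit-learn-style predict interface are assumptions of this illustration; both are exactly the kinds of evaluation parameters whose choice is discussed above.

import numpy as np

def removal_curve(model, x, y, scores, order="MoRF", baseline=0.0):
    # correctness of a single sample as features are removed one at a time
    idx = np.argsort(scores)[::-1] if order == "MoRF" else np.argsort(scores)
    x_cur = x.astype(float)
    accs = []
    for j in idx:
        x_cur[j] = baseline                               # remove the next feature
        pred = model.predict(x_cur.reshape(1, -1))[0]      # assumed scikit-learn-style interface
        accs.append(float(pred == y))
    return np.array(accs)

def removal_auc(accs):
    return float(accs.mean())                              # normalized area under the removal curve

Averaged over a test set, a high area under the LeRF curve and a low area under the MoRF curve are both taken as evidence of a good attribution; the ship example above shows why the two orders need not agree.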
§.§ Limitation with iAUC/eAUC metricThe metrics iAUC/eAUC are calculated based on a fixed feature attribution score vector. They determine the area under the accuracy curve derived by including or excluding features from the most/least relevant feature to the least/most relevant feature. However, assuming a universal feature attribution score is inappropriate. The attribution score of a feature also depends on the existence of other features.As an example, consider Fig. <ref>. When we select only 2 pixels for explanation, any 2 pixels from the ship body cannot provide enough information to predict that the image is of a ship. However, 2 pixels from the background area, such as water and sky, can be much more informative.I({X_sky,X_water};C)>I({X_ship_1,X_ship_2};C)Here, I({X_sky,X_water};C) represents the mutual information between 2 pixels from background environment and the class variable. However, if we select 64 pixels instead of 2 pixels for explanation, selecting the ship can be more accurate in making the final classification, especially when 64 pixels are just enough to draw the rough outline of the ship.I({X_sky,X_water};C|X_ship_3...X_ship_64)< I({X_ship_1,X_ship_2};C|X_ship_3...X_ship_64)While I({X_sky,X_water};C|X_ship_3...X_ship_64) represents the mutual information between two pixels from the background and the class variable, given the other 62 pixels of the ship. Thus, the attribution score of a feature depends on the other features that have been selected. Evaluating the attribution score of a feature without limiting the existence of other features can lead to unreliable results. But in iAUC curve for example, each accuracy point in the curve only depends on the n (1≤ n ≤ m) most relevant features. Therefore, a more strict and precise control of other exist features is necessary in evaluation.§.§ A parameterized definition of evaluation metricAs discussed in Section <ref> and Section <ref>, evaluating feature attribution result using single order or assuming a universal attribution score vector without considering interaction between features are unreliable. Therefore, we propose the following more reliable and more comprehensive definition for evaluating feature attribution method, so given a fixed number of selected features, all selected features are evaluated as a group, non-selected features are evaluated as a group either. Definition 1(Feature Attribution Method). A (n_e,s_e,a_1,a_2) feature selector of a black box classifier outputs n_e explanation units, each of size s_e, such that: * using only the n_e units selected by the mask from the feature selector, it is possible to generate inputs to that black box, that are classified by the black box with accuracy at least a_1 relative to the target label, and* using the complementary part of the masked inputs, excluding the selected n_e units, it is possible to generate inputs to that black box, that are classified by the black box with accuracy at most a_2 relative to the target label. Under this definition, multiple types of feature selectors exist for a classifier to be explained. Users can specify either the format or the expected performance of the feature selector. With the same number and size of explanation units, one attribution method is better than another one with lower a_1 and higher a_2. In general, given two arbitrary feature attribution methods, we can use Definition 2 as a rule for comparing these two methods. 
Definition 2(Feature Attribution Method Partial Order) Consider two feature attribution methods F_1 and F_2, with their format and performance defined by (n_e_1,s_e_1,a_1_1,a_2_1) and (n_e_2,s_e_2,a_1_2,a_2_2) respectively. If a_1_1 ≥a_1_2 and a_2_1 ≤a_2_2 and n_e_1 ≤n_e_2 and s_e_1 ≤s_e_2 then F_1 is better than F_2 and we write F_1≽ F_2. If neither F_1≽ F_2, nor F_2≽ F_1, then F_1 and F_2 are incomparable.Although this paper strictly adheres to the definition above, in the cases when n_e_1 ≤n_e_2 and s_e_1 ≤s_e_2, one could easily define other partial orders on the space of explanations, e.g., F_1≽ F_2 if a_1_1/a_2_1≥a_1_2/a_2_2 or if a_1_1-wa_2_1≥a_1_2-wa_2_2 for some positive constant w.§.§ Our DoRaR evaluation strategy We use the scheme shown in Fig. <ref> as our final evaluation strategy. It is used to test different feature attribution methods in the following section.In this evaluation framework, the output format from the feature selector F is predefined, limiting feature interaction within the top n_e× s_e features. Adding prper background noise can reduce the performance of explanations suffering from the EPITE problem. Displaying the values of both a_1 and a_2 can also expose the EPITE problem through high a_2 values. Training generative models for each tested algorithm to evaluate selected features can prevent explanations that have the artifacts problem from receiving unfair advantages over those without the artifacts problem.Additionally, for each feature selector being tested, it is worth mentioning that we utilize a simple structure for the generative models and train them using the same settings until converge. Consequently, any variations in evaluation results caused by perturbations in the generative models can be disregarded, the standard deviation caused by generative models is reported in the experiment result. This ensures that any observed differences in performance between feature selectors can be attributed to the selectors themselves, rather than the generative models used.§ COMPARISON TO OTHER ALGORITHMSFor comparison purposes, we implemented additional 6 algorithms: LIME, Grad, SmoothGrad, VIBI, Guided Feature Inversion, and Real-X, based on previous research <cit.>. We evaluated these algorithms on MNIST, CIFAR-10, and a synthetic dataset based on user-mouse interaction behavior <cit.> using testing scheme as Fig. <ref>. All other comparison feature selectors were trained either directly on their authors' source code or using our own reproduction based on the pseudo-code in their paper. The major differences among the seven algorithms are summarized in Table <ref>.We test our DoRaR algorithm and other 6 feature attribution methods on the MNIST dataset, which includes 70000 28× 28 pixels images of hand written digits, 50000 for training, 10000 for validation and 10000 for testing respectively. Each feature attribution method is evaluated 5 times, mean and standard deviation are calculated as the result, so the perturbation caused by generative model can be seen from the result. A simple 2D CNN model is trained on the training set as the black box classifier to be explained which achieves 93%accuracy on the testing set.We also test above 7 algorithms in the CIFAR-10 dataset, which consists of 60000 32×32 color images in 10 classes, with 6000 images per class. Every feature selector is tested 5 times either. There are 50000 training images and 10000 testing images. 
A 2D CNN based model from the research <cit.> is trained as the black box classifier which achieves 95% of test accuracy.Since the MNIST and CIFAR-10 datasets lack ground truth explanations, apart from the principal analysis conducted earlier, we do not have concrete evidence to suggest that a feature attribution method with higher a_1 and lower a_2 in our evaluation scheme can effectively capture the true explanation. To address this limitation, we have created a synthetic dataset that incorporates a ground truth explanation. This dataset has been specifically designed to define the sole difference between the two classes of data, thereby allowing us to thoroughly test the performance of different feature attribution methods.In the synthetic dataset, we collected mouse movement data from 18 users who performed a predefined task involving 10 consecutive movements in a specific pattern. This task was repeated a total of 2437 times. The mouse movement data was stored in the format of mouse cursor movement velocity in the x and y coordinates.For each user, we modified half of their samples in the following manner: we replaced the first 16 points in the last movement with four consecutive horizontal or vertical segments. Each segment consisted of four points with equal movement speed, ensuring that the modified 16 points had the same start and end positions as the original movement. Figure <ref> provides a visual representation of how we modified a sample in the mouse behavior dataset. Please note that for convenience, these figures were plotted using x and y positions, while the classifier was trained using x and y velocities.To evaluate the performance of different feature attribution methods, we used one-fifth of the modified and unmodified samples for testing purposes. Our pre-trained black-box classifier achieved an high accuracy of 99.78% in correctly recognizing the modified samples in the testing set.Our feature selector and generative model are built with 2D CNN and fully connected 3 layers neural networks respectively and are trained based on the black box classifier and the same training set. Details of the black box classifier and DoRaR architectures for MNIST dataset is same as described in <ref>. See <ref> for details of CIFAR-10 experiment. For the sequential data based synthetic dataset, 1D CNN is used for feature selector, <ref> shows more details of the mouse behavior based synthetic dataset experiment.§.§ Results of MNIST and CIFAR-10 DatasetTable. <ref> shows results of 7 feature attribution methods in scenarios of selecting 4, 8, 12 and 24 out of 49 4× 4 pixels based chunks as explanations of the black box classifier. For the CIFAR-10 dataset, as shwon in Table. <ref> in addition to 4×4 chunk based explanations, we also test pixel based explanations. The reason for this is that important object characteristics in the CIFAR-10 dataset are typically smaller in size and distributed across a wider area of the image compared to the characteristics found in the MNIST dataset.The result shows that in most scenarios our algorithm has higher a_1 and lower a_2 than all other algorithms. Real-X has comparable a_1 to our algorithm in some scenarios. But it has higher a_2 than our algorithm, especially in 12 chunks and 24 chunks scenarios of MNIST dataset as well as 64 pixels and 128 pixels scenarios of CIFAR-10 dataset. 
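For readers who want to relate these numbers to the evaluation scheme, a_1 and a_2 are measured roughly as sketched below: a generative model reconstructs a sample from one side of the mask, with the discarded side filled by noise drawn from the training data, and the frozen black-box classifier scores the reconstruction. For simplicity the sketch draws whole training images as the noise source, whereas the scheme described earlier draws noise pixel-wise from the training set; the tensor interfaces are likewise placeholders.

import torch

@torch.no_grad()
def side_accuracy(black_box, selector, generator, loader, noise_pool, selected_side=True):
    # a_1 when selected_side=True, a_2 when False; the black box stays frozen throughout
    correct, total = 0, 0
    for x, y in loader:
        m = selector(x)                                  # discrete per-sample mask at test time
        m = m if selected_side else 1.0 - m
        noise = noise_pool[torch.randint(len(noise_pool), (x.size(0),))]
        x_rec = generator(x * m + noise * (1.0 - m))     # reconstruct before querying the black box
        correct += (black_box(x_rec).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

Two separately trained generators are used, one for the selected side and one for the complementary side, so that neither a_1 nor a_2 is penalized by the quality of the other reconstruction.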
§.§ Result of Synthetic DatasetTable <ref> shows the results of 7 feature attribution methods in the scenario of selecting 4 segments, each consisting of 4 points, as explanations of the black box classifier. Since we have a ground truth explanation and our black-box classifier achieves very high accuracy, we can directly compare different feature attribution methods based on the hit rate of selected features that hit the ground truth explanation area.The results show that our algorithm has a higher hit rate than all other algorithms except SmoothGrad. SmoothGrad has a comparable hit rate to our algorithm, but it has a much higher time cost. The VIBI algorithm achieves high prediction accuracy in their retrained predictor (97%), but none of the selected features cover the ground truth explanation. §.§ Qualitative analysisWe choose some examples to illustrate the better performance of our algorithm than other algorithms. As depicted in Fig. <ref>, it is evident that feature attribution methods such as LIME, Grad, and SmoothGrad exhibit artifacts, particularly Grad and SmoothGrad. On the other hand, our DoRaR-based feature selector does not display such issues. This indicates that our approach effectively addresses the problem of artifacts that can arise with traditional feature attribution methods.Fig. <ref> shows some examples of the better performance of our algorithm. Since selected chunks of the Guided Feature Inversion algorithm are based on weighted sum of activation values in different channels, those totally black areas, e.g. the central part of 0 and the left part of 3, which can differentiate 0 and 3 from 8, are less likely to be selected. On the contrary, our algorithm can select those informative parts even though they have no overlapping with the digit.Fig. <ref> shows the reason why Real-X has much higher a_2 in scenarios like selecting 24 chunks. It only optimizes a_1 in the training, so when the selected part contains enough information for classification, it is less likely to further polish the explanation. On the contrary, our algorithm will also minimize the information contained in the non-selected parts, which in turn improves the quality of the explanation.Fig. <ref> shows some examples of the VIBI algorithm that encodes the prediction within the special location, e.g. uninformative corners, of masks. While they achieve high prediction accuracy in their jointly trained predictor, their prediction accuracy is lower in our testing scheme. Explanations selected by our algorithm don't show this problem.For the CIFAR-10 dataset, except previous results, we have the following further findings. Like we discussed in section. <ref> and section. <ref>, for the pre-trained classifier, sometimes the background provides more information than the target itself for classification. For example, in Fig. <ref>, if the limited number of selected chunks can't capture enough information of the ship's body, it prefer to select the background, e.g. sky and water.Pixel based explanations can capture more details than chunk based explanations. For example, in Fig. <ref>, while 4 4×4 chunks based explanations focus on the background, pixel based explanations can capture both background information and some details of the ship body. However, pixels based explanations can be defective. Figure <ref> shows pixel-based explanations of the CIFAR-10 dataset from Real-X, which have a very scattered distribution. 
In contrast, our algorithm has a more concentrated feature selection that has much less mutual information with non-selected parts. This is caused by the term I(X';X';C|M) in the target function <ref>. This example shows that when two feature attribution methods are incomparable under our partial order definition, we sometimes need to visually compare the specific explanations produced. Although we subjectively prefer the explanation of our algorithm, a more scattered feature selection as in Figure <ref> that allows a higher mutual information I(X';X';C|M) can be generated by setting the training hyper-parameter α to a smaller value. In the CIFAR-10 experiment, in addition to the previous results, we made further findings by observing Figure <ref> to Figure <ref>. For example, in the truck image, our algorithm and LIME are the only two methods that focuses on the area between the tire and the ground, as well as the shadow in between. This is a distinctive pattern for truck images, which we did not expect before. But our method can find key features in real time while LIME can not.§ CONCLUSION Based on our experimental observations, we conclude that minimizing prediction accuracy achieved by non-selected features can prevent the EPITE problem. If the mask that selects those features may cause information leakage, the complement mask has the same issue. Therefore, evaluating the performance of both selected and non-selected features can guarantee the reliability of explanations.Reconstructing a new sample through a generative model to evaluate explanations is effective to solve the artifacts problem. Directly evaluating out of distribution masked input in the black-box classifier may lead to unexpected result, but a generative model that is trained to reconstruct the original input can help preventing such problem.Adding the background noise in training can improve the performance of the explanation. The choice of background should minimize the mutual information between masked input and the mask. But the filling content should not provide extra class information.It is crucial to predefine the size and number of basic explanation units in the evaluation process to ensure a fair comparison. However, the selection of these parameters should be customized to align with the specific requirements of the user. It is important to note that a feature attribution method that is effective in identifying important individual features may not perform as well when high attribution score features are evaluated as a group using our evaluation method, and vice versa. This highlights the importance of considering the specific context and goals when selecting an appropriate feature attribution method.§ OTHER RELATED WORKVarious techniques have been developed to interpret machine learning models. Feature attribution methods that shows which feature contribute more to the classification can classified into two categories: real-time methods and non-real-time methods.Non-real-time attribution methods require multiple iterations to learn an explanation for a single input sample. This category encompasses some perturbation-based methods <cit.>, some locally linear methods <cit.> and gradient-based methods <cit.> except the basic gradient-based method <cit.> which only require one backward propagation. These methods suffer from computational inefficiency, and the time required for certain techniques increases exponentially with the number of features. 
As a result, they are impractical for industry applications.Real-time feature attribution methods, like those described in <cit.>, require just one iteration to generate an explanation. This is typically accomplished by training a feature selector model. The feature selector model generates a feature selection mask for an input sample through one forward propagation.Real-time feature attribution methods, despite their speed, face various challenges due to their inappropriate feature selector training process caused by the aforementioned problems. One such problem arises when feeding masked inputs directly to the black-box model for prediction loss calculation, which can introduce artifacts to the explanation.Although retraining a new predictor might alleviate the artifacts issue <cit.>, it can give rise to the EPITE problem. Some recent efforts has been made to overcome such problem, e.g. filling the non selected pixels with weighted mean of its neighbors <cit.>, training feature selector and predictor separately <cit.>.In addition, the authors of <cit.> attempt to address the inconsistency in evaluation strategies based on different orders. However, a universal attribution vector that performs consistently in both MoRF and LeRF or iAUC and eAUC may not exist since feature interactions can significantly impact the feature attribution score.IEEEtran10url@samestyle lipton2018mythos Z. C. Lipton, “The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery.” Queue, vol. 16, no. 3, pp. 31–57, 2018.silver2017mastering D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton et al., “Mastering the game of go without human knowledge,” nature, vol. 550, no. 7676, pp. 354–359, 2017.du2019techniques M. Du, N. Liu, and X. Hu, “Techniques for interpretable machine learning,” Communications of the ACM, vol. 63, no. 1, pp. 68–77, 2019.zeiler2014visualizing M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European conference on computer vision.1em plus 0.5em minus 0.4emSpringer, 2014, pp. 818–833.zhou2015predicting J. Zhou and O. G. Troyanskaya, “Predicting effects of noncoding variants with deep learning–based sequence model,” Nature methods, vol. 12, no. 10, pp. 931–934, 2015.zintgraf2017visualizing L. M. Zintgraf, T. S. Cohen, T. Adel, and M. Welling, “Visualizing deep neural network decisions: Prediction difference analysis,” arXiv preprint arXiv:1702.04595, 2017.fong2017interpretable R. C. Fong and A. Vedaldi, “Interpretable explanations of black boxes by meaningful perturbation,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 3429–3437.dabkowski2017real P. Dabkowski and Y. Gal, “Real time image saliency for black box classifiers,” Advances in neural information processing systems, vol. 30, 2017.ribeiro2016should M. T. Ribeiro, S. Singh, and C. Guestrin, “" why should i trust you?" explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 2016, pp. 1135–1144.lundberg2017unified S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” Advances in neural information processing systems, vol. 30, 2017.dombrowski2019explanations A.-K. Dombrowski, M. Alber, C. J. Anders, M. Ackermann, K.-R. Müller, and P. 
Kessel, “Explanations can be manipulated and geometry is to blame,” arXiv preprint arXiv:1906.07983, 2019.heo2019fooling J. Heo, S. Joo, and T. Moon, “Fooling neural network interpretations via adversarial model manipulation,” Advances in Neural Information Processing Systems, vol. 32, pp. 2925–2936, 2019.chen2018learning J. Chen, L. Song, M. Wainwright, and M. Jordan, “Learning to explain: An information-theoretic perspective on model interpretation,” in International Conference on Machine Learning.1em plus 0.5em minus 0.4emPMLR, 2018, pp. 883–892.yoon2018invase J. Yoon, J. Jordon, and M. van der Schaar, “Invase: Instance-wise variable selection using neural networks,” in International Conference on Learning Representations, 2018.fu2021differentiated W. Fu, M. Wang, M. Du, N. Liu, S. Hao, and X. Hu, “Differentiated explanation of deep neural networks with skewed distributions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.mahendran2015understanding A. Mahendran and A. Vedaldi, “Understanding deep image representations by inverting them,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 5188–5196.yosinski2015understanding J. Yosinski, J. Clune, A. Nguyen, T. Fuchs, and H. Lipson, “Understanding neural networks through deep visualization,” arXiv preprint arXiv:1506.06579, 2015.du2018towards M. Du, N. Liu, Q. Song, and X. Hu, “Towards explanation of dnn-based prediction with guided feature inversion,” in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 1358–1367.hooker2018benchmark S. Hooker, D. Erhan, P.-J. Kindermans, and B. Kim, “A benchmark for interpretability methods in deep neural networks,” arXiv preprint arXiv:1806.10758, 2018.bang2021explaining S. Bang, P. Xie, H. Lee, W. Wu, and E. Xing, “Explaining a black-box by using a deep variational information bottleneck approach,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 13, 2021, pp. 11 396–11 404.simonyan2013deep K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep inside convolutional networks: Visualising image classification models and saliency maps,” arXiv preprint arXiv:1312.6034, 2013.smilkov2017smoothgrad D. Smilkov, N. Thorat, B. Kim, F. Viégas, and M. Wattenberg, “Smoothgrad: removing noise by adding noise,” arXiv preprint arXiv:1706.03825, 2017.jethani2021have N. Jethani, M. Sudarshan, Y. Aphinyanaphongs, and R. Ranganath, “Have we learned to explain?: How interpretability methods can learn to encode predictions in their interpretations.” in International Conference on Artificial Intelligence and Statistics.1em plus 0.5em minus 0.4emPMLR, 2021, pp. 1459–1467.rong2022consistent Y. Rong, T. Leemann, V. Borisov, G. Kasneci, and E. Kasneci, “A consistent and efficient evaluation strategy for attribution methods,” arXiv preprint arXiv:2202.00449, 2022.nguyen2015deep A. Nguyen, J. Yosinski, and J. Clune, “Deep neural networks are easily fooled: High confidence predictions for unrecognizable images,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 427–436.jang2016categorical E. Jang, S. Gu, and B. Poole, “Categorical reparameterization with gumbel-softmax,” arXiv preprint arXiv:1611.01144, 2016.fu2022artificial S. Fu, D. Qin, G. Amariucai, D. Qiao, Y. Guan, and A. 
Smiley, “Artificial intelligence meets kinesthetic intelligence: Mouse-based user authentication based on hybrid human-machine learning,” in Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security, 2022, pp. 1034–1048.yu2018deep F. Yu, D. Wang, E. Shelhamer, and T. Darrell, “Deep layer aggregation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 2403–2412.sundararajan2017axiomatic M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” in International conference on machine learning.1em plus 0.5em minus 0.4emPMLR, 2017, pp. 3319–3328.selvaraju2017grad R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.dai2021towards E. Dai and S. Wang, “Towards self-explainable graph neural network,” in Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021, pp. 302–311.teso2019toward S. Teso, “Toward faithful explanatory active learning with self-explainable neural nets,” in Proceedings of the Workshop on Interactive Adaptive Learning (IAL 2019).1em plus 0.5em minus 0.4emCEUR Workshop Proceedings, 2019, pp. 4–16.kumar2021self S. Kumar, S. C. Yu, A. Michelson, and P. R. Payne, “Self-explaining neural network with plausible explanations,” arXiv preprint arXiv:2110.04598, 2021. § MITIGATING EPITE PROBLEM BY EVALUATING PREDICTION ACCURACY ACHIEVED BY NON-SELECTED FEATURES We demonstrate how we avoid EPITE problem when choosing explanation by evaluating the prediction accuracy achieved by non-selected features.When measuring the prediction accuracy achieved through masked input, we expect I(X';C) to be high. On the contrary, we expect I(X';C) to be low, where X'=X· (1-M)+E·M represents the features selected by complementary mask 1-M with the rest area filled with background noise E, and C represents the class variable. We can write: I(X';C)=I(X';C|M)+I(X';C;M), I(X';C)=I(X';C|1-M)+I(X';C;1-M)and we haveH(M)=H(1-M).Now, since we expect high prediction accuracy from the selected features and low prediction accuracy from the non-selected features, we can assume that we are actually evaluatingI(X';C)-I(X';C)= I(X';C|M)+I(X';M;C)-I(X';C|M)-I(X';M;C) SinceI(X';C|M)= I(X';C|M,X')+I(X';X';C|M)we can now write that I(X';C|M)-I(X';C|M)=I(X';C|M)-I(X';C|M,X')-I(X';X';C|M)Using the chain rule for information, we haveI(X';C|M)+I(X';C|M,X')= I(X',X';C|M)Assuming the background noise E contain no class information, we haveI(X',X';C|M)= I(X;C|M)= twhere t is irrelevant to feature selection and treated as a constant here. Therefore, I(X';C|M,X')=t-I(X';C|M), s.t.I(X';C|M)-I(X';C|M)= 2I(X';C|M)-I(X';X';C|M)-t Therefore, using function <ref> our target function <ref> can be simplified toI(X';C|M)+I(X';M;C)-I(X';C|M)-I(X';M;C)= 2I(X';C|M)-I(X';X';C|M)+I(X';M;C)-I(X';M;C)-t Using I(X';C)-I(X';C) as the target function to evaluate, comparing with only evaluating I(X';C), we increase the weight of the class information from interested features I(X';C|M) and reduce the weight of the source of EPITE problem I(X';M;C) largely in evaluation. Besides, if the EPITE problem gets severe, e.g. an improper background selection such as zero, both I(X';M;C) and I(X';M;C) will get close to the upper bound H(C) so I(X';M;C)-I(X';M;C) will get close to 0. 
Therefore, the effect of I(X';M;C) in target function will become more insignificant. The effect of minimizing I(X';X';C|M)] is discussed in Section. <ref> through Fig. <ref>. In our DoRaR evaluation scheme, we compare both a_1 (prediction accuracy achieved by selected features) and a_2 (prediction accuracy achieved by non-selected features) without using a combined evaluation metric. In our feature attribution method proposed based on DoRaR, we use a hyperparameter α to balance the weight between I(X';C) (prediction loss achieved by selected features) and I(X';C) (prediction loss achieved by non-selected features). § EVALUATING IMPROVEMENT OF BACKGROUND NOISE (BN), RECONSTRUCTION LOSS (RL) AND COMPLEMENTARY MASK (CM) IN MNIST DATASET§.§ Dataset and Experiment SettingWe tested each improvement technique, which included the inclusion of pixel-wise random background noise draw from empirical data distribution, the addition of a reconstruction loss term, the addition of a loss term corresponding to the complementary masked input, as well as their combinations by training a feature selector, then evaluate its performance in our evaluation scheme in the MNIST dataset. The dataset consists of 70,000 28×28 pixel images of handwritten digits, with 50,000 images for training, 10,000 for validation, and 10,000 for testing. The experimental settings for the MNIST experiment in Section. <ref> are the same as those described here. §.§ Model StructuresBlack Box Classifier Structure. For the black box classifier, we use a 2D CNN model which consists of 2 convolutional layers. The first convolutional layer has the kernel size 5 followed by a max-pooling layer with pool size 2 and a ReLU activation function. The second convolutional layer has the kernel size 2 followed by a 2D dropout layer and a ReLU function. Then it goes to a max-pooling layer with pool size 2 followed by a ReLU function. These two convolutional layers contain 10 and 20 filters, respectively. After these two convolutional layers, there are two fully connected layers with 20 and 10 units, respectively, connected by a ReLU function and a dropout layer in between. After that, there is a log-softmax calculation, so the final output returns a vector of log-probabilities for the ten digits.DoRaR Model Structure. Our DoRaR training procedure contains a feature selector and one or two generative models. The feature selector has 3 convolutional layers. The first two convolutional layers have kernel size 5 followed by a ReLU function and a max-pooling layer with pool size 2. The third convolutional layer has kernel size 1. Three convolutional layers have 16, 32 and 1 filters respectively. Then the output is flattened and the log-softmax value is calculated as the probability of choosing each optional explanation unit.After up-sampling the input tensor from 7×7 to the standard image size 28×28, it is fed to the generative model, which consists of 2 fully connected layers, both with 784 units followed by a ReLU function and a Sigmoid function respectively. §.§ Parameter SelectionValues for parameters α and β are determined by implementing each version of improvements (BN, RL, CM and all their combination), with different combination of parameter values. Fig. <ref> shows examples of testing results of choosing different number of chunks with different values of α and β, e.g., α=4/49, 8/49, 12/49, 24/49 and β=0, 0.01, 0.1, 1, 10. 
Taking the 4 chunks scenario as an example, by choosing α = 4/49 and β=1 (top red plus mark), we get the best prediction accuracy a_1 from selected chunks. In the following experiment, α is set to the fraction of the number of selected units to the total number of available units and β is set to 1, which achieves relatively good overall performance in terms of both a_1 and a_2 as shown in red marks in Fig. <ref>.Other parameters are set as follows: learning rate for feature selector and generative models are λ = 5e-4. We use the Adam optimizer with batch size 100, while the coefficients used for computing running averages of gradient and its square are set as (β1 , β2 ) = (0.5, 0.999)§.§ Evaluated Methods In order to study the effectiveness of each improvement method, we compare every single improvement method as shown in Fig. <ref> and all their combinations. We compare the performance of following the 8 schemes, in terms of the accuracy a_1 and the accuracy for the complementary masked input a_2:* B: Baseline (Feature selector + Generative model without background noise, reconstruction loss and complementary masked input as shown in Fig. <ref>.)* BN: Baseline + Background Noise* RL: Baseline + Reconstruction Loss* CM: Baseline + Complementary Masked Input* BN+RL: Baseline + Background Noise + Reconstruction Loss* BN+CM: Baseline + Background Noise + Complementary Masked Input* RL+CM: Baseline + Reconstruction Loss + Complementary Masked Input* BN+RL+CM (DoRaR): Baseline + Background Noise + Reconstruction Loss + Complementary Masked Input.§.§ Results Fig. <ref> shows the results of all possible combinations of the three improvement methods, for the 4 chunks and the 8 chunks scenarios. The results show that the baseline scheme always has the worst performance. The baseline combined with all three improvement methods has the best overall performance if we consider both a_1 and a_2. Single reconstruction loss has limited improvement, it has to be combined with other improvement methods. Given this result, we decide to use the algorithm that combines all three improvement methods as our DoRaR algorithm in this research.§ CIFAR-10 DATASET EXPERIMENTAL SETTINGS§.§ Model StructuresPre-trained Classifier Structure. The pre-trained classifier is based on the model presented in the research <cit.>. It consist 3 basic convolutional layers followed by a batch normalization layer and a ReLU function then 4 layers of convolutional layer based tree blocks. All convolutional layers have the kernel size 3. Detailed structure of the convolutional layer based tree blocks is introduced clearly in <cit.>.DoRaR Structure. DoRaR model of the CIFAR-10 dataset is similar to that of the MNIST dataset with some small changes. For the feature selector, it has 3 convolutional layers. If the explanation unit is 4×4 chunk, then first two convolutional layers have kernel size 5 followed by a ReLU function. If it's pixel, then there is a max-pooling layer with the pool size 2 after each ReLU function. Third convolutional layer has kernel size 1. Three convolutional layers have 8, 16 and 1 filter respectively.According to the image size of the CIFAR-10 dataset, the convolutional layers in generative model have 32×32×3 units. §.§ Parameter SelectionParameter values for α and β are tuned in the same way as described in S1. Other parameter values are same as the MNIST dataset except the learning rate for generative model which is set to λ = 1e-4. 
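To make the layer-by-layer descriptions in these appendices concrete, a minimal PyTorch sketch of the MNIST feature selector and generative model is given below; the CIFAR-10 variant differs only in the sizes noted above. It is a reconstruction from the text rather than the exact training code; in particular, the padding needed to preserve the 7×7 grid of candidate chunks is our assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureSelector(nn.Module):
    """Scores the 49 candidate 4x4 chunks of a 28x28 MNIST image."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=5, padding=2)
        self.conv3 = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, x):                              # x: (B, 1, 28, 28)
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)     # -> (B, 16, 14, 14)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)     # -> (B, 32, 7, 7)
        x = self.conv3(x).flatten(1)                   # -> (B, 49)
        return F.log_softmax(x, dim=1)                 # log-probability of picking each chunk

class Generator(nn.Module):
    """Reconstructs a 28x28 image from the upsampled masked input (flattened to 784)."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 784)
        self.fc2 = nn.Linear(784, 784)

    def forward(self, x):                              # x: (B, 784)
        return torch.sigmoid(self.fc2(F.relu(self.fc1(x))))

The mouse-movement variant follows the same pattern, with the kernel sizes, filter counts and layer widths replaced by the values listed in the corresponding appendix.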
§ SYNTHETIC DATASET EXPERIMENTAL SETTINGS

§.§ Model Structures

Black Box Classifier Structure. The black box classifier consists of one 2D convolutional layer with kernel size 2×9 and two 1D convolutional layers with kernel sizes 7 and 5, each followed by a ReLU function, with a dropout layer in front. These three CNN layers have 64, 96 and 128 filters, respectively. The first two CNN layers are each followed by a max-pooling layer with pool size 2 and a batch-norm layer. After the CNN layers there is a bi-directional LSTM layer, followed by two fully connected layers with 102400 and 128 units.

DoRaR Structure. The feature selector has 3 convolutional layers and 2 max-pooling layers with the same kernel sizes and filter numbers as in the black-box classifier, except that the third convolutional layer has only 1 output channel. According to the sample size of the mouse dataset, the linear generative model has 2 layers, each with 3200 units.

§.§ Parameter Selection

Parameter values for α and β are tuned in the same way as described in Section <ref>. Other parameter values are the same as for the CIFAR-10 dataset, except the batch size, which is set to 50. | http://arxiv.org/abs/2310.17945v1 | {
"authors": [
"Dong Qin",
"George Amariucai",
"Daji Qiao",
"Yong Guan",
"Shen Fu"
],
"categories": [
"cs.LG",
"cs.AI"
],
"primary_category": "cs.LG",
"published": "20231027074045",
"title": "A Comprehensive and Reliable Feature Attribution Method: Double-sided Remove and Reconstruct (DoRaR)"
} |
IEEE Transactions on Wireless Communications, DOI: 10.1109/TWC.2023.3328713

Generalized Firefly Algorithm for Optimal Transmit Beamforming

Tuan Anh Le and Xin-She Yang

T. A. Le and X.-S. Yang are with the Faculty of Science and Technology, Middlesex University, London, NW4 4BT, UK. Email: {t.le; x.yang}@mdx.ac.uk. This paper has been presented in part at the IEEE Vehicular Technology Conference (VTC 2023-Spring), Florence, Italy, June 20-23, 2023.

January 14, 2024

This paper proposes a generalized Firefly Algorithm (FA) to solve an optimization framework having objective function and constraints as multivariate functions of independent optimization variables. Four representative examples of how the proposed generalized FA can be adopted to solve downlink beamforming problems are shown for classic transmit beamforming, cognitive beamforming, reconfigurable-intelligent-surfaces-aided (RIS-aided) transmit beamforming, and RIS-aided wireless power transfer (WPT). Complexity analyses indicate that in large-antenna regimes the proposed FA approaches require less computational complexity than their corresponding interior point methods (IPMs) do, yet demand a higher complexity than the iterative and the successive convex approximation (SCA) approaches do. Simulation results reveal that the proposed FA attains the same global optimal solution as that of the IPM for an optimization problem in cognitive beamforming. On the other hand, the proposed FA approaches outperform the iterative, IPM and SCA approaches in terms of obtaining better solutions for the optimization problems of, respectively, classic transmit beamforming, RIS-aided transmit beamforming and RIS-aided WPT.

Firefly algorithm, nature-inspired optimization, transmit beamforming, reconfigurable intelligent surfaces.

§ INTRODUCTION

Transmit beamforming problems are normally cast as optimization problems where beamforming vectors are the optimization variables. Two fundamental optimization problems in transmit beamforming are: i) minimizing the total transmit power subject to signal-to-interference-plus-noise-ratio (SINR) constraints <cit.>; ii) maximizing the weakest SINR subject to a total power constraint <cit.>. In fact, these two problems are equivalent <cit.>. A generalized version of the second problem is introduced in <cit.> where the objective is to maximize an arbitrary utility function of the SINRs, which is strictly increasing in every receiver's SINR, subject to a power constraint. Another variation of the second optimization problem is sum rate maximization <cit.>. Furthermore, additional constraints can be introduced to these fundamental problems to capture other wireless communication applications. For instance, a soft-shaping interference constraint was added for cognitive radio scenarios <cit.> while a power transfer constraint was included for simultaneous-wireless-information-and-power-transfer scenarios <cit.>.
In addition, various metrics have been utilized to formulate downlink beamforming optimization problems such as secrecy capacity <cit.>, energy efficiency <cit.>, data transmission reliability, data transmission security, and power transfer reliability <cit.>.Since the SINR is a non-convex quadratic function of the beamforming vectors, the two fundamental beamforming optimization problems are NP-hard and cannot be solved in polynomial time. Fortunately, exploiting the hidden convexity property of the SINR metric, an elegant framework was proposed in<cit.> to convert these two optimization problems into convex conic programming forms, which can be effectively solved by a standard interior point method (IPM). Furthermore, uplink-downlink duality was utilized to derive iterative algorithms to find optimal beamforming vectors for some power minimization problems, e.g., <cit.>. An iterative algorithmwas introduced in <cit.> to attain optimal beamforming vectors for the sum rate maximization.Numerous transmit beamforming problems can be realized in quadratically constrained quadratic programs (QCQPs) of beamforming vectors, which are mostly non-convex <cit.>. To solve a QCQP problem, a semidefinite relaxation technique <cit.> is adopted in which the original QCQP is converted to a convex semidefinite programming (SDP) with new optimization variables as beamforming matrices. If solving the transformed SDP yields a rank-one optimal beamforming matrix, then this optimal matrix is also the optimal solution to the original QCQP. Otherwise, an approximated solution to the original QCQP can be obtained by exploiting some rank-one approximations or the Gaussian randomize procedure <cit.>. Unfortunately, obtaining such solution requires further computational resources yet results in a sub-optimal solution. Optimization variables for downlink beamforming problems may include different types ofbeamforming vectors. For example, in a reconfigurable-intelligent-surface-aided (RIS-aided) communication system, see e.g., <cit.> and references therein, the optimization variables are active beamforming vectors for the base station (BS) and a passive beamformimg vector for the RIS. The objective function and/or constraints for a RIS-aided communication system are functions of both active and passive beamforming vectors. These beamforming vectors are independent variables yet need to be jointly optimized making their problems non-convex. Widely adopted approaches for tackling such problems are to iteratively solve two sub-optimization problems, a.k.a., alternative optimization (AO) approach <cit.>, or to approximate a non-convex using first-order Taylor expansion, a.k.a., successive convex approximation (SCA) <cit.>. In an AO approach, each of these two sub-optimization problems, one variable is treated as a constant while solving for the other. These sub-optimization problems themselves are mostly in QCQP forms. Due to the inherent non-convexity character of the original and sub-optimization problems, the resulting active and passive beamforming vectors may not be the global solutions.Whereas in a SCA approach, a lower (or upper) bounded solution is normally attained. IPMs, a.k.a., barrier methods, are gradient based algorithms being good at exploitation,[Exploitation is the ability of using any information from the problem of interest to form new solutions which are better than the current ones <cit.>.] 
a.k.a., intensification; hence, they are regarded as effective methods for solving convex optimization problems <cit.>. Unfortunately, most transmit beamforming problems are non-convex. Solving non-convex optimization problems requires algorithms with better exploration[Exploration is the ability to efficiently explore the search space and form new solutions with sufficient diversity, far from the existing ones <cit.>.] ability than that of the IPMs, to avoid getting trapped in a local optimum. The firefly algorithm (FA), a nature-inspired algorithm, possesses both exploitation and exploration abilities. Consequently, the FA is a good candidate for solving non-convex downlink beamforming problems. The FA is an easy-to-implement, simple, and flexible algorithm based on the flashing characteristics and behaviour of tropical fireflies <cit.>. The FA was first developed by Xin-She Yang in late 2007 and published in 2008 <cit.> for optimization problems whose objective and constraints are functions of a single optimization variable. Although the FA has been widely applied to many applications <cit.>, there has not been any significant work investigating the application of the FA to transmit beamforming problems. There were only two attempts to adopt the FA, for a throughput maximization problem in <cit.> and for a power minimization problem in <cit.>. As these two attempts only capture two fundamental transmit beamforming problems, it is not clear how the FA can be adopted to solve other types of transmit beamforming problems.

This paper takes a further step toward implementing the FA to solve a wider range of transmit beamforming optimization problems. The contributions of the paper can be summarized as follows.

* The paper proposes a generalized FA to find the optimal solution of an optimization framework where the objective function and constraints are multivariate functions of multiple independent optimization variables. The problems in <cit.> and <cit.> are only two special cases of the proposed generalized FA, while the proposed generalized FA is capable of handling a larger range of transmit beamforming problems.

* The paper shows four representative examples of how the generalized FA can be adopted for solving transmit beamforming problems, i.e., a classic transmit beamforming approach, a cognitive beamforming approach, a RIS-aided beamforming approach, and a RIS-aided wireless power transfer (WPT) approach. The applications of the proposed generalized FA go beyond these four examples, which are only given to showcase how different types of beamforming problems can be handled by the generalized FA.

* For the sake of completeness and comparison, the iterative closed-form or SDP formulations of the investigated beamforming approaches are also presented.
The paper analyzes and compares the complexities of the iterative or SDP and FA implementations of each beamforming approach.* Simulations are carried out to evaluate the performances of the proposed FAs for the classic transmit beamforming, cognitive beamforming, RIS-aided, and RIS-aided WPT beamforming approaches.Notation: Lower and upper case letter y and Y: a scalar; bold lower case letter 𝐲: a column vector; bold upper case letter 𝐘: a matrix; ·: the Euclidean norm; (·)^T: the transpose operator; (·)^H: the complex conjugate transpose operator; Tr(·): the trace operator; 𝐘≽0: 𝐘 is positive semidefinite; 𝐈_x: an x × x identity matrix; 𝒪: the big O notation;ℂ^M× 1: the set of all M× 1 vectors with complex elements; ℍ^M× M: the set of all M× M Hermitian matrices; y∼𝒞𝒩(0,σ^2): y is a zero-mean circularly symmetric complex Gaussian random variable with variance σ^2;diag( 𝐲): a diagonal matrix whose diagonal elements are the entries of vector 𝐲; and finally diag( 𝐘): a vector whose entries are the diagonal elements of matrix 𝐘. § GENERALIZED FIREFLY ALGORITHM FRAMEWORK §.§ Proposed Generalized Firefly Algorithm FrameworkThe FA was developed based on the following three idealized rules <cit.>. First, any firefly attracts other fireflies regardless of its sex. Second, the attractiveness of any firefly to the other one is proportional to its brightness. Both attractiveness and brightness decrease as the distance between these two fireflies increases. Given two flashing fireflies, the darker firefly will move towards the brighter one. If a firefly does not find any brighter one, it will make a random move. Third, the brightness of a firefly depends on the landscape of the objective function.In this section, we propose a generalized FA to find optimal solution for an optimization framework containing both objective and constraints as multivariate functions of independent variables. To that end, we first introduce the following optimization framework.𝐀,𝐁,⋯,𝐙minimize f( 𝐀,𝐁,⋯,𝐙),g_l( 𝐀,𝐁,⋯,𝐙) ≤ 0, l ∈{1,2,…, L}, h_k( 𝐀,𝐁,⋯,𝐙) =0,k ∈{1,2,… K},where 𝐀∈ℂ^M_a× N_a, 𝐁∈ℂ^M_b× N_b,⋯,𝐙∈ℂ^M_z× N_z, i.e., M_a, N_a, M_b,N_b,⋯,M_z,N_z ≥ 1, are decision variables, a.k.a., optimization variables. Depending on the the values of {M_a, N_a, M_b,N_b,⋯,M_z,N_z}, the decision variables can be matrices, vectors, scalars, or the combination of all. We continue by using the penalty method <cit.> to equivalently rewrite (<ref>) as:𝐀,𝐁,⋯,𝐙minimize f( 𝐀,𝐁,⋯,𝐙)+P( 𝐀,𝐁,⋯,𝐙),where P( 𝐀,𝐁,⋯,𝐙) is the penalty term defined as:P( 𝐀,𝐁,⋯,𝐙)=∑_l=1^L λ_lmax{0,g_l( 𝐀,𝐁,⋯,𝐙)}^2 +∑_k=1^K ρ_k{h_k( 𝐀,𝐁,⋯,𝐙)}^2.In (<ref>), λ_l >0,∀ l, and ρ_k >0,∀ k, are penalty constants. Let {𝐀_i,𝐁_i,⋯,𝐙_i} be the i-th firefly amongst the population of N fireflies, i.e., i ∈{1,2,⋯, N}. Following the second rule of the FA, the brightest firefly is the most attractive one. 
Since the proposed optimization framework is a minimization, we define the brightness of firefly i as:[Note that if (<ref>) is a maximization problem, then (<ref>) can be expressed as: 𝐀,𝐁,⋯,𝐙minimize - f( 𝐀,𝐁,⋯,𝐙)+P( 𝐀_i,𝐁_i,⋯,𝐙_i).]I_i( 𝐀_i,𝐁_i,⋯,𝐙_i)=1/f( 𝐀_i,𝐁_i,⋯,𝐙_i)+P( 𝐀_i,𝐁_i,⋯,𝐙_i).For any two fireflies i,j∈{1,2,⋯,N }, if I_j( 𝐀_j,𝐁_j,⋯,𝐙_j) > I_i( 𝐀_i,𝐁_i,⋯,𝐙_i), then firefly i will move towards firefly j at (n+1)-th generation as:𝐀_i^(n+1) = 𝐀_i^(n)+β_a,0e^-γ_x (r_a,ij^(n))^2( 𝐀_j^(n)-𝐀_i^(n))+α_a^(n)Λ_a,i^(n), 𝐁_i^(n+1) = 𝐁_i^(n)+β_b,0e^-γ_y (r_b,ij^(n))^2( 𝐁_j^(n)-𝐁_i^(n))+α_b^(n)Λ_b,i^(n), ⋮ 𝐙_i^(n+1) = 𝐙_i^(n)+β_z,0e^-γ_z (r_z,ij^(n))^2( 𝐙_j^(n)-𝐙_i^(n))+α_z^(n)Λ_z,i^(n),where r_a,ij^(n)=|| 𝐀_j^(n)-𝐀_i^(n)||, r_b,ij^(n)=|| 𝐁_j^(n)-𝐁_i^(n)||,⋯,r_z,ij^(n)=|| 𝐙_j^(n)-𝐙_i^(n)||are the Cartesian distances which are not necessary Euclidean distances yet they can be any measure effectively characterized the quantities of interest in the optimization problem;β_a,0,β_b,0,⋯,β_z,0 are, respectively, the attractiveness at r_a,ij^(n)=0, r_b,ij^(n)=0,⋯,r_z,ij^(n)=0; finally γ_a,γ_b,⋯,γ_z present the variations of the attractiveness. The second terms in (<ref>),(<ref>), and (<ref>) capture the attractions. The third terms in (<ref>),(<ref>), and (<ref>) are randomizations with randomization factors α_a^(n),α_b^(n),⋯,α_z^(n) and Λ_a,i^(n)∈ℂ^M_a × N_a,Λ_b,i^(n)∈ℂ^M_b × N_b,⋯,Λ_z,i^(n)∈ℂ^M_z × N_z being matrices of random numbers drawn from a Gaussian or an uniform distribution. The proposed generalized FA for solving the optimization framework (<ref>) is summarized in Algorithm <ref>, where T is the maximum generation of the algorithm. For any particular optimization problem subsumed under the framework, the corresponding FA will have the same steps as those in Algorithm <ref> except the input, step 3, step 16, step 18, step 19, and the return value.§.§ Asymptotic Convergence and OptimalitySince the firefly algorithm, like quite a few other nature-inspired algorithms, is a metaheuristic algorithm, there is no rigorous proof of convergence so far in the current literature, despite many applications of such metaheuristic algorithms. In this section, we provide some intuitive discussions on the optimality and convergence of the FA framework.[Mathematical analysis of the FA's optimality and convergence deserves an important research topic. Such analysis is postponed to future research due to the space constraint. ]§.§.§ Asymptotic Optimality Without loss of generality, let γ_a=γ_b=⋯=γ_z=γ, we consider two special cases of the variations of the attractiveness when γ→ 0 and γ→∞. When γ→ 0, it is clear that e^-γ (r_a,ij^(n))^2→ 1, e^-γ (r_b,ij^(n))^2→ 1, ⋯, e^-γ (r_z,ij^(n))^2→ 1. Therefore the attractivenesses in (<ref>),(<ref>), and (<ref>) are constant and, respectively, equal to β_a,0, β_b,0, and β_z,0. Equivalently, it is an idealized sky scenario where the brightness of each firefly does not change over the distance, which can be seen everywhere. Consequently, a global optimum can be obtained.On the other hand, whenγ→∞, it is obvious that e^-γ (r_a,ij^(n))^2→ 0, e^-γ (r_b,ij^(n))^2→ 0, ⋯, e^-γ (r_z,ij^(n))^2→ 0, indicating that the attractiveness of each firefly is zero. Equivalently, each firefly is randomly in a heavily foggy region and cannot be seen by the others. Each will randomly move and the optimality is not always guaranteed. In this case, FA is equivalent to a random search approach.In fact, the attractiveness is in between these two extreme cases, i.e., 0 < γ <∞. 
The value of γ^-0.5 defines the average distance over which a herd of fireflies can be seen by its adjacent herds. Hence, the entire population can be separated into a number of herds. This automatic division property gives the FA a suitable ability to handle highly nonlinear and multimodal optimization problems. By controlling the attractiveness γ_a, γ_b, ⋯, γ_z and the roaming randomness α_a, α_b, ⋯, α_z, it has been shown in previous studies that the FA can outperform both Particle Swarm Optimization (PSO), see, e.g., <cit.>, and random search approaches, see, e.g., <cit.>.

§.§.§ Asymptotic Convergence

When γ→ 0, the convergence of the FA is similar to that of PSO, whose convergence was analyzed by Clerc and Kennedy in 2002 in <cit.>. When γ→∞, the FA may act like a random search, though its behaviour is similar to that of Simulated Annealing (SA) because the FA's solution is perturbed or modified in a similar way to that in the SA in this limiting case. The SA was shown to be convergent under the right cooling conditions <cit.>. The reduction of the roaming randomness, i.e., α_a, α_b, ⋯, α_z, in the FA can be considered as a type of cooling schedule, and thus it can be expected that the FA converges in this case.

Let us now investigate the case when 0 < γ <∞. Given a very large firefly population N, it can be assumed that N is much greater than the number of local optima. The initial locations of the N fireflies should be uniformly distributed over the whole search space. As the iterations of Algorithm <ref> progress, i.e., n increases, these initial N fireflies should converge towards all locally brighter ones, i.e., the local optima including the global ones, in a stochastic manner due to the third terms in (<ref>),(<ref>), and (<ref>). By comparing the brightest fireflies amongst the locally brighter ones, i.e., the best solutions amongst the local optima, the global optima can be attained. Theoretically, these fireflies will reach the global optimum when N→∞ and n≫ 1. However, it has been reported in the related literature that the FA converges within 50 to 100 generations <cit.>.

In Sections <ref>, <ref>, and <ref>, we present how the proposed FA can be adopted to solve optimization problems for transmit beamforming designs.[The original FA has been discretized to solve various discrete or combinatorial optimization problems <cit.>. For example, Osaba et al. <cit.> used a discrete FA to solve rich vehicle routing problems.] Hereafter, "min" and "s. t." are, respectively, used to represent "minimize" and "subject to".

§ TRANSMIT BEAMFORMING

In this section we consider a classic transmit beamforming problem with a well-known iterative method based on uplink-downlink duality. We then introduce our FA solution to the problem.

§.§ Problem Formulation

§.§.§ Problem Formulation

Consider an M_t-antenna BS serving U single-antenna mobile users. Let 𝐡^H_i∈ℂ^1 × M_t, 𝐰_i∈ℂ^M_t × 1 and s_i, respectively, be the channel between the i-th user and the BS, the information-beamforming vector and the data symbol for the i-th user. The overall signal received by the i-th user is y_i=∑_j=1^U𝐡^H_i𝐰_js_j+n_i where n_i is a zero-mean circularly symmetric complex Gaussian noise with variance σ^2, i.e., n_i∼𝒞𝒩(0,σ^2), at the user. Let 𝐑_i= 𝐡_i𝐡^H_i represent the instantaneous channel state information (CSI) or 𝐑_i= 𝔼(𝐡_i𝐡^H_i) denote the statistical CSI, and {𝐰_i}={𝐰_1,𝐰_2,⋯,𝐰_U} be the set of candidate information-beamforming vectors for all users.
Assuming that 𝔼(|s_i|^2)=1, the SINR at the i-th user is SINR_i=𝐰_i^H𝐑_i𝐰_i/∑_j=1,j ≠ i^U𝐰_j^H𝐑_i𝐰_j +σ^2. We design the set of beamforming vectors {𝐰_i} such that the BS's total transmit power is minimized while maintaining the SINR level at each user above the required threshold. To that end, the problem is formulated as follows: min_𝐰_i∑_i=1^U𝐰_t^H𝐰_ts. t. 𝐰^H_i𝐑_i𝐰_i/∑_j =1,j ≠ i^U𝐰^H_j𝐑_i𝐰_j+σ^2_i≥γ_i,∀ i ∈{1,⋯,U}, where γ_i is the required SINR level for the i-th user. Problem (<ref>) is known asnon-convex due to the SINR constraint.§.§.§ Iterative ApproachAn elegant approach to solve (<ref>) was introduced in <cit.> based on uplink-downlink duality where the optimal solution of the downlink problem can be sought via solving the following dual-uplink problem:[This approach was also adopted for transmit beamforing problems in coordinated multi-point (CoMP) transmissions, see e.g., <cit.> and <cit.>.]min_p_i∑^U_i=1p_isubject to 𝐩≽Γ𝐭(𝐩),where 𝐩=[ p_1 p_2 ⋯ p_U ]^T, Γ=diag[γ_1, γ_2, ⋯, γ_U], 𝐭(𝐩)=[ t_1(𝐩) t_2(𝐩)⋯ t_U(𝐩) ]^T,t_i(𝐩)=argmin_𝐰̂_i𝐰̂^H_i𝐐_i(𝐩)𝐰̂_i/𝐰̂^H_i𝐑_i𝐰̂_i,𝐐_i(𝐩) =(∑^U_t=1,t≠ ip_t𝐑_t+σ^2_i𝐈), p_i=λ_iσ^2_i is the dual-uplink power for i-th user, λ_i is the i^th Lagrange multiplier associated with the i^th constraint in (<ref>),and 𝐰̂_i, i.e., 𝐰̂_i^H𝐰̂_i=1, is the dual-uplink beamforming vector for i-th user. Starting from any positive initial value of 𝐩( 0 ), the solution for the dual-uplink problem (<ref>) can be found iteratively as 𝐩(n+1)= Γ𝐭(𝐩( n )).The iterative downlink algorithm to find optimal solutions for (<ref>)is summarised in algorithm <ref>. §.§ Proposed Firefly AlgorithmWe rewrite (<ref>) asmin_𝐖 f( 𝐖)s. t.d_i(𝐖)≤ 0 ,∀ i,where 𝐖=[ 𝐰_1, 𝐰_2, ⋯, 𝐰_U ]∈ℂ^M_t × U, f( 𝐖)=∑_i=1^U𝐰^H_i𝐰_i, d_i(𝐖)= -𝐰_i^H𝐑_i𝐰_i +γ_i∑_j=1,j ≠ i^U𝐰_j^H𝐑_i𝐰_j +γ_iσ^2_i. Using the penalty method, we recast(<ref>) into an unconstrained problem as:min_𝐖 f( 𝐖)+ P(𝐖),where P(𝐖) is the penalty term given as:P(𝐖)=∑_i=1^Uλ_imax{0, d_i(𝐖)}^2,with λ_i>0 is the penalty constant.Let {𝐖_i}={[ 𝐰_1^i, 𝐰_2^i, ⋯, 𝐰_U^i ]} be the i-th firefly. We initialize a population of N fireflies {𝐖_i}, i∈{1,2,⋯, N}, and define the light density of the firefly{𝐖_i} as: I_i(𝐖_i)=1/f( 𝐖_i)+P(𝐖_i). For any two fireflies i and j in the population, if I_j(𝐖_j) > I_i(𝐖_i) then the firefly i will move toward the firefly j as:𝐖_i^(n+1) = 𝐖_i^(n)+β_0 e^-γ(r_ij^(n))^2(𝐖_j^(n)-𝐖_i^(n))+α^(n)𝐕,where r_ij^(n)=|| (𝐖_j^(n)-𝐖_i^(n)|| is the Cartesian distance,β_0 is the attractiveness at r_ij^(n)=0, γ presents the variation of of the attractiveness. The second term of (<ref>)represent the attraction. The third term of (<ref>) is a randomization comprised of a randomization factor α^(n) and a matrix of random numbers𝐕∈ℂ^M_t × U. The random factor α^(n) and the elements of 𝐕are drawn from either a Gaussian or an uniform distribution.It can be seen that problem (<ref>) is a special case of the proposed framework (<ref>) where the objective and constraints are functions of optimization variable 𝐖. Hence, the proposed FA has the same steps as those in Algorithm <ref> exceptsteps 3, 16, 18 and 19 given in Algorithm <ref>. §.§ Complexity AnalysisThe complexity of algorithm <ref> is described in the following lemma.The computational complexity of algorithm <ref> is on the order of T [ U(M_t^3+M_t^2+M_tlog M_t) +U ]. 
The proof is based on the observation that complexities of steps 5, 6 and 8 are, respectively, on the order of M_t^3+M_tlog M_t, M_t^2 and U.The computational complexity of Algorithm <ref> is on the order of:T N^2 [ M_t^2+NUM_t(1+UM_t)]+T N logN+NM_tU+NUM_t(1+UM_t)+NlogN.Due to space limitation, we provide main observations to derive (<ref>) as follows. The dominant terms of the computational complexity of Algorithm <ref> are at steps 2, 3, 4, 16, 19, and 22. The complexity of generating N matrices, each matrix of size M_t× U, in step 2 is on the order of NM_tU. The complexity of evaluating each d_i(𝐖) is on the order of UM_t^2, while the complexity of evaluating ∑_t=1^U 𝐰_t^H𝐰_tis on the order of UM_t.[Here, we adopt the schoolbook iterative algorithm to evaluate complexity of the multiplication of two matrices of sizes n× m and m × p as the order of nmp.] Hence the complexity of calculating the light density for N fireflies, i.e., steps 3 and 19, is on the order ofN(UM_t+U^2M_t^2)=NUM_t(1+UM_t). The complexity of ranking N firefly in steps 4 and 22 is NlogN. Finally, the complexity of moving a firefly in step 16 is on the order of M_t^2. Assuming a worst case when step 16 is executed in every inner loop of the algorithm, after some manipulations, one can arrive at (<ref>).§ COGNITIVE BEAMFORMING §.§ Problem Formulation§.§.§ Problem FormulationConsider a cognitive wireless communication system consisting of an M_t-antenna cognitive base station (BS), U active single-antenna secondary users (SUs) and K single-antenna primary users (PUs). The cognitive BS is allowed to communicate with its SUs in the same frequency band owned by the primary system if its interference imposed on each PU is less than a predefined tolerable threshold of I_to,k. The received signal at thet-th SU, t∈{1, ⋯, U}, is:y_t=𝐡^H_s,t𝐰_ts_t+∑_j=1,j ≠ t^U𝐡^H_s,t𝐰_js_j+n_t,where 𝐡^H_s,t∈ℂ^1 × M_t is the channel coefficient of the wireless link between the t-th SU and the cognitive BS; 𝐰_t∈ℂ^M_t × 1and s_t∼𝒞𝒩(0,1) are, respectively, the beamforming vector and the data symbol associated to the t-th SU; and n_t∼𝒞𝒩(0,σ^2_t) is a zero mean circularly symmetric complex Gaussian noise with variance σ^2_t, at the t-th SU. Let 𝐑_s,t=𝔼( 𝐡_s,t𝐡^H_s,t) for the statistical CSI and 𝐑_s,t=𝐡_s,t𝐡^H_s,t for the instantaneous CSI. The SINR at the t-th SU can be expressed as:SINR_t=𝐰_t^H 𝐑_s,t𝐰_t/∑_j=1,j ≠ t^U𝐰_j^H𝐑_s,t𝐰_j+σ^2_t. Let 𝐡^H_p,k∈ℂ^1 × M_t be thechannel coefficient of the wireless link between the k-th PU, k∈{1, ⋯, K }, and the cognitive BS, 𝐑_p,k=𝔼( 𝐡_p,k𝐡^H_p,k) for the statistical CSI and 𝐑_p,k=𝐡_p,k𝐡^H_p,k for the instantaneous CSI. The total interference power imposed on the k-th PU by the cognitive BS is ∑_j=1^U𝐰_j^H𝐑_p,k𝐰_j.Our objective is to design downlink beamforming vectors for the SUs that minimize the cognitive BS transmit power while maintaining the required SINR level for every SU and keeping the interference level imposed at each PU receiver below the predefined tolerable threshold. The optimization problem to design beamforming vectors is cast as:min_𝐰_t∑_t=1^U𝐰_t^H𝐰_ts. t. 𝐰^H_t𝐑_s,t𝐰_t/∑_j =1,j ≠ t^U𝐰^H_j𝐑_s,t𝐰_j+σ^2_t≥η_t,∀ t ∈{1,⋯,U}, ∑_j =1^U 𝐰^H_j𝐑_p,k𝐰_j≤I_to,k,∀ k ∈{1,⋯,K}, where η_t is the required SINR level for the t-th SU. Due to the SINR constraint, problem (<ref>) is non-convex.§.§.§ SDP ApproachFor the sake of completeness, we provide a review on a traditional approach to solve (<ref>) using semidefinite programming (SDP). 
We first form a new optimization variable 𝐅_t=𝐰_t𝐰_t^H where 𝐅_t≽0, 𝐅_t∈ℍ^M_t× M_t, and 𝐅_t is a rank-one matrix.[A matrix is rank-one if and only if it has only one linearly independent column/row.] We then utilize the identity 𝐱^H𝐗𝐱=Tr(𝐗𝐱𝐱^H) to rewrite (<ref>) as:min_𝐅_t∈ℍ^M× M∑_t=1^UTr(𝐅_t)s. t. (1+1/η_t) Tr(𝐑_s,t𝐅_t)-∑_j=1^UTr(𝐑_s,t𝐅_j)-σ_t^2≥0, ∀ t,I_to,k-∑_j=1^U Tr(𝐑_p,k𝐅_j) ≥ 0,∀ k,𝐅_t≽0, ∀ t,where t ∈{1,⋯,U}, k ∈{1,⋯,K}. Problem (<ref>) is in a standard SDP form. Hence,its optimal solution can be obtained in a polynomial time by using a general purpose IPM, e.g., CVX which is a Matlab based modeling system for constructing and solving disciplined convex programs <cit.>. In arriving at (<ref>), we have relaxed the rank-one constraint on 𝐅_t,∀ t. If the solution of (<ref>) does not have rank-one, then further computation resources are required to derive a sub-optimal solution via some rank-one approximations or the Gaussian randomize procedure <cit.>. §.§ Proposed Firefly AlgorithmHere, we adopt the generalized FA in Algorithm <ref> to solve (<ref>). Rearranging the constraint, we rewrite (<ref>) as: min_𝐖 f( 𝐖)s. t. ϕ_t(𝐖)≤ 0 ,∀ t ∈{1,⋯,U}, φ_k(𝐖) ≤0,∀ k ∈{1,⋯,K},where 𝐖=[ 𝐰_1, 𝐰_2, ⋯, 𝐰_U ]∈ℂ^M_t × U, f( 𝐖)=∑_t=1^U𝐰_t^H𝐰_t, ϕ_t(𝐖)=η_t∑_j =1,j ≠ i^U𝐰^H_j𝐑_s,t𝐰_j+η_tσ^2_t -𝐰^H_t𝐑_s,t𝐰_t and φ_k(𝐖)=∑_j =1^U 𝐰^H_j𝐑_p,k𝐰_j-I_to,k. Using the penalty method, we first transform(<ref>) into an unconstrained problem as:min_𝐖 f( 𝐖)+ P(𝐖),where P(𝐖) is the penalty term given as:P(𝐖)=∑_t=1^Uλ_tmax{0, ϕ_t(𝐖) }^2+∑_k=1^Kρ_k max{0, φ_k(𝐖)}^2,with λ_t>0 and ρ_k>0 are penalty constants. Let 𝐖_i=[ 𝐰_1^i, 𝐰_2^i, ⋯, 𝐰_U^i ]∈ℂ^M_t × U be the firefly i. We initialize a population of N fireflies 𝐖_i, i∈{1,2,⋯, N}, and define the light density of the firefly𝐖_i as:I_i(𝐖_i)=1/f( 𝐖_i)+P(𝐖_i). For any two fireflies i and j in the population, if I_j(𝐖_j) > I_i(𝐖_i) then the firefly i will move toward the firefly j as:𝐖_i^(n+1)=𝐖_i^(n)+β_0 e^-γ(r_ij^(n))^2(𝐖_j^(n)-𝐖_i^(n))+α^(n)𝐕,where r_ij^(n)=|| (𝐖_j^(n)-𝐖_i^(n)|| is the Cartesian distance,β_0 is the attractiveness at r_ij^(n)=0, γ presents the variation of of the attractiveness. The second term of (<ref>) captures the attraction. The third term of (<ref>) is a randomization comprised of a randomization factor α^(n) and a matrix of random numbers𝐕∈ℂ^M_t × U. The random factor α^(n) and the elements of 𝐕 are drawn from either a Gaussian or an uniform distribution. It can be seen that problem (<ref>) is a special case of the proposed framework (<ref>) where the objective and constraints are functions of only one optimization variable 𝐖. Hence, the proposed FA has the same steps as those in Algorithm <ref> exceptsteps 3, 16, 18 and 19 given in Algorithm <ref>.§.§ Complexity AnalysisWe investigate the complexity of solving (<ref>) in a worst-case runtime of the IPMfollowed by the complexity analysis of the proposed FA. We start by the following definition.At a given ε>0, the set of {𝐅_t^ε} is an ε-solution to problem (<ref>), i.e., an acceptable solution with the accuracy of ε, if∑_t=1^UTr(𝐅_t^ε)≤min_𝐅_t∈ℍ^M× M∑_t=1^UTr(𝐅_t) +ε. The number of decision variables of (<ref>) is M_t^2. The complexity of (<ref>) is described in the following lemma.The computational complexity to attain ε-solution to (<ref>) is on the order of:ln(ε^-1)√(U(M_t+1)+K)[ (M_t^2+1)(U+K) +UM_t^2(M_t^2+M_t)+M_t^4 ] M_t^2.We sketch some main steps to arrive at the lemma due to space limitation. 
It can be observed that (<ref>) has (U+K) linear-matrix-inequality (LMI) constraints of size 1 and U LMI constraints of size M_t. One can follow the same steps as in <cit.> to derive the following facts: (i) the iteration complexity is on the order of ln(ε^-1)√(U(M_t+1)+K), and (ii)the per-iteration complexity is on the order of [ (M_t^2+1)(U+K)+UM_t^2(M_t^2+M_t)+M_t^4]M_t^2. The computational complexity of Algorithm <ref> is on the order of:T N^2 [ M_t^2+NUM_t(1+UM_t+KM_t)]+T N logN+NM_tU+NUM_t(1+UM_t+KM_t)+NlogN.Due to space limitation, we provide main observations to derive (<ref>) as follows. The dominant terms of the computational complexity of Algorithm <ref> are at steps 2, 3, 4, 16, 19, and 22. The complexity of generating N matrices, each matrix of size M_t× U, in step 2 is on the order of NM_tU. The complexity of evaluating each ϕ_t(𝐖) or φ_k(𝐖) is on the order of UM_t^2, while the complexity of evaluating ∑_t=1^U 𝐰_t^H𝐰_tis on the order of UM_t. Hence the complexity of calculating the light density for N fireflies, i.e., steps 3 and 19, is on the order ofN(UM_t+U^2M_t^2+KUM_t^2)=NUM_t(1+UM_t+KM_t). The complexity of ranking N firefly in steps 4 and 22 is NlogN. Finally, the complexity of moving a firefly in step 16 is on the order of M_t^2. Assuming a worst case when step 16 is executed in every inner loop of the algorithm, after some manipulations, one can arrive at (<ref>). § RECONFIGURABLE INTELLIGENT SURFACE-AIDED BEAMFORMING §.§ Problem Formulation§.§.§ Problem FormulationConsider a communication system comprising of an M_t-antenna BS communicatingwith U single-antenna mobile users in which the direct communication links between the BS and its mobile users are blocked, e.g., because of high building etc., <cit.>. To circumvent the problem, an N_t-reflective-element RIS is utilized to support the communication. Let 𝐇 = [𝐡_1, …, 𝐡_N_t] ∈ℂ^M_t × N_t represent the channel coefficients between the BS and the RIS and 𝐠_i =[g_i1, …, g_iN_t]^T ∈ℂ^N_t × 1 be the channel coefficients between the RIS and the i-th user.Let x_i, i.e., 𝔼[|x_i|^2]=1, and 𝐰_i ∈ℂ^M_t× 1, respectively, represent the data symbol and the active beamforming vector for the i-th user. Each reflective elementof the RIS generates a phase shift to support the communication between the BS and the mobile users. Let θ_k be the phase shift at the k-th reflective element and let θ =[θ_1,θ_2,⋯,θ_N_t]^T denote the phase-shift coefficients generated by the RIS with |θ_k| ≤ 1 and arg(θ_k)∈ [-π,π), ∀ k = 1, …, N_t. Vector θ is the passive beamforming vector for the RIS. The signal arrived at the i-th user is:y_i = 𝐠_i^H diag(θ)^H 𝐇^H 𝐰_i x_i +𝐠_i^H diag(θ)^H 𝐇^H ∑_j=1,j ≠ i^U𝐰_j x_j+ n_i,= θ^H 𝐆_i^H 𝐰_i x_i +θ^H 𝐆_i^H ∑_j=1,j≠ i^U𝐰_j x_j + n_i,where 𝐆_i^H=diag(𝐠_i^∗)𝐇^H ∈ℂ^N_t × M_t and n_i∼𝒞𝒩(0,σ^2) represents the additive noise measured at the i-th user. Furthermore, let {𝐰_i}={𝐰_1, 𝐰_2,⋯, 𝐰_U} denote the set of active beamforming vectors, andSINR_i( {𝐰_i},θ) be the SINR at the i-th user. One can write:SINR_i( {𝐰_i},θ) =|θ^H 𝐆_i^H 𝐰_i|^2/∑_j=1,j≠ i^U|θ^H 𝐆_i^H𝐰_j|^2+σ_i^2.The optimization is posed as follows:{𝐰_i}, θmin∑_i=1^U 𝐰_i^H𝐰_i SINR_i( {𝐰_i},θ) ≥η_i, ∀ i,|θ_k| ≤ 1, ∀ k,where η_i is the required SINR level measured at the i-th user. Since the SINR constraint is a function of two optimization variables 𝐰_i and θ, problem (<ref>) is non-convex. 
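Before reviewing the baseline solutions, we give a small numerical helper that clarifies how the SINR in the constraint above is evaluated for a candidate pair ({𝐰_i},θ). The sketch below is only an illustration of the formula, not part of the proposed algorithms; the variable names and data layout are our own assumptions, with G[i] storing the effective channel 𝐆_i^H=diag(𝐠_i^∗)𝐇^H of size N_t × M_t.

import numpy as np

def ris_sinr(W, theta, G, sigma2):
    """Per-user SINR for active beams W (M_t x U) and passive beam theta (N_t,).

    G is a list of U arrays, each N_t x M_t, holding G_i^H = diag(g_i^*) H^H."""
    U = W.shape[1]
    sinr = np.zeros(U)
    for i in range(U):
        gains = np.abs(theta.conj() @ G[i] @ W) ** 2   # |theta^H G_i^H w_j|^2 for all j
        sinr[i] = gains[i] / (gains.sum() - gains[i] + sigma2[i])
    return sinr

The same routine can be used to check the SINR constraints against the targets η_i; the violation of these constraints is what the penalty term of the FA formulation later in this section discourages.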
§.§.§ Alternative Optimization ApproachFor the sake of completeness, the widely-adopted AO approach <cit.> is represented here as a baseline to solve (<ref>).Let 𝐅_i=𝐰_i𝐰_i^H, and Θ=θθ^H, i.e., rank(𝐅_i) = 1 and rank(Θ) = 1.As𝐅_i and Θ are two independent variables, they can be alternatively solved <cit.>. To that end, relaxing the rank-one constraint on 𝐅_i and beginning with any initial value of the reflecting coefficient matrix Θ^(0), the following sub-problem will be solved at the p-th iteration:{𝐅_i}minTr(∑_i=1^U 𝐅_i) Tr𝐆_iΘ^(p-1)𝐆_i^H𝐅_i/η_iσ_i^2-∑_j=1,j≠ i^UTr𝐆_iΘ^(p-1)𝐆_i^H𝐅_j/σ_i^2-1≥0, ∀ i,𝐅_i≽0,∀ i∈{1,⋯,U}. The reflecting coefficients Θ^(p) is then updated from the optimal solution of (<ref>) at p-th iteration, i.e., {𝐅_i^(p)}, by solving the following sub-problem <cit.>:ΘminTr( Θ) TrΘ𝐆_i^H𝐅_i^(p)𝐆_i/η_iσ_i^2-∑_j=1,j≠ i^UTrΘ𝐆_i^H𝐅_j^(p)𝐆_i/σ_i^2-1≥0,∀ i,diag(diag(Θ)) ≼𝐈_N_t, Θ≽0. The AO approach repetitively solvestwo SDPs (<ref>) and (<ref>) in n_0 iterations to obtain the solution for (<ref>). It is worth noticing that the AO approach approximates the originally non-convex optimization (<ref>) by two sub-problems (<ref>)and (<ref>). Although (<ref>)and (<ref>) are convex, the solutions to these sub-problems can be regarded as the upper bounds of the original problem (<ref>) as these solutions may not be the global solution. Furthermore, the AO approach adopts the so-called semidefinite relaxation technique <cit.> in which the rank-one constraints on 𝐅_i and Θ are relaxed. If solving (<ref>)and/or (<ref>) does not return rank-one matrices 𝐅_i and/or Θ, then a rank-one approximation or a Gaussian randomize procedure <cit.> is required to extract approximated rank-one solutions. Extracting the approximated solutions requires further computational resources yet only results in sub-optimal solutions. Motivated by the above observations, we introduce a novel FA approach to simultaneously solve 𝐰_i and θ for the original problem (<ref>) in the following section.§.§ Proposed Firefly AlgorithmThe optimization (<ref>) can be expressed as{𝐖,θ}minf( 𝐖)ϕ_i( {𝐖,θ}) ≤ 0, ∀ i, φ_k(θ_k) ≤ 0, ∀ k,where 𝐖=[ 𝐰_1, 𝐰_2, ⋯, 𝐰_U ]∈ℂ^M_t × U, f( 𝐖)=∑_i=1^U𝐰_i^H𝐰_i,ϕ_i( 𝐖,θ)=η_i∑_j=1^U𝐰_j^H 𝐆_iθθ^H 𝐆_i^H𝐰_j/σ^2_i+η_i -( 1+η_i)𝐰_i^H 𝐆_iθθ^H 𝐆_i^H𝐰_i/σ^2_i,and φ_k(θ_k)=|θ_k|-1. Adopting the penalty method, (<ref>) can be written as:{𝐖,θ}min f( 𝐖)+ P(𝐖,θ),where P(𝐖,θ) is the penalty term given as:P(𝐖,θ)=∑_i=1^Uλ_imax{0, ϕ_i({𝐖,θ}) }^2+∑_k=1^N_tρ_k max{0, φ_k(θ_k)}^2,with λ_i>0 and ρ_k>0 are penalty constants. Let {𝐖_t,θ_t}={[ 𝐰_1^t, 𝐰_2^t, ⋯, 𝐰_U^t ], θ_t} be the firefly t. We initialize a population of N fireflies {𝐖_t,θ_t}, t∈{1,2,⋯, N} and define the light density, i.e., the brightness, of the firefly t{𝐖_t,θ_t} as:I_t(𝐖_t,θ_t)=1/f( 𝐖_t)+P(𝐖_t,θ_t). For any fireflies t and l amongst the population, if I_t(𝐖_t,θ_t) > I_l(𝐖_l,θ_l) then the firefly l will move toward the firefly t as:𝐖_l^(n+1) = 𝐖_l^(n)+β_0 e^-γ(r_w,tl^(n))^2(𝐖_t^(n)-𝐖_l^(n))+α^(n)𝐕, θ_l^(n+1) = θ_l^(n)+β_0 e^-γ(r_θ,tl^(n))^2(θ_t^(n)-θ_l^(n))+α^(n)𝐯,where r_w,tl^(n)=|| (𝐖_t^(n)-𝐖_l^(n)|| and r_θ,tl^(n)=|| (θ_t^(n)-θ_l^(n)|| are the Cartesian distances,β_0 is the attractiveness at r_w,tl^(n)=0 and r_θ,tl^(n)=0, γ presents the variation of of the attractiveness. The second terms of (<ref>) and (<ref>) capture the attractions while the third terms of (<ref>) and (<ref>) arerandomization comprised of randomization factor α^(n), 𝐕∈ℂ^M_t × U and 𝐯∈ℂ^M_t × 1. 
The factor α^(n), the elements of 𝐕 and 𝐯 are drawn from either an uniform or a Gaussian distribution. It can be observed that problem (<ref>) is a special case of the proposed framework (<ref>) where the objective and constraints are functions of optimization variables 𝐖 and θ. The proposed FA for RIS has the same steps as those in Algorithm <ref> except steps 3, 16, 18 and 19 given in Algorithm <ref>.§.§ Complexity AnalysisHere, we analyze the computational complexities of the AO and the proposed FAfor RIS-aided beamforming problem.The complexity of the AO approach is on the order of:n_o( τ_1+ τ_2),where τ_1 = ln(ε^-1)√(U(M_t+1))[ (M_t^2+1)U +UM_t^2(M_t^2+M_t) +M_t^4]M_t^2,τ_2 = ln(ε^-1)√(U+2N_t)[ (N_t^2+1)(U+2N_t^2)+N_t^4]N_t^2.We first give some hints to derive the computational complexity of obtaining optimal solution to problems (<ref>) and (<ref>). With the observation that (<ref>) has U LMI constraints of size 1 and U LMI constraints of size M_t, one can follow the same steps as in <cit.> to derive the complexity of solving (<ref>) as τ_1 given in (<ref>).At a given ε>0, Θ^εis called an ε-solution to problem (<ref>) if Tr( Θ^ε)≤ΘminTr( Θ) +ε. The number of decision variables of (<ref>) is N_t^2. Observing that (<ref>) has U linear-matrix-inequality (LMI) constraints of size 1 and 2 LMI constraints of size N_t, one can derive the computational complexity to attain ε-solution to (<ref>) as the order of τ_2 given in (<ref>).Since the AO approach iteratively solves (<ref>) and (<ref>) in n_o iterations, the complexity of AO approach is on the order of n_o( τ_1+ τ_2). The computational complexity of Algorithm <ref> is on the order ofT N^2 [ M_t^2+N_t+N(UM_t+U(N_t^2+M_tN_t)+N_t)] +T N logN+NM_tU+N_t N +NlogN +N(UM_t+U(N_t^2+M_tN_t)+N_t).The proof is based on the following observations. The dominant terms of the computational complexity of Algorithm <ref> are at steps 2, 3, 4, 16, 19, and 22. The complexity of generating N fireflies in step 2 is on the order of NM_tU+N_t N. The complexities of evaluatingϕ_i(𝐖,θ), φ_k(θ_k), and ∑_i=1^U 𝐰_i^H𝐰_i are, respectively, on the order of U(N_t^2+M_tN_t), N_t, and UM_t. Hence, the complexity of calculating the light density for N fireflies, i.e., steps 3 and 19, is on the order ofN(UM_t+U(N_t^2+M_tN_t)+N_t). The complexity of ranking N firefly in steps 4 and 22 is NlogN. Finally, the complexity of moving a firefly in step 16 is on the order of M_t^2+N_t. Assuming a worst case when step 16 is executed in every inner loop of the algorithm, after some manipulations, one can arrive at (<ref>).§ RIS-AIDED WIRELESS POWER TRANSFER §.§ Problem Formulation§.§.§ Problem FormulationConsider a similar communication system in <ref>, however, the users are energy harvesting receivers (EHRs) instead of information decoding receivers. Using the same notations as in <ref>, the power arrived at the i-th user is:E_i = | 𝐠_i^H diag(θ)^H 𝐇^H ∑_j=1^U𝐰_j|^2=∑_j=1^U𝐰_j^H𝐆_i θ θ^H 𝐆_i^H 𝐰_j,where 𝐰_j is the active energy beamforming vector for the j-th user. we interested in maximizing a total weighted sum power received at the EHRs obtained via the following optimization problem:{𝐰_i}, θmax∑_i=1^U ∑_j=1^Uα_i𝐰_j^H𝐆_i θ θ^H 𝐆_i^H 𝐰_j ∑_j=1^U𝐰_j^H 𝐰_j≤ P,|θ_k| = 1, ∀ k,where P is the maximum transmit power of the BS and α_i≥ 0 is the weighting factor for the i-th EHR.§.§.§ Successive Convex ApproximationAccording to <cit.>, for any fix θ, only one common energy beam is sufficient. 
Using a successive convex approximation (SCA) technique, <cit.> proposed an iterative algorithm to find optimal active and passive beamforming vectors for problem (<ref>) as follows. Starting with an initialized value θ^(0), the optimal active beamforming vector at the l-th iteration is calculated as 𝐰^(l)=√(P)eig_max( ∑_i=1^U α_i𝐆_i θ^(l-1)θ^(l-1)H𝐆_i^H ), where eig_max( 𝐗) is the unit-norm eigenvector associated with the maximum eigenvalue of matrix 𝐗. The k-th coefficient of the RIS's phase-shift vector at the l-th iteration is calculated as [ θ^(l)]_k=1 if μ_k=0 and [ θ^(l) ]_k =μ_k/|μ_k| if μ_k ≠ 0, where μ_k = [ ∑_i=1^Uα_i 𝐆_i^H 𝐰^(l)𝐰^(l)H𝐆_i θ^(l-1)]_k. §.§ Proposed Firefly Algorithm The optimization (<ref>) can be expressed as {𝐖,θ}min -f( 𝐖,θ) ϕ( {𝐖,θ}) ≤ 0, φ_k(θ_k) = 0, ∀ k, where 𝐖=[ 𝐰_1, 𝐰_2, ⋯, 𝐰_U ]∈ℂ^M_t × U, f( 𝐖,θ)=∑_i=1^U ∑_j=1^Uα_i𝐰_j^H𝐆_i θθ^H 𝐆_i^H 𝐰_j, ϕ( 𝐖,θ)=∑_j=1^U𝐰_j^H 𝐰_j-P, and φ_k(θ_k)=|θ_k|-1. Adopting the penalty method, (<ref>) can be written as: {𝐖,θ}min -f( 𝐖,θ)+ P(𝐖,θ), where P(𝐖,θ)=λmax{0, ϕ({𝐖,θ}) }^2+∑_k=1^N_tρ_k {φ_k(θ_k)}^2, with λ>0 and ρ_k>0 being penalty constants. Let {𝐖_t,θ_t}={[ 𝐰_1^t, 𝐰_2^t, ⋯, 𝐰_U^t ], θ_t} be the firefly t. We initialize a population of N fireflies {𝐖_t,θ_t}, t∈{1,2,⋯, N}, and define the light density, i.e., the brightness, of the firefly t, {𝐖_t,θ_t}, as: I_t(𝐖_t,θ_t)=1/[-f( 𝐖_t,θ_t)+P(𝐖_t,θ_t)]. It can be observed that problem (<ref>) is a special case of the proposed framework (<ref>) where the objective and constraints are functions of the optimization variables 𝐖 and θ. Utilizing the firefly movements defined in (<ref>) and (<ref>) in Section <ref>, the proposed FA for RIS has the same steps as those in Algorithm <ref> except steps 3, 16, 18 and 19 given in Algorithm <ref>. §.§ Complexity Analysis Here, we analyze the complexities of the SCA approach and the proposed FA for the RIS-aided WPT beamforming. We start by introducing the following lemma. The complexity of the SCA approach is on the order of: m_0( UM_t(M_t+N_t )+M_t^3+M_tlog M_t +N_t^3+N_t^2M_t), where m_0 is the number of iterations of the SCA approach. At each iteration, the complexity of evaluating α_i𝐆_i θ^(l-1)θ^(l-1)H𝐆_i^H is on the order of U(M_t^2+M_tN_t ). The complexity of finding the maximum eigenvalue of the M_t × M_t matrix α_i𝐆_i θ^(l-1)θ^(l-1)H𝐆_i^H based on the SVD method is on the order of M_t^3+M_tlog M_t. Hence, the complexity of finding 𝐰^(l) is on the order of UM_t(M_t+N_t)+M_t^3+M_tlog M_t. Furthermore, the complexity of calculating μ_k is on the order of N_t^2+M_tN_t. Therefore, the complexity of finding θ^(l) is on the order of N_t( N_t^2+M_tN_t). Consequently, m_0 iterations of evaluating 𝐰^(l) and θ^(l) lead to (<ref>). The complexity of the Algorithm <ref> is on the order of: T N^2 [ M_t^2+N_t+N(UM_t+U(N_t^2+M_tN_t)+N_t)] +T N logN+NM_tU+N_t N +NlogN +N(UM_t+U(N_t^2+M_tN_t)+N_t). Noticing that the complexities of evaluating ϕ(𝐖,θ), φ_k(θ_k), and f(𝐖,θ) are, respectively, on the order of UM_t, N_t, and U(N_tM_t+N_t^2), one can easily show that the complexity of the Algorithm <ref> is the same as that of the Algorithm <ref>. § NUMERICAL RESULTS In this section, we perform simulations to evaluate the performances of the proposed FA approaches, i.e., the FA approaches for transmit beamforming, cognitive beamforming, RIS-aided transmit beamforming, and RIS-aided WPT, and compare them with their iterative, SDP, and SCA counterparts. The CVX package <cit.> is utilized to obtain the solution for the cognitive SDP approach, i.e., problem (<ref>), and the AO approach for the RIS-aided transmit beamforming.
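For readers who prefer a script-level view of these convex sub-problems, a minimal cvxpy analogue of the fixed-Θ sub-problem of the AO approach is sketched below. It is not the implementation used in the paper: the channel data are synthetic, the sizes are illustrative, and the rank-one extraction or Gaussian randomization step described earlier is omitted.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
Mt, Nt, U = 4, 16, 2                       # illustrative sizes, not the paper's setup
eta, sigma2 = np.ones(U), 1e-3
H = (rng.standard_normal((Mt, Nt)) + 1j*rng.standard_normal((Mt, Nt)))/np.sqrt(2)
g = [(rng.standard_normal(Nt) + 1j*rng.standard_normal(Nt))/np.sqrt(2) for _ in range(U)]
theta = np.exp(1j*rng.uniform(-np.pi, np.pi, Nt))            # current iterate, Theta^(p-1) = theta theta^H
Theta = np.outer(theta, theta.conj())

# A_i = G_i Theta G_i^H with G_i^H = diag(g_i^*) H^H, i.e., G_i = H diag(g_i)
A = []
for gi in g:
    Gi = H @ np.diag(gi)                                      # M_t x N_t
    A.append(Gi @ Theta @ Gi.conj().T)

F = [cp.Variable((Mt, Mt), hermitian=True) for _ in range(U)]
cons = [f >> 0 for f in F]
for i in range(U):
    sig = cp.real(cp.trace(A[i] @ F[i])) / (eta[i]*sigma2)
    intf = sum(cp.real(cp.trace(A[i] @ F[j])) for j in range(U) if j != i) / sigma2
    cons.append(sig - intf - 1 >= 0)
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(sum(F)))), cons)
prob.solve()
print(prob.status, prob.value)
```

If the returned 𝐅_i are not rank one, the rank-one approximation or Gaussian randomization procedure mentioned above would still be required to recover beamforming vectors.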
In the AO approach, two SDPs (<ref>) and (<ref>) are alternatively solved in n_0=10 iterations. The setup parameters for FAs are as follows. The variation of the attractiveness γ is set at 1. The penalty constants are set equal but they dynamically vary as λ_i=ρ_k= n^2,∀ i, k where n is the generation index in Algorithm <ref>. The attractiveness at zero distance is β_0=1. Finally, the initial randomization factor is α^(0)=0.9and its value at the n-th generation is α^(n)=α^(0)0.9^n.§.§ Evaluation on Transmit BeamformingWe simulate a scenario of two users, i.e., U=2, randomly distributed within 2 km from their BS. The array antenna gain at the BS is 15dBi.The noise power spectral density, noise figure at each user and the subcarrier bandwidth are, respectively, -174 dBm/Hz, 5 dB and 15 kHz wide. The path loss model is 35+34.5log10(l), where l is in kilometers. A log-normal shadowing with a standard deviation of 8 dB is assumed. Furthermore, a complex Gaussian distribution is setwith the variance of 1/2 on each of its real and imaginary components for the downlink channel fading coefficients. Monte Carlo simulations have been carried out over 1000 channel realizations.Fig. <ref> illustrates the total transmit power of the proposed FA approach and its iterative counterpart versus the required SINR level with different numbers of BS's antennas. The results on Fig. <ref> clearly show that the proposed FA approach outperforms the iterative method in obtaining lower required transmit power, i.e., around 3 to 4 dB lower, for all simulated setups. The results in Fig. <ref> confirm the ability of the proposed FA in handling highly nonlinear and multimodal optimization problems. This power saving gain, however, comes at the price of a higher complexity. Using the parameter setup for Fig. <ref> in Lemmas<ref> and <ref>, i.e., U=2, T=N=30, M_t=4, 6,8, one can find the complexities of the Iterative and FA approaches are, respectively, in the order of 𝒪(10^4) and 𝒪(10^8). When the number of antennas elements are large, letting T=N=M_t, it can be shown that the dominant terms of the complexities of the Iterative and FA approach are in the order of 𝒪(M_t^4) and 𝒪(M_t^6), respectively. The trade off between the power saving gain and computational complexity of the proposed FA approach in comparison with the Iterative method should be considered by the network designer/operator. Fig. <ref> shows the total BS's transmit power of the Iterative and proposed FA versus the number of iteration/generations with different numbers of BS's antennas. The results indicate that the Iterative approach converges after just 5 iterations/generations while the proposed FA requires about 20 generations/iterations to level off.Fig. <ref> shows the total BS's transmit power of the proposed FA approach versus the number of population N with different BS's antenna elements. It can be seen that the observed curves converge after N=30. Our simulations indicate that the proposed FA approach performs well with at least 30 fireflies to solve (<ref>) under the investigated SINR range.§.§ Evaluations on Cognitive Transmit BeamformingWe first reproduce the result of the experiment described in Example 1 of <cit.> to compare the proposed FA approach with the SDP approach. In that experiment, three SUs are located at -5^∘, 10^∘, 25^∘, and two PUs are located at 30^∘ and 50^∘, relative to the BS's array broadside. The tolerable interference level two PUs are I_to,1=0.001 and I_to,2=0.0001. 
The noise variance is set to 0.1 while the required SINR values are set to 1 for the SUs. The channel covariance matrices from the secondary BS to SU t , i.e., 𝐑_s,t=𝐑( ζ_s,t,δ_a), and to PU k, i.e., 𝐑_p,k=𝐑( ζ_p,k,δ_a), are the function of the angle of departure, i.e., ζ_s,t or ζ_p,k, and the standard deviation of the angular spread, i.e., δ_a. The (m,n)th entry of 𝐑( ζ,δ_a) is, <cit.>:e^j2πΔ/ψ[(n-m)sinζ] e^-2[πΔδ_a/ψ{(n-m)cosζ}]^2,where ψ is the carrier wavelength, σ_a=2^∘, and the antenna spacing at the BS is set as Δ=ψ/2.Fig. <ref> (a) illustrates the radiation patterns at the BS of the SDP approach as described in (<ref>), which is the reproduction of Fig. 3 in <cit.>, while Fig. <ref> (b) shows the radiation patterns at the BS of the FA approach proposed in Algorithm <ref>. The results clearly indicate that the FA obtains the same radiation pattern as the SDP approach does. Both approaches are able to form nulls to the locations/angles where the PUs are located. In other words, the proposed FA can obtain the same optimal solution as the IPM does for the SDP counterpart. This confirms the ability of the proposed FA in handling highly nonlinear and multimodal optimization problems.With the setup in Fig. <ref>, i.e., M_t=8, U=3, K=2, N=100 and, T=80, one can easily verify from Lemmas <ref> and <ref> that the proposed FA approach requires higher computational complexity than the SDP approach does when it returns rank-one optimal solution. When the number of antennas is large, one can show that the dominant term of (<ref>) is M_t^61/2. On the other hand, assuming T=N=M_t, the dominant term of (<ref>) is M_t^6. Hence, the complexity of an IPM to solve (<ref>) is slightly higher than the complexity of the proposed FA in Algorithm <ref>, i.e., 𝒪( M_t^61/2) in comparison with 𝒪(M_t^6). Fig. <ref> shows the transmit power of the proposed FA approach versus the number of population with different numbers of transmit antennas. The results indicate that the proposed FA converges with all number of antenna setups as all the observed curves level off after the maximum size of population of N=50. However, the higher of the antenna elements is, the larger the size of the population is required for a converged transmit power. For example, with M=8, 16, and 32, the proposed FA approach, respectively, obtains a stable transmit power at N=30, 40 and 50. This is due to the fact that the size of the system increases with a higher number of antenna elements, i.e., a higher degree of freedom. As a result, it requires a larger size of the population to provide a sufficient diversification for the exploration of the FA. The results also show that the required transmit power decreases when the number of antennas increase as the result of having higher degree of freedom.Fig. <ref> depicts the transmit power of the proposed FA approach versus the number of maximum generations with different numbers of transmit antennas.A similar trend as in Fig. <ref> is also observed in this figure. The transmit power attained by the proposed FA approach converges with all numbers of antenna setups. The higher number of antennas is, the higher number of generations is needed as a result of higher exploitation required for the increase of the problem dimension. For instance, the transmit power levels off at around 90, 100, and 120 generations, respectively, for M=8, 16, and 32. 
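The spatial covariance model used in this experiment can be reproduced with a few lines of code. The sketch below builds 𝐑(ζ, δ_a) entry by entry from the expression above, with the antenna spacing set to half a wavelength and the 2-degree angular spread converted to radians; treating the spread in radians is an assumption made for illustration.

```python
import numpy as np

def cov_matrix(Mt, zeta_deg, delta_a_deg, spacing_over_lambda=0.5):
    """(m, n)-th entry: exp(j*2*pi*(Delta/psi)*(n-m)*sin(zeta)) *
    exp(-2*(pi*(Delta/psi)*delta_a*(n-m)*cos(zeta))**2), for angle of departure zeta
    and angular-spread standard deviation delta_a."""
    zeta = np.deg2rad(zeta_deg)
    da = np.deg2rad(delta_a_deg)
    n_minus_m = np.arange(Mt)[None, :] - np.arange(Mt)[:, None]
    phase = np.exp(1j * 2*np.pi * spacing_over_lambda * n_minus_m * np.sin(zeta))
    spread = np.exp(-2 * (np.pi * spacing_over_lambda * da * n_minus_m * np.cos(zeta))**2)
    return phase * spread

# Covariances for the three SUs and two PUs of the experiment reproduced above (M_t = 8)
R_su = [cov_matrix(8, ang, 2.0) for ang in (-5, 10, 25)]
R_pu = [cov_matrix(8, ang, 2.0) for ang in (30, 50)]
print(R_su[0].shape, np.allclose(R_su[0], R_su[0].conj().T))   # Hermitian check
```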
§.§ Evaluations on RIS-aided Transmit Beamforming We simulate a RIS-aided communication system which consists of one BS, one RIS, and two users, i.e., U=2. The distance between the BS and the RIS is 10 m. Users are randomly distributed with a distance of 6 m from the RIS. The pathloss exponents of both wireless links from the BS to the RIS and from the RIS to users are set to be 2.2 with the signal attenuation at the reference distance of 1 m being 30 dB <cit.>, i.e., the large-scale fading coefficient is modeled as -30 -22log_10(d) dB where d is the distance between the BS to RIS or RIS to a user. The noise variance at each user is -124 dBm. Monte Carlo simulations are carried over 100 channel realizations. Each channel realization is associated with a random user location and a random fading coefficient.Fig. <ref> illustrates the total BS's transmit power versus the required SINR level with different numbers of BS's antennas and RIS's reflective elements. The results indicate that the proposed FA prevails the AO approach in terms of lower power consumption. The superior performance of the FA approach over its AO counterpart can be explained as follows. As the AO approach approximates non-convex problem (<ref>) by two convex sub-problems (<ref>) and (<ref>), the solution obtained by the AO approach is not necessary the global optimal solution of the original problem (<ref>). On the other hand, the proposed FA possessing both exploitation and exploration abilities can effectively handle such non-convex problem and obtain much better solution than its counterpart. The results shown on Fig. <ref> verify the ability of the proposed FA in handling highly nonlinear and multimodal optimization problems.It can be observed from Fig. <ref> that at a given number of RIS's reflective elements, the performance gap between the proposed FA and the AO decreases when the number of BS's antennas increases. For example, when N_t=20, the gaps are, respectively, around 7.5 dB and 3.5 dB with M_t=3 and M_t=8. Fortunately, at a given number of BS's antennas, the performance gap improves when the number of RIS's elements increases. For instance, with M_t=8, the performance gap increases from around 3.5 dB to 4.5 dB when N_t increases from 20 to 30. Interestingly, the FA performs especially well with a relatively high ratio of N_t/M_t, i.e., the performance gap is around 9.5 dB with the ration of 30/3 while it is around 3.5 with the ratio of 20/8. The results can be explained as follows. A higher number of RIS's reflective elements gives more degree of freedom for the FA to perform. Moreover, the channel between the RIS and these users plays a higher role than that between the BS and the RIS does as the former is closer to these users. Last but not least, the performance gaps slightly decrease at relatively high SINR level especially when the N_t/M_t ratio is relatively low. For example with the ratio of 20/8, the performance gap is around 1.8 dB at SINR of 20 dB compared with around 3.5 dB at the other SINR levels, i.e., see the bottom-right corner figure of Fig. <ref>. This is because of a fact that the FA has reached its limit of exploration with N=120 fireflies, at a stricter constraint condition.We now compare the computational complexities of the AO and FA approaches for the experiments presented on Fig. <ref>. As N_t is larger than M_t, from Lemma <ref> one can show that the dominant term of the complexity of the AO approach is n_0 N_t^61/2. 
Similarly, from Lemma <ref> one can conclude that the dominant term of the complexity of the FA approach is TN^3N_t^2. Substituting for N_t=30, n_0=10, N=120 and T=50, we can arrive at the fact that the computational complexities of the AO and FA approaches are on the same order of 𝒪( 10^10). When the numbers of antennas M_t and N_t are large, letting N_t=n_0=M_t in (<ref>), one can show that the dominant term of the complexity to attain ε-solution to (<ref>) is M_t^71/2.On the other hand, one can derive the dominant term of (<ref>) as M_t^6 when assuming T=N=N_t=M_t. Hence, the complexity of an IPM to solve (<ref>) is higher than the complexity of the proposed FA in Algorithm <ref>, i.e., 𝒪( M_t^71/2) in comparison with 𝒪(M_t^6).In Fig. <ref>, the total BS's transmit power is plotted versus the maximum of generation T used in the FA in Algorithm <ref> with different BS's antennas and RIS's reflective elements. The results indicate that the proposed FA requires around 50 to 60 generations to attain the optimal solution for all setups. Fig. <ref> illustrates the total transmit power versus the number of population N with different BS's antennas and RIS's elements. The results show that increasing the size of the firefly population enables the FA to obtain better solution. For example, the total transmit power decreases around 7 dB, 5.4 dB, 5 dB, and 3 dB, respectively, for the setups of (M_t=8, N_t=20), (M_t=3, N_t=30), (M_t=8, N_t=20), and (M_t=3, N_t=20) when the firefly population increases from 20 to 120. The performance gap at the 20 dB SINR level observed in Fig. <ref> for(M_t=8, N_t=20) can be improved 1 dB furtherwhen the population size is enlarged from 120 to 200. These total-transmit-power curves converge after N=180 as the reduction in the total transmit power is negligiblewhen the population increases to N=200 for all setups.§.§ Evaluations on RIS-aided WPTHere, we use the same setup for the RIS-aided communication system as considered in the previous section, i.e., Section <ref>. However, the EHRs are randomly placed with the distance of 2 m from the RIS. We run m_0=10 iterations to obtain the solution for the SCA approach.Fig. <ref> shows the sum-power received at EHRs versus BS's maximum transmit power with different numbers of BS's antennas and RIS's reflective elements. It is clear from the figure that the proposed FA approach outperforms the SCA approach in <cit.> in offering higher sum-power at EHRs. The performance gaps are, respectively, around 18 dB, 17 dB, 15 dB, and 14 dB for the setups of (M_t=3, N_t=30), (M_t=8, N_t=30), (M_t=3, N_t=20), and (M_t=8, N_t=20). The superior performance of the proposed FA over the SCA is due to the advantage of having exploitation and exploration abilities to handle non-convex optimization problems. On the other hand, the SCA employs the first-oder Taylor expansion to approximate the optimization problem resulting in a lower-bounded solution. Furthermore, the FA approach allocates one active beamforming vector for each EHR whereas the SCA only uses one active beamforming vector for all EHRs.The results shown on Fig. <ref> again verify the ability of the proposed FA in handling highly nonlinear and multimodal optimization problems.Comparing Figs. <ref> and <ref>, it can be observed that the FA behaves in a similar manner for both power minimization problem (<ref>) and sum-power maximization problem (<ref>). For instance, at the same value of M_t, the higher the value of N_t, the larger the performance gap is. 
At the same value of N_t, the lower the value of M_t, the bigger the performance gap is. The results also recommend to maintain a relatively high ratio of N_t/M_t to attain the best performance of the FA. Slight declines in the performance gaps are also observed at the stricter constraint of BS's transmit power, i.e., 40 dBm, as the FA's population reach their limit of exploration. We proceed by comparing the computational complexities of the SCA and FA approaches for the experiments shown on Fig. <ref>. As N_t is larger than M_t, from Lemmas <ref> and <ref>, it is clear that the dominant terms of the complexities of the SCA and the FA approaches are, respectively, m_0 N_t^3 and TN^3N_t^2. Substituting for N_t=30, m_0=10, N=100 and T=50, we can arrive at the fact that the computational complexities of the SCA and FA approaches are, respectively, on the orders of 𝒪( 10^5) and 𝒪( 10^10). When the numbers of antennas M_t and N_t are large, letting N_t=m_0=M_t in (<ref>), one can show that the dominant term of the complexity of the SCA is M_t^4.On the other hand, the dominant term of (<ref>) is M_t^6 when assuming T=N=N_t=M_t. Hence, the complexity of the SCA approach is lower than that of the proposed FA in Algorithm <ref>, i.e., 𝒪( M_t^4) in comparison with 𝒪(M_t^6). Sum-power received at EHRs are shown versus the number of maximum generations with different numbers of BS's antennas and RIS's reflective elements in Fig. <ref>. The figure reveals that the proposed FA converges after around 50 to 60 generations for all observed setups. The effect of the firefly population on the sum-power received at EHRs is illustrated on Fig. <ref>. The figure shows that all the curves converge after the population size of 80. However the difference between the EHRs' sum-power offered by 80 fireflies and that offered by 40 fireflies is no more than 0.7 dB for all observed setups. This indicates that the complexity of the proposed FA for the RIS-aided WPT sum-power maximization problem in(<ref>) can be reduced with an acceptable tradeoff in the optimality. § CONCLUSIONWe have proposed a generalized FA to find optimal solution for an optimization framework containing objective function and constraints as multivariate functions of independent optimization variables. We have adopted the proposed generalized FA to solve four representative examples of classic transmit beamforming, cognitive beamforming, RIS-aided transmit beamforming, and RIS-aided wireless power transfer. Our analyzes have indicated that the computational complexities of proposed FA approaches are less than those of their IPM counterparts, i.e., the SDP and the AO approaches, yet higher than that of the iterative and SCA approaches in large-antenna scenarios. Simulation results have revealed the fact that the proposed FA attains the same optimal solution as the IMP does for the under-investigated cognitive beamforming problem. Interestingly, the proposed FA outperforms the iterative, AO, and SCA approaches for the under-investigated classic transmit beamforming, RIS-aided transmit beamforming, and wireless power transfer problems, respectively. This confirms the effectiveness of the proposed generalized FA in handling multivariate and non-convex problems. IEEEtran 1.6[ < g r a p h i c s > ]Tuan Anh Le (S'10-M'13-SM'19) received the Ph.D. degree in telecommunications research from King’s College London, The University of London, U.K., in 2012. 
He was a Post-Doctoral Research Fellow with the School of Electronic and Electrical Engineering, University of Leeds, Leeds, U.K. He is a Senior Lecturer at Middlesex University, London, U.K. His current research interests include integrated sensing and communication (ISAC), RIS-aided communication, RF energy harvesting and wireless power transfer, physical-layer security, nature-inspired optimization, and applied machine learning for wireless communications. He served as a Technical Program Chair for the 26th International Conference on Telecommunications (ICT 2019). He was an Exemplary Reviewer of IEEE Communications Letters in 2019. Xin-She Yang obtained his DPhil in Applied Mathematics from the University of Oxford. He then worked at Cambridge University and the National Physical Laboratory (UK) as a Senior Research Scientist. He is now a Reader at Middlesex University London, and a co-Editor of the Springer Tracts in Nature-Inspired Computing. He is also an elected Fellow of the Institute of Mathematics and its Applications. He was the IEEE Computational Intelligence Society (CIS) chair for the Task Force on Business Intelligence and Knowledge Management (2015 to 2020). He has published more than 300 peer-reviewed research papers with more than 84,000 citations, and he has been on the prestigious list of highly-cited researchers (Web of Science) for eight consecutive years (2016-2023). | http://arxiv.org/abs/2310.18460v1 | {
"authors": [
"Tuan Anh Le",
"Xin-She Yang"
],
"categories": [
"cs.IT",
"eess.SP",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20231027201326",
"title": "Generalized Firefly Algorithm for Optimal Transmit Beamforming"
} |
Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India[][email protected] Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India Center for Computational Sciences, University of Tsukuba, Tsukuba 305-8577, JapanMax-Born Institut, Max-Born Straße 2A, 12489 Berlin, GermanyGenerating and tailoring photocurrentin topological materials has immense importancein fundamental studies and the technological front.Present work introduces a universal method to generate ultrafast photocurrent in both inversion-symmetric and inversion-broken Weyl semimetals withdegenerate Weyl nodes at the Fermi level. Our approach harnesses the asymmetric electronic population in the conduction band induced by an intense single-color circularly polarized laser pulse. It has been found that the induced photocurrent can be tailoredby manipulating helicity and ellipticity of the employed laser. Moreover, our approach generates photocurrent in realistic situations when the Weyl nodes are positioned at different energies and have finite tilt along a certain direction. Present work adds a new dimension onpractical applications of Weyl semimetals for optoelectronics and photonics-based quantum technologies. Tailoring Photocurrent in Weyl Semimetals via Intense Laser Irradiation Gopal Dixit January 14, 2024 =======================================================================Weyl semimetalsare topological materials that have demonstrated the potentialto convert light into electricity efficiently.The superiority of the Weyl semimetal over many materials in generating photocurrent inthe infrared region has been established experimentally <cit.>.In addition,ultrafast photocurrent fromWeyl semimetals can be a source of terahertz radiation <cit.>. Moreover, photocurrent emerges as aquintessential probe of the topological properties of quantum materials <cit.>, including device characterization <cit.>. Thus, recent developments in producingphotocurrent from Weyl semimetals make them a central focus for various applications in optoelectronics, detection, and sensing to name but a few <cit.>. Asymmetric population distribution of theelectronic excitations in Weyl semimetal renders a finite photocurrent – photogalvanic effect –which can be realized in various ways, such as the chiral magnetic effect <cit.>, via transfer of angular momentum of light to the Weyl nodes <cit.>,and nonlinear optical responses in the perturbative regime <cit.>. 
It has been shown that the inversion-brokenWeyl semimetal with gyrotropic symmetry producessecond-order nonlinear optical responses –injection and shift currents – which lead to a colossal photocurrent in TaAs <cit.>.In addition, photocurrent can exhibit a sign flip with the change in the helicity of the circularly polarized light <cit.>.The broken mirror-symmetry of Weyl nodes is a key reasonbehind helicity-sensitive photoresponse in the inversion-broken Weyl semimetals <cit.>.So far, the majority of the work on photocurrent is focused on inversion-broken Weyl semimetals with various tilts and crystal symmetries, as in the recent one in an inversion-symmetricWeyl semimetal <cit.>.Thus, a universal methodto generate photocurrent from both inversion-symmetric and inversion-broken Weyl semimetals that does not rely on such materials' symmetry details is lacking.It is a commonly accepted notion that the single-color circularly polarized light fails to generatephotocurrent inWeyl semimetals with mirror-symmetric Weyl nodes.While each Weyl node generates current depending on its chirality,the currents in a chiral pair of the mirror-symmetric Weyl nodes cancel each other.In contrast to this accepted notion, we unequivocally demonstrate that a single-color circularlypolarized light is able to generate photocurrent in mirror-symmetric Weyl semimetals. Our approach does not rely on the system's symmetryas it isequally applicable toboth inversion-symmetric and inversion-broken Weyl semimetals with isotropic band dispersionand even with all Weyl nodesat the Fermi level. Recently, bicircular laser pulses have been proposed for generatingphotocurrent in two- and three-dimensional materials, includingWeyl semimetal, described by a linear anisotropic Hamiltonian <cit.>.However, a single-color laser based photocurrent, withrelatively easy experimental setup,is highly desirable for practical purposes.Few-cycle carrier-envelope phase stabilized linearly polarized lasercan induce photocurrent as shown for graphene <cit.>.However, such photocurrent cancels out if the phase is not stabilized, outweighing its applicability. Our approach is also robust against such carrier-envelope phase stabilization. We employ three- and six-cycle laser pulses in the mid-infrared regime to generate photocurrents whose direction and magnitude can be tailored by the phase of the circular pulse. Moreover, it is observed that the photocurrent in an inversion-symmetric Weyl semimetal is sensitive to the helicity and ellipticity of the laser pulse. We start our discussion by writing the Hamiltonian of Weyl semimetals asℋ(𝐤) = 𝐝(𝐤) ·σ,with σ's being the Pauli matrices.Expressions ofthe three components of 𝐝(𝐤) for an inversion-symmetric Weyl semimetal are <cit.> 𝐝(𝐤)= [tsin(k_x a), tsin(k_y a),t{cos(k_z a) - cos(k_0 a) +2- cos(k_x a) - cos(k_y a)}],and for an inversion-broken Weyl semimetal read as 𝐝(𝐤)= [t{cos(k_0 a)-cos(k_y a) + μ[1-cos(k_z a)]}, tsin(k_z a),t{cos(k_0 a)-cos(k_x a) + μ[1-cos(k_z a)]}].Here, k_0 determines the position of the Weyl nodes, which are considered as π/(2a) for both Weyl semimetals.The Weyl nodes for inversion-symmetric and inversion-broken systems are situated at 𝐤 = [0,0,±π/(2a)] and 𝐤=[±π/(2a),±π/(2a),0], respectively.A simple cubic crystal structure is considered with lattice parameter a = 6.28 Å and isotropic hopping parameter t=1.8 eV in Eqs. (<ref>) and (<ref>).Moreover, a dimensionless parameter μ = 2 is used in Eq. (<ref>).Energy band dispersions corresponding to Eqs. 
(<ref>) and (<ref>) are shown in Figs. S1 and S2 in Ref. <cit.>, respectively. The vector potential of the circularly polarized laser is written as 𝐀(t) = A_0 e^i(ω t + ϕ) 𝐞̂_±, where 𝐞̂_± = (𝐞̂_x ± i ϵ 𝐞̂_y) corresponds to the left- and right-handed circularly polarized laser pulse with ellipticity ϵ=1. The subcycle phase of the laser pulse is denoted by ϕ, which controls the orientation of the Lissajous profile of the laser. A laser pulse having a sine-squared envelope with wavelength 3.2 μm and pulse duration ranging from ∼ 35 to 70 fs is employed to generate photocurrent. The density-matrix-based approach is used to simulate laser-driven dynamics in Weyl semimetals as discussed in Refs. <cit.>. The photocurrent originates from the population asymmetry and can be written as <cit.> 𝐉(t) = ∫_𝐤 d𝐤 [ ρ(𝐤) - ρ(-𝐤) ] ∂ℰ(𝐤)/∂𝐤, where 𝐉(t) is the total current, ρ is the residual population density after the end of the laser pulse, and ℰ(𝐤) is the energy dispersion in a Weyl semimetal. Let us analyze results for an inversion-symmetric Weyl semimetal, which exhibits a finite photocurrent along the x and y directions after the end of the laser pulse, as shown in Fig. <ref>(a). As the helicity of the laser changes from right to left, the sign of the photocurrent along the y direction flips from negative to positive, as evident from Fig. <ref>(b). To unravel the underlying mechanism for the flip, we analyzed the Lissajous profile of the vector potential in the polarization plane as shown in the insets. The change in the Lissajous curve with a change in the helicity is a primary reason for the sign flip along the y direction. This shows that the photocurrent is susceptible to the profile of the laser pulse. At this juncture, it is pertinent to know how sensitive the photocurrent is to the phase of the laser pulse. To this end, we investigate the variation in the total photocurrent and its components with respect to the phase. Figure <ref>(c) shows the insensitivity of the photocurrent with respect to ϕ, which establishes that phase stabilization is not a prerequisite to generate photocurrent in a Weyl semimetal. However, the x component (𝖩_x) changes from a positive to a negative value as ϕ changes from 0 to π, including zero at ϕ = π/2 [see Fig. <ref>(d)]. Both helicities display similar behavior for 𝖩_x, whereas the y component (𝖩_y) exhibits an opposite trend as the helicity is reversed from left to right, except at ϕ = 0 and π, where it is zero [see Fig. <ref>(e)]. Analysis of Fig. <ref> raises a crucial question about the factors determining the nonzero photocurrent and its components. The residual population in the conduction band around a Weyl node after the end of the laser is presented in Fig. <ref>. Owing to the zero band-gap nature of the Weyl node, the region around the node is significantly populated, and the population decreases rapidly as we move away from the origin. The population about the k_x = 0 plane is significantly asymmetric, which results in a nonzero photocurrent along this direction as ρ(k_x) ≠ ρ(-k_x) for both helicities. However, the population exhibits mirror symmetry about the k_y = 0 plane for ϕ = 0, which results in zero photocurrent for both helicities, as evident from Figs. <ref>(e), <ref>(a) and <ref>(b). A change in ϕ from 0 to π/4 induces asymmetry along k_y = 0, which generates a finite photocurrent, as reflected in Figs. <ref>(a) and <ref>(b). In addition, the direction of the induced asymmetry along k_y = 0 flips as we change the helicity from left to right, which results in a sign change in 𝖩_y, as shown in Fig. <ref>(e). Thus, observations in Figs.
<ref>and <ref> are consistentwith Eq. (<ref>). One of the striking features of Fig. <ref> is the extent of the asymmetries along k_x = 0 andk_y = 0 planes, which are significantly different for both helicities.Recently, it has been shown that the electronic excitation from the nonlinear part of the band dispersioncan effectuate the helicity-dependent population in an inversion-symmetric Weyl semimetal <cit.>.Therefore, owing to the unique coupling of the circularly polarized laser with the Weyl semimetal, the residual population along k_z, integrated along other directions, is sensitive to the laser's helicity <cit.>.Thus, the helicity-sensitive population asymmetry leads to different photocurrent for the left- and right-handed laser pulses as shown in Fig. <ref>.So far, we have discussed the results of the three-cycle laser pulse.It is known that the vector potential can be nonzero when the electric field iszero for a few-cycle laser pulse with stabilized carrier-envelope phase.The nonzero vector potential can induce asymmetric population and photocurrent in graphene, as discussed in Refs. <cit.>.Moreover, the resultant asymmetric population can also yield valley polarization in two-dimensional materials <cit.>. Thus, it is natural to ask about the robustness of our results with thepulse duration. Generating photocurrent in Weyl semimetals via relatively long laser pulse in mid-infrared regime is highly desirable for numerous practicalapplications <cit.>. Towards that end, let usincrease the pulse duration from ≃ 30 to 65 fs by changingthe number of cycles from three to six while keeping the intensity constant.In this case, a finite photocurrent with a relatively smaller magnitude is observed. It is found that the intensity needs to be increased by five timesto make the magnitude of the photocurrent comparable for three- and six-cycle pulses [see Fig. <ref>(a)].On comparing Figs. <ref>(c) and <ref>(a), it is evidentthat an increase in intensity leads to a reduction in the contrast between the photocurrent for different helicity.The reduction in the contrast can be attributed to the underlying mechanism of the helicity-dependent asymmetric population, which relies on the resonant excitation at various 𝐤 and thus reduces the asymmetry with an increase in intensity <cit.>. In contrast to the three-cycle pulse, 𝖩_x transits from negative to positive magnitude as ϕ changes from0 to π, whereas 𝖩_y exhibits similar behavior for three- and six-cycle pulses [see Figs. <ref>(b) and <ref>(c)].Note that the photocurrentcan be positive or negative based on whether -𝐤 or 𝐤 is more populated, whichdepends on the intensity and pulse duration <cit.>.The photocurrent is not only sensitive to the pulse duration but also to the ellipticity of the laser pulse, as shown in Fig. <ref>(d) for ϕ =0.Photocurrent monotonically reduces to zero as the ellipticity changes from one (circular)to zero (linear) for both helicities. Similar observations can be made for 𝖩_x from Fig. <ref>(e).Note that 𝖩_y is zero for ϕ =0.The generated photocurrent is nonperturbative in nature as evident from its scaling with laser's intensity[see Fig. S3 <cit.>]. Our analysis establishes that a laser pulse with definite chirality, but nonzero ellipticity,is able to engender photocurrent inan inversion-symmetric Weyl semimetal, which also encapsulates a unique coupling of chiral light with Weyl semimetal <cit.>.Our approach is equally applicable to realistic situations when the Weyl nodes are nondegenerate [see Fig. 
S4 <cit.>], situated at different energies, andhave tilt along certain direction [see Fig. S5 <cit.>]. In addition, our method produces photocurrentof the same order [see Fig. S6 <cit.>]as the one reported by Morimotoand coworkers usingbicircular counter-rotating laser pulses <cit.>.After demonstrating the photocurrent generation in an inversion-symmetric Weyl semimetal,let us focus our discussion to inversion-broken Weyl semimetal.Figure <ref> presents finite photocurrent in an inversion-broken Weyl semimetal driven by a circularly polarized laser. By the virtue of the Lissajous profile flip, the photocurrent along y direction flips its sign as the laser's helicity changes for ϕ = π/4 [see Figs. <ref>(a) and <ref>(b)].The total photocurrent does not change significantly with variation in ϕ and is identical for both helicities as shown inFig. <ref>(c).Similar to an inversion-symmetric case,𝖩_x changes its magnitude from positive to negative asϕ changes from 0 to π [see Fig. <ref>(d)], and 𝖩_y remains either positive or negative depending on the helicityexcept at ϕ = 0 and π [see Fig. <ref>(e)].Thus, the behavior of the photocurrent and its components are robust with respect to ϕ.Note that there is a finite photocurrent in the plane of polarization for other polarization directions of the laser, and can be tailored by changing ϕ. We also analyzethe residual population in the conduction band to corroboratethe photocurrent's resultsin Fig. <ref>.Significantpopulation around four Weyl nodes at 𝐤=[±π/(2a),±π/(2a),0] is observedas shown in Fig. <ref>.Population is asymmetric in nature with respect to k_x=0 plane for ϕ=0and exhibits k_y = 0 as a plane of reflection, which results in nonzero (zero)photocurrent along x (y) axis.Reflection symmetryabout k_y = 0plane is lost as ϕ changesto π/4[see Figs. <ref>(c) and <ref>(d)], which results in nonzerophotocurrent along this direction as evident from Fig. <ref>(e).In addition, the population corresponding to both helicities are identical, which is in contrast to the one observed for an inversion-symmetric Weyl semimetal.To summarize, we introduce a robust and universalmethod to generate photocurrent in bothinversion-symmetric and inversion-broken Weyl semimetals using a single-color circularly polarized light. Both Weyl semimetals have degenerate Weyl nodes at Fermi level.We unequivocally showthat phase stabilization isnot a prerequisite to generatephotocurrent inboth types of the Weyl semimetals as the generated photocurrent is insensitive to the phase of the laser pulse.Photocurrent in an inversion-symmetricWeyl semimetal is sensitive to the helicity of the laser as the left-handed circularly polarized laser yields more photocurrent in comparison to the right-handed laser.Moreover, the components of the photocurrent in aninversion-symmetricWeyl semimetal are alsosensitive to the helicity, whereasonly the y component exhibits sensitivity in case of an inversion-broken Weyl semimetal.In addition, the strength of the photocurrent reduces as the ellipticity of the laser changesfrom circular to linear.It is anticipated that the measurement of the photocurrent can quantify the coupling of spin-angular momentum of light with nonlinear band dispersion in Weyl semimetals.Our introduced method can be extended to other topological materialsfor their widespread applications inoptoelectronics and photonics.G. D. acknowledges fruitful discussions withMisha Ivanov (MBI, Berlin) and Kazuhiro Yabana (Tsukuba University).G. D. 
acknowledges financial support from SERB India (Project No. MTR/2021/000138). | http://arxiv.org/abs/2310.18145v1 | {
"authors": [
"Amar Bharti",
"Gopal Dixit"
],
"categories": [
"physics.optics",
"cond-mat.mes-hall",
"cond-mat.mtrl-sci",
"cond-mat.str-el"
],
"primary_category": "physics.optics",
"published": "20231027135141",
"title": "Tailoring Photocurrent in Weyl Semimetals via Intense Laser Irradiation"
} |
[email protected] Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, [email protected] A.G.F. and S.V. contributed equally Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USADepartment of Electrical and Photonics Engineering, Technical University of Denmark, Kgs. Lyngby, DenmarkDepartment of Physics, The Pennsylvania State University, University Park, PA 16802, USADepartment of Physics and Institute for Condensed Matter Theory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USADepartment of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USAWeyl fermions are hypothetical chiral particles that can also manifest as excitations near three-dimensional band crossing points in lattice systems.These quasiparticles are subject to the Nielsen–Ninomiya “no-go” theorem when placed on a lattice, requiring the total chirality across the Brillouin zone to vanish.This constraint results from the topology of the (orientable) manifold on which they exist. Here, we ask to what extent the concepts of topology and chirality of Weyl points remain well-defined when the underlying manifold is non-orientable.We show that the usual notion of chirality becomes ambiguous in this setting, allowing for systems with a non-zero total chirality.Furthermore, we discover that Weyl points on non-orientable manifolds carry an additional ℤ_2 topological invariant which satisfies a different no-go theorem.We implement such Weyl points by imposing a non-symmorphic symmetry in the momentum space of lattice models. Finally, we experimentally realize all aspects of their phenomenology in a photonic platform with synthetic momenta.Our work highlights the subtle but crucial interplay between the topology of quasiparticles and of their underlying manifold.Weyl points on non-orientable manifolds Marin Soljačić January 14, 2024 =======================================Weyl fermions are massless particles of definite chirality allowed by the Standard Model of particle physics <cit.>. Although they remain elusive as high-energy particles, they can emerge as low-energy excitations in quantum systems <cit.>, and their dispersion appears in certain classical systems as well <cit.>.Such Weyl quasiparticles occur near band degeneracies, known as Weyl points, in the momentum space of three-dimensional lattices, and exhibit a number of unique properties:(1) as monopoles of Berry curvature, they carry an integer topological charge—surfaces enclosing a Weyl point have a non-vanishing first Chern number—the sign of which defines their chirality <cit.>. They are therefore robust to perturbations, even some that break translational symmetry <cit.>. (2) A bulk–boundary correspondence associates Weyl points with Fermi arcs—dispersive surface states that connect Weyl points of opposite charges <cit.>.(3) In the presence of gauge fields, they can generate a violation of chiral charge conservation, a phenomenon referred to as the chiral anomaly <cit.>. Formulating lattice theories that describe Weyl fermions, or chiral fermions in a broader context, imposes global constraints on their chirality. The most noteworthy example is the Nielsen–Ninomiya theorem, that asserts, under general assumptions of locality, Hermiticity, and translational invariance, that the net chirality contributed by all Weyl fermions vanishes <cit.>. 
In lattice field theories of the Standard Model, the Nielsen–Ninomiya theorem requires additional unwanted fermion species in the theory, leading to the important problem of fermion doubling <cit.>.Additionally, since condensed matter systems are often crystalline (there is an underlying lattice), the Nielsen–Ninomiya theorem applies and constrains the total Weyl-point chirality in the Brillouin zone to vanish.This, in turn, directly impacts physical observables, e.g., the spectrum and dispersion of Fermi arcs <cit.>, the chirality of the Landau levels that emerge under an applied magnetic field <cit.>, and electromagnetic responses to circularly polarized light <cit.>. Several approaches for circumventing the Nielsen–Ninomiya theorem have been proposed, all of which violate one or more of its assumptions <cit.>; however none have explored the role of the topology of the underlying manifold. Indeed, the strong constraint imposed on lattice systems by this no-go theorem ultimately results from the topological properties of the underlying momentum-space manifold, i.e., the toroidal Brillouin zone <cit.>. Here, we demonstrate that the notions of chirality and topology for Weyl points are fundamentally altered on non-orientable momentum-space manifolds. We show that while an absolute notion of chirality of Weyl points becomes inherently ambiguous, a relative chirality still exists. We also find that Weyl points on non-orientable manifolds carry a ℤ_2 topological charge and have an associated no-go theorem that places global constraints on both the number of Weyl points and their total chirality. Further, we show that non-orientability provides a natural setting for the Nielsen–Ninomiya theorem to be circumvented in an atypical fashion.Finally, we experimentally realize such Weyl points in a photonic system endowed with synthetic momenta, paving the way to a wider exploration of the interplay between orientability and chirality. We begin by discussing our scheme for obtaining Hamiltonians of lattice systems whose momentum-space domains form non-orientable manifolds. The Bloch Hamiltonian of our lattice system H(𝐤) is invariant under translations by any reciprocal lattice vector 𝐆, i.e., H(𝐤) = H(𝐤+𝐆).This reflects a redundancy in 𝐤-space, since both 𝐤 and 𝐤+𝐆 label the same physical momentum point.By restricting 𝐤 to the set of unique momenta, the Brillouin zone takes the form of a three-dimensional torus, denoted as T^3 (<ref>a). This toroidal nature of momentum space is unavoidable for a lattice. Interestingly, under certain circumstances it is possible to subdivide the torus into smaller closed manifolds that are non-orientable <cit.>.This can be achieved, by imposing a momentum-space symmetry on the Hamiltonian, specifically a glide symmetryH(k_x, k_y, k_z) = H(-k_x, k_y+ π, k_z),where we have set the lattice constant to unity. 
This symmetry leads to a further redundancy in 𝐤-space, since the Hamiltonian, and hence its eigenstates and energy spectrum, will be identical at momenta (k_x, k_y, k_z) and (-k_x, k_y+π, k_z). This symmetry operation has order two, and hence subdivides the torus into two fundamental domains: without loss of generality we select the domain spanned by -π≤ k_x, k_z < π and -π≤ k_y < 0 as their representative. The non-symmorphic nature of the symmetry in <ref> allows for boundary identifications to be made at the k_y = 0 and k_y = -π planes in a twisted fashion (<ref>b), resulting in a closed manifold. The fundamental domain can consequently be expressed as the direct product of a non-orientable Klein bottle (K^2) in the (k_x, k_y) plane and a circle (S^1) in the k_z direction, K^2× S^1. The Klein bottle can be visualized by gluing one pair of opposite sides of a rectangle and twisting and gluing the other pair (<ref>c). <Ref>d shows an immersion of a Klein bottle in ℝ^3. We note that the direct equality in <ref>, without unitary conjugation of the Hamiltonian, is crucial, and distinguishes this symmetry from spatial symmetries, such as rotations, that subdivide the Brillouin zone into identical smaller copies, but which are open manifolds. This unitary-free symmetry is analogous to translational symmetry, which reduces the domain of the Hamiltonian from ℝ^3 to T^3, leading to identical physics at the identified boundaries. In further contrast to spatial symmetries, the non-symmorphic nature of this symmetry in momentum space leaves no momentum point invariant. Next, we consider some of the consequences of this symmetry using a two-band, spinless model on the three-dimensional cubic lattice with a Bloch Hamiltonian of the form H(𝐤) = 𝐝·σ = d_x(𝐤) σ_x + d_y(𝐤) σ_y + d_z(𝐤) σ_z, where σ_x,y,z are the Pauli matrices, and the components of 𝐝(𝐤) = [d_x, d_y, d_z](𝐤) are individually subject to <ref>. As a concrete example, we take: d_x(𝐤) = cos k_x, d_y(𝐤) = sin k_x cos k_y sin k_z, d_z(𝐤) = cos k_z + sin k_x sin k_y - 1/2, and note that, physically, the constraints on 𝐝(𝐤) imply a suppression of certain hoppings in real space (see the Supplementary Material (SM) for more details). The bands touch when |𝐝(𝐤)|=0, and isolated band touchings are Weyl points. For our model we find four Weyl points in the K^2× S^1 fundamental domain, located at (k_x, k_y, k_z) = (-π/2, -π/2, ±π/2), (π/2, -5π/6, 0) and (π/2, -π/6, 0). Their associated chiralities can be computed by enclosing each Weyl point within a small spherical shell and integrating the Berry curvature flux through it to obtain the Chern number <cit.>. In our model, we find that all Weyl points on the K^2× S^1 manifold carry a charge of +1 (<ref>a). Evidently, the Nielsen–Ninomiya theorem is circumvented on the fundamental domain since the total chirality is χ = +4. Further, as we shall explain below, the associated Fermi arcs cross the lines k_y = 0 and k_y = -π and connect Weyl points of the same charge (<ref>b). Since K^2× S^1 is non-orientable, there is no globally consistent orientation.
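As a numerical illustration of these statements, the short sketch below builds the model 𝐝(𝐤), verifies the glide symmetry, and evaluates the Berry-curvature flux of the lower band through a small sphere around the two Weyl points quoted above in the k_z = 0 plane, using discretized link variables. The sphere radius and grid size are arbitrary choices.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

def d_vec(kx, ky, kz):
    return np.array([np.cos(kx),
                     np.sin(kx)*np.cos(ky)*np.sin(kz),
                     np.cos(kz) + np.sin(kx)*np.sin(ky) - 0.5])

def lower_state(kx, ky, kz):
    H = sum(di*s for di, s in zip(d_vec(kx, ky, kz), sig))
    return np.linalg.eigh(H)[1][:, 0]                 # lower-band eigenvector

def chirality(k0, r=0.05, n=40):
    """Berry flux of the lower band through a small sphere around k0 (discretized link variables)."""
    th = np.linspace(0.0, np.pi, n + 1)
    ph = np.linspace(0.0, 2*np.pi, n + 1)
    u = [[lower_state(k0[0] + r*np.sin(t)*np.cos(p),
                      k0[1] + r*np.sin(t)*np.sin(p),
                      k0[2] + r*np.cos(t)) for p in ph] for t in th]
    flux = 0.0
    for i in range(n):
        for j in range(n):
            loop = (np.vdot(u[i][j], u[i+1][j]) * np.vdot(u[i+1][j], u[i+1][j+1]) *
                    np.vdot(u[i+1][j+1], u[i][j+1]) * np.vdot(u[i][j+1], u[i][j]))
            flux += np.angle(loop)
    return flux/(2*np.pi)

# glide symmetry of Eq. (<ref>): d(kx, ky, kz) = d(-kx, ky + pi, kz)
k = np.random.uniform(-np.pi, np.pi, 3)
assert np.allclose(d_vec(*k), d_vec(-k[0], k[1] + np.pi, k[2]))

# chirality of the two Weyl points in the k_z = 0 plane quoted above
for node in [(np.pi/2, -5*np.pi/6, 0.0), (np.pi/2, -np.pi/6, 0.0)]:
    print(node, round(chirality(node)))
```

Both nodes should return the same integer, but the overall sign of the result depends on the orientation implied by the ordering of the plaquette corners and the parametrization of the enclosing sphere; only the relative sign between nodes carries meaning.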
This is relevant for the calculation of the Chern number since it is a pseudoscalar and therefore flips sign upon orientation reversal.Thus, while an absolute sign for the chirality cannot be established, an orientation choice can be made on any finite patch within this manifold that contains all the Weyl points.This choice can be used to locally assign the signs of charges of the Weyl points, which provides an unambiguous definition of relative chirality within a single patch.For our model we find that all of the Weyl points in the fundamental domain have the same sign of the relative chirality, while the chirality ambiguity would simply flip all signs.Hence, the circumvention of the Nielsen–Ninomiya theorem is not related to the lack of a definition of absolute chirality on a non-orientable manifold. Physically, the signs of the relative Weyl-point chiralities can also be ascertained through the Fermi arc connectivity: Weyl points with Chern numbers of the same sign are connected via Fermi arcs that lie on orientation-reversing paths, paths that cross the lines k_y = -π and k_y = 0 an odd number of times (<ref>b); whereas Weyl points with Chern numbers of the opposite sign connect via Fermi arcs that intersect these lines an even number of times.In the SM, we show that these features are general by considering fundamental domains built from a different non-orientable manifold, the real projective plane RP^2.We now take a deeper look at the Nielsen–Ninomiya theorem to understand why it is circumvented in our case. For manifolds, a simple proof of the Nielsen–Ninomiya theorem can be formulated in terms of the Poincaré–Hopf theorem from differential topology <cit.>.This theorem states that for a continuous vector field, the global sum of the indices of all its isolated zeros equals the Euler characteristic of the underlying manifold on which the vector field is defined.Here, the vector field is 𝐝(𝐤), the zeros are Weyl points, and the indices characterize the winding of the vector field in the vicinity of the zeros, which are identical to the chiralities of the Weyl points.The Euler characteristic of T^3 is vanishing, and hence the chiralities of all the Weyl points in the Brillouin zone sum to zero, which concludes the proof for the Nielsen–Ninomiya theorem.However, the Euler characteristic of every closed odd-dimensional manifold is also zero, a result that follows from Poincaré duality <cit.>.This suggests that the Nielsen–Ninomiya theorem should hold on K^2× S^1, contradicting what we observe for our model in <ref>. The apparent inconsistency can be resolved by noting that, although the Hamiltonian, and hence all physical observables, are continuous across the boundary identifications that generate K^2× S^1, the vector field 𝐝(𝐤) is in fact discontinuous, and thus the Poincaré–Hopf and Nielsen–Ninomiya theorems do not apply.This can be seen by carefully analyzing the transformation properties of 𝐝(𝐤) = d_x(𝐤)k̂_x + d_y(𝐤)k̂_y + d_z(𝐤)k̂_z under the momentum-space glide symmetry. From the k_x-mirror operation, k̂_x flips sign, whereas k̂_y and k̂_z are invariant.Therefore, continuity of 𝐝(𝐤) at k_y = -π and k_y = 0 would imply that d_x(𝐤) and d_y,z(𝐤) to be respectively odd and even under the symmetry. However, since we require that the Hamiltonian be continuous, all components of 𝐝(𝐤) are even under the symmetry—this leads to the discontinuity of 𝐝(𝐤) on K^2 × S^1. We can visualize this by plotting d_x(𝐤)k̂_x on the fundamental domain and making the boundary identifications (<ref>c). 
The k_x-mirror operation is therefore responsible for the discontinuity of 𝐝(𝐤), and for the non-orientability of K^2× S^1. This implies that the non-symmorphic symmetry offers a generic mechanism for rendering vector fields discontinuous on the fundamental domain, resulting in the circumvention of the Nielsen–Ninomiya theorem. In the SM, we further explore the nature of this circumvention: first, we show that fine-tuning the components of 𝐝(𝐤) such that both the vector field 𝐝(𝐤) and the Hamiltonian are continuous on the fundamental domain restores the Nielsen–Ninomiya theorem; and second, we show a direct physical consequence of this circumvention, namely that systems exhibiting a non-zero total chirality on K^2× S^1 necessarily host gapless surface states where twisted boundary identifications are made. We will now show that Weyl points on non-orientable manifolds carry an additional ℤ_2 topological charge. This charge results in a different no-go theorem that we discuss below. To identify the ℤ_2 charge we consider topological invariants on two-dimensional gapped subspaces of our three-dimensional Brillouin zone. Explicitly, we consider fixed-k_z subspaces which form a 2-torus, T^2, on which the Chern number can be calculated. We can do so by integrating the Berry connection along one direction of the T^2 momentum subspace, say k_x, to obtain the Berry phase, γ(k_y) (<ref>a). Since k_y = -π and k_y = π represent the same point, the Berry phase is subject to γ(k_y = π) = γ(k_y = -π) mod 2π. The curves that satisfy this relation have a ℤ classification whose invariants are the winding numbers of γ(k_y) or, equivalently, Chern numbers (<ref>b). If we restrict our fixed-k_z subspace to the fundamental domain it becomes a Klein bottle, K^2 (<ref>c), and the Berry phase can be calculated in a similar fashion (<ref>c). However, now since k_y = -π and k_y = 0 are related by a k_x-mirror operation, integrating the Berry connection along these lines leads to a relative minus sign, and therefore γ(k_y = -π) = -γ(k_y = 0) mod 2π for K^2. By counting the number of crossings W_π of the Berry phase through the horizontal line γ = π, it can be shown that curves with a given W_π parity can be deformed into one another but cannot be deformed into those with a different parity. This defines a ℤ_2 invariant ν ≡ W_π mod 2 ∈ {0, 1} on K^2 <cit.> (<ref>d). This invariant is analogous to the Chern number in the sense that it is stable under the addition of trivial bands, and it leads to edge states when interfaced with a trivial system. A detailed discussion can be found in the SM. We now compute the value of ν for various fixed-k_z Klein-bottle cuts for the model in <ref>. <ref>e, f show that the value of ν(k_z) changes by unity as k_z passes through an odd number of Weyl points. This suggests that a non-trivial value of ν is associated with Weyl points of odd chirality. We show in the SM that a local Berry phase calculation for the ℤ_2 invariant can be carried out by enclosing the Weyl point within a two-sided Klein bottle <cit.>. This shows that ν is indeed sourced by Weyl points and, accordingly, we may associate a ℤ_2 charge to each. The same conclusion can be reached by relating ν to the Chern number C of the Weyl point by noticing that the Berry phase jumps by 2π C at the momentum of the Weyl point <cit.>. This leads to the relation ν = C mod 2. More details are given in the SM.
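A sketch of this diagnostic for the lattice model above might look as follows: the Wilson-loop Berry phase γ(k_y) of the lower band is accumulated along the k_x circle of a fixed-k_z Klein-bottle cut, and the boundary relation γ(k_y = -π) = -γ(k_y = 0) mod 2π can then be checked directly. The value of k_z and the grid resolutions are arbitrary choices made for illustration.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

def lower_state(kx, ky, kz):
    d = [np.cos(kx), np.sin(kx)*np.cos(ky)*np.sin(kz),
         np.cos(kz) + np.sin(kx)*np.sin(ky) - 0.5]
    return np.linalg.eigh(sum(di*s for di, s in zip(d, sig)))[1][:, 0]

def berry_phase(ky, kz, n=400):
    """Wilson-loop Berry phase of the lower band along the k_x circle, returned in (-pi, pi]."""
    kxs = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = [lower_state(kx, ky, kz) for kx in kxs]
    u.append(u[0])                                        # close the loop
    overlaps = [np.vdot(a, b) for a, b in zip(u[:-1], u[1:])]
    return float(-np.angle(np.prod(overlaps)))

kz = 0.9*np.pi                                            # a gapped Klein-bottle cut of the model above
kys = np.linspace(-np.pi, 0.0, 81)
gamma = np.array([berry_phase(ky, kz) for ky in kys])

# boundary relation on K^2: gamma(k_y = -pi) = -gamma(k_y = 0)  (mod 2*pi)
print(gamma[0], gamma[-1], (gamma[0] + gamma[-1]) % (2*np.pi))
# nu is read off as the parity of the number of crossings of gamma(k_y) through gamma = pi
```

Repeating the scan for k_z values on the two sides of a Weyl point should flip the parity of the π-crossings, which is how ν(k_z) in the discussion above changes by unity.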
Having uncovered that Weyl points carry a ℤ_2 charge, we now show that this charge is subject to a no-go theorem on K^2× S^1. The argument proceeds as follows: since the k_z direction is periodic, the value of ν at the k_z = -π and k_z = π planes must match, ν(k_z = -π) = ν(k_z = π). This forbids the presence of a net non-zero ℤ_2 charge of the Weyl points within K^2× S^1. To show this, we consider the case of a single Weyl point at the k_z = 0 plane. Because of this Weyl point, the value of ν(k_z = 0^-) would differ from ν(k_z = 0^+) by unity. Since there are no other Weyl points, it follows that the values ν(k_z = -π) = ν(k_z = 0^-) and ν(k_z = π) = ν(k_z = 0^+) would not be equal, which is not possible if the k_z-direction is periodic. Thus, a second Weyl point is needed to ensure that the invariants at the two k_z planes match. Accordingly, the total ℤ_2 charge of the Weyl points on K^2× S^1 must vanish; or, equivalently, the net Chern numbers of the Weyl points must be even, but, as we have shown, not necessarily vanishing. This no-go theorem implies that for systems obeying Eq. <ref> the minimum number of singly-charged Weyl points is two on K^2× S^1, and four on the full toroidal Brillouin zone, even under broken time-reversal symmetry. In the SM, we provide a second argument for the above based on the Fermi-arc configuration.

In this final section, we turn our attention to an experimental demonstration of Weyl points on non-orientable manifolds, which we realize in a photonic system endowed with synthetic momenta. The system considered here consists of a family of optical multilayer structures, one-dimensional photonic crystals (PhCs). The unit cells of these PhCs are composed of four dielectric layers alternating between two materials, Silicon (ε_Si = 12.5) and SiO_2 (ε_SiO_2 = 2.25), with a lattice constant a in the z-direction (<ref>a). Light propagation in such PhCs is governed by a Maxwell eigenvalue problem for the electromagnetic field eigenmodes, and their corresponding frequency eigenvalues, analogous to the problem of electrons moving in a crystalline solid <cit.>. When light propagation along the z-direction is considered, this is reduced to a one-dimensional problem which is easily solved using Bloch's theorem. The resulting field solutions form discrete frequency bands as a function of the quasimomentum, k_z, which may be separated by photonic band gaps. We introduce two geometric parameters, k_1 and k_2, to modulate the thicknesses of each of the four layers in the unit cell according to the following functions:

L_1(k_1, k_2) = (a/4)(1 + cos k_1),
L_2(k_1, k_2) = (a/4)(1 + sin k_1 cos k_2),
L_3(k_1, k_2) = (a/4)(1 - cos k_1),
L_4(k_1, k_2) = (a/4)(1 - sin k_1 cos k_2).

The two periodic parameters k_1, k_2 serve as synthetic momentum degrees of freedom which, along with the quasimomentum k_z, result in a three-dimensional toroidal parameter space within which Weyl points can exist <cit.>. We choose the functions L_1 to L_4 such that the non-symmorphic symmetry given in <ref> is satisfied in (k_1,k_2,k_z)-space. Thus the fundamental domain in (k_1,k_2,k_z)-space forms a K^2× S^1 manifold after making boundary identifications at the k_2 = -π and 0 planes. We find that two Weyl points occur in the fundamental domain, between the lowest two bands of this system (<ref>b). We calculate the charges of these Weyl points and find that they each have a relative chirality of +1, which implies that they each carry a ℤ_2 charge of ν = 1, as we explicitly show in the SM.
The total chirality of the Weyl points therefore does not vanish, similar to what was observed in the tight-binding model in <ref>. However, the total ℤ_2 charge vanishes, consistent with the no-go theorem for these charges. The higher bands host increasingly larger numbers of Weyl points, all of the same chirality, while always maintaining a vanishing total ℤ_2 charge. We discuss more details about the higher bands in the SM.

On truncating the PhCs along the z-direction, Fermi arcs are expected to emerge from the projections of the Weyl points in the surface Brillouin zone formed by (k_1, k_2). Since the Fermi arcs are localized on the surfaces, they possess an enormous linewidth generated by the strong out-coupling to plane waves in the air above the PhCs. To remedy this, we clad the PhCs with additional layers on the top surface to better confine these states. Doing so allows for the observation of the Fermi arcs in the transmission spectrum of the PhCs, a simulation of which is shown in <ref>c. When the dispersion of the Fermi arcs is plotted along a loop that encloses the projection of a Weyl point, these states fully cross the band gap with the direction of their spectral flow determined by the sign of the chirality of the enclosed Weyl point. Since our Weyl points have the same chirality, we expect the same spectral flow pattern for both nodes as simulated in <ref>d,e.

For the experiment, we fabricate the PhCs using the plasma-enhanced chemical vapour deposition (PECVD) process (further details are given in the SM). For each PhC, we set a = 200 nm and fabricate six unit cells, 24 layers, and use three additional layers that serve as cladding. We fabricate a series of samples that correspond to values of k_1 and k_2 lying on the loops that enclose the projections of the Weyl points (<ref>c). We then measure the normal-incidence transmission spectrum of each sample in the wavelength range of 650–1400 nm using a spectrometer. <Ref>d, e show the experimental results along with corresponding simulations. We see that the surface states cross the gap and that the spectral flow of the surface states is identical for both Weyl points, indicating that they carry the same chirality.

In summary, by implementing Weyl quasiparticles in lattice models with non-symmorphic momentum-space symmetries, we have explored the fate of Weyl fermions on non-orientable manifolds. On the associated fundamental domain, the Hamiltonian, its eigenstates, and all physical observables are continuous. However, vector fields whose poles or zeros are Weyl points generically become discontinuous, and therefore, the chirality of Weyl points need not sum to zero, circumventing the Nielsen–Ninomiya theorem. The underlying non-orientable domain endows the Weyl points with an additional ℤ_2 charge, whose conservation enforces a new no-go theorem. Finally, we experimentally demonstrated the phenomenology of such Weyl points in a photonic platform with synthetic momenta.

Our work suggests several new research directions. For example, one can consider other non-orientable manifolds in dimensions two and higher that might host their own unique topological invariants and new types of gapless points <cit.>. It will also be interesting to explore the properties of Landau levels originating from both real and pseudo-magnetic fields when the Weyl point chirality does not vanish <cit.>. More broadly, this opens up new avenues to explore how other gapless fermion theories fare in non-orientable settings.
We believe that the approaches introduced here may help provide answers to these fundamental questions in the future.

We thank Clifford Taubes, Terry A. Loring, Adolfo G. Grushin, and Jonathan Guglielmon for stimulating discussions. A.G.F. acknowledges support from the Henry W. Kendall Fellowship and the Whiteman Fellowship, and thanks the University of São Paulo for its hospitality, where part of this work was completed. T.C. acknowledges the support of a research grant (project no. 42106) from Villum Fonden. S.V., M.C.R., T.L.H., and M.S. acknowledge support from the U.S. Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) under Grant No. N00014-20-1-2325 on Robust Photonic Materials with Higher-Order Topological Protection. This material is based upon work also supported in part by the U. S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT, under Collaborative Agreement Number W911NF-23-2-0121. This work was carried out in part through the use of MIT.nano's facilities. | http://arxiv.org/abs/2310.18485v1 | {
"authors": [
"André Grossi e Fonseca",
"Sachin Vaidya",
"Thomas Christensen",
"Mikael C. Rechtsman",
"Taylor L. Hughes",
"Marin Soljačić"
],
"categories": [
"cond-mat.mes-hall",
"physics.optics"
],
"primary_category": "cond-mat.mes-hall",
"published": "20231027205606",
"title": "Weyl points on non-orientable manifolds"
} |
Detectable, defect-free dark photon dark matter
Zachary J. Weiner
January 14, 2024
===============================================

Absorbers in the spectrum of background objects probe the circumgalactic medium (CGM) surrounding galaxies, but its physical properties remain unconstrained. We use the cosmological hydrodynamical simulation TNG50 to statistically trace the origins of Hi Ly-α absorbers around galaxies at z = 0.5 with stellar masses ranging from 10^8 to 10^11 M_⊙. We emulate observational CGM studies by considering all gas within a line of sight velocity range of ±500 km s^-1 from the central, to quantitatively assess the impact of other galaxy haloes and overdense gas in the IGM that intersect sightlines. We find that 75 per cent of Hi absorbers with column densities log N_Hi > 16.0 trace the central galaxy within ±150 (80) km s^-1 of M_* = 10^10 (10^8) M_⊙ central galaxies. The contribution of satellites to the total absorber fraction is most significant at impact parameters 0.5 R_vir < b < R_vir, and satellites with masses below typical detection limits (M_* < 10^8 M_⊙) account for 10 (40) per cent of absorbers that intersect any satellite bound to 10^10 and 10^11 (10^9) M_⊙ centrals. After confirming that outflows are more prevalent along the minor axis, we additionally show that at least 20 per cent of absorbers exhibit no significant radial movement, indicating that absorbers can also trace quasi-static gas. Our work shows that determining the stellar mass of galaxies at z_abs is essential to constrain the physical origin of the gas traced in absorption, which in turn is key to characterising the kinematics and distribution of gas and metals in the CGM.

quasars: absorption lines – galaxies: evolution – galaxies: kinematics and dynamics – galaxies: haloes

§ INTRODUCTION The discovery of extragalactic absorption lines <cit.> shortly followed the discovery of the first quasar/quasi-stellar object (QSO) 3C 273 <cit.>. At the time, it was suggested that the absorption arose from discrete clouds of gas found in galaxies located along the line of sight (LOS) towards the background source <cit.>. Almost six decades on, the study of absorbers has evolved significantly but fundamental questions remain unanswered. One such question concerns the origin of absorbers: while absorption lines are commonly associated with galaxies near the line of sight <cit.>, what are the spatial distribution, kinematics and origin of the gas being traced in and around galaxies?
The distribution of gas in and around galaxies remains in a constant state of flux.Like a galactic bank account, reservoirs of gas are depleted by constant withdrawals in the form of star formation.More sudden ejections of gas in the form of stellar-wind-driven and AGN-driven outflows further reduce the amount of gas available for star formation but also enrich the surrounding galaxy halo <cit.>.Deposits in the form of accreting gas from dark matter filaments <cit.>, condensed material from galactic fountains <cit.> and `clumpy' accretion in the form of satellites <cit.> replenish the gas supply.These processes take place simultaneously in a delicate balance and affect the stellar properties of galaxies.The study of the circumgalactic medium (CGM), the region where inflows and outflows leave their signatures, is of paramount importance to understanding how galaxies form and evolve.Intervening quasar absorbers have long been found at the same redshift of galaxies and intersecting gaseous haloes around galaxies <cit.>.The circumgalactic medium that extends from the interstellar medium (ISM) to the intergalactic medium (IGM) can be traced by absorption lines towards background QSOs.Some surveys study the CGM by preselecting galaxies (using a property such as mass or luminosity) that are located near quasar sightlines and then searching for and analysing absorption lines in the QSO spectrum <cit.>.Alternatively, one can identify absorption lines towards a distant background source, typically of a certain species such as Hi or Mgii, and then search for galaxies at the absorber redshift <cit.>.Finally, surveys can target fields with a UV-bright QSO without a pre-selection on absorption lines or galaxies <cit.>.Despite the differing sample selections, surveys of absorption lines and their associated galaxies have benefited from the advent of integral field spectroscopy (IFS).The ability to simultaneously obtain imaging and spectroscopy has led to a proliferation of galaxy–absorber pairs <cit.>.It is then inevitable that absorption lines in QSO spectra will intersect gas associated with processes such as inflows and outflows or structures like satellites and filaments <cit.>.However, disentangling the relationship between absorber and galaxy and understanding the origin of the absorber becomes difficult for many reasons.First and foremost, we are unable to directly measure the distance between absorber and galaxy in the direction along the line of sight.The only indirect measure of distance is the line of sight velocity difference between the galaxy and absorber (Δ v_ LOS).However, this is determined by a combination of the Hubble flow and peculiar velocity of the gas cloud.Several studies using simulations show that selecting absorbers in velocity space can select for gas beyond several times the virial radius <cit.>. 
Identifying the origin of an absorber becomes further complicated in studies where galaxy overdensities are found associated with absorbers <cit.>. Typically, the galaxy found at closest impact parameter to the QSO sightline is selected as the absorber host, but other methods have also been used <cit.>. Finally, the inherent sensitivity limit in observations means that a population of low-mass and/or quiescent galaxies will remain undetected, particularly at cosmological redshifts. We can infer the existence of these galaxies from studies that do not detect any object near high column density absorbers that are expected to arise from the galaxy disk <cit.>. The combination of these factors means that assigning an origin to absorbers is challenging, but few works have attempted to quantify these effects and how they vary with properties such as species and column density.

Intervening absorption lines are expected to probe gas flows into and out of galaxies. However, distinguishing between outflows and inflows is challenging because of the line of sight velocity degeneracy between the two gas flows. Gas infalling onto a galaxy from behind and gas outflowing towards the observer will both be blueshifted with respect to the systemic redshift of the galaxy. Likewise, redshifted absorbers can originate from inflows in front of the galaxy or outflows ejecting gas away from the observer. This degeneracy is not present in down-the-barrel studies where inflows (outflows) are identified by redshifted (blueshifted) absorption against the background stellar continuum <cit.>. To overcome this uncertainty in transverse absorption-line studies, one must rely on other assumptions. One possible way to distinguish outflowing absorbers is to measure the azimuthal angle (Φ) between the absorber and a galaxy's major axis. In both observations and simulations of gas outflows that originate in the galaxy centre, galactic winds commonly form an expanding biconical shape perpendicular to the galaxy disk as this is the path of least resistance <cit.>. Hence, one could assume that outflowing absorbers can be identified by their alignment with the minor axis and inflowing absorbers with the major axis, as the accreting gas co-rotates with the galaxy <cit.>. Perhaps further evidence of this major-minor axis dichotomy can be found in the bimodal distribution of azimuthal angles for absorbers relative to their galaxy hosts <cit.>. It should follow then that metal-enriched gas will be preferentially found near the minor axis. Indeed, various simulations predict an azimuthal angle dependence in metallicity profiles <cit.>, as well as density, temperature <cit.> and magnetic field strengths <cit.>, in line with some observational studies <cit.>, although other observations find little to no evidence for such angular anisotropies <cit.>. We also expect outflowing gas to be metal-enriched, as heavy elements from the ISM are ejected into the CGM and IGM via feedback processes <cit.>. Studies of high-velocity clouds (HVCs) in the Milky Way use a combination of kinematics and metallicity to distinguish between inflows and outflows, and also calculate the mass rates of both processes <cit.>. Beyond the local Universe, some studies of Hi absorbers find a multimodal gas-phase metallicity distribution <cit.>. The separate populations have been proposed to trace outflows (metal-rich), inflows (metal-poor) and gas in the IGM (pristine) <cit.>. However, simulations do not find such large metallicity contrasts between inflows and outflows, particularly at
lower redshift where galactic winds are more efficiently recycled <cit.>.In this work, we study the origins of absorbers using the cosmological magnetohydrodynamical simulation TNG50.We explore how the line of sight velocity difference between galaxy and absorber compares with the physical distance. Then, we quantify the contributions of satellite galaxies and gas in the intergalactic medium to the fraction of absorbers with varying column densities.We also test the fidelity of azimuthal angle and metallicity assumptions when identifying gas flows in the CGM.Finally, we compare our results to studies of Ly-α absorbers at redshift z ≲ 1 such as the MUSE-ALMA Haloes survey <cit.> and the COS CGM Compendium <cit.>.We adopt a cosmology consistent with <cit.> and halo masses, circular velocities and radii are defined at 200 times the critical density of the Universe (e.g. R_ vir = R_ 200c), consistent with TNG50. § METHODS§.§ The TNG50 Simulation We present results from TNG50-1 <cit.>, the highest resolution version in the IllustrisTNG (henceforth, TNG) suite of cosmological magneto-hydrodynamical simulations <cit.>. Building on the original Illustris simulation <cit.>, IllustrisTNG incorporates magnetic fields <cit.> and amends the original Illustris galaxy formation model <cit.> with updated feedback processes <cit.>. With a box size of ∼50 comoving Mpc (cMpc) and 2160^3 resolution elements, the TNG50 simulation has a baryonic (dark matter) mass resolution of 8.5 × 10^4 M_⊙ (4.5 × 10^5 M_⊙). The relatively large volume combined with the high spatial resolution are optimal for the study of structures in the circumgalactic medium of galaxies. In the TNG simulations, galaxy stellar mass growth is regulated by supernovae (SN) and supermassive black holes (SMBH) feedback. The SN and SMBH feedback models in TNG use an isotropic energy injection. Hence, any directionality is a natural result of subsequent hydrodynamical and gravitational interactions. The model for stellar feedback uses a kinetic wind approach where star-forming gas is stochastically ejected from galaxies by Type II supernovae <cit.>. Galactic winds are isotropic and the ejected wind particles are decoupled from surrounding gas in the star-forming environment until the density or time reaches a threshold, at which point they hydrodynamically recouple and deposit their mass, momentum, and energy <cit.>. The TNG stellar feedback model produces high mass loading galactic-scale outflows which are highly directional and metal-enriched <cit.>.In TNG, supermassive black holes are seeded in haloes exceeding a total mass of ∼ 7 × 10^10 M_⊙. SMBHs then grow by merging with other black holes and accreting gas at the Eddington-limited Bondi rate. The mode in which the active galactic nucleus (AGN) ejects energy into its surroundings is dictated by this accretion rate. At low-accretion rates relative to the Eddington limit, kinetic energy is stochastically injected into neighbouring cells after enough energy is accumulated. On the other hand, thermal energy heats the surrounding gas at high-accretion rates proportional to the accreted mass. The thermal feedback mode typically dominates for SMBH masses ≲ 10^8 M_⊙, and the kinetic mode for high-mass black holes <cit.>. Feedback from SMBHs in the TNG model is the physical mechanism of galaxy quenching, ejecting gas from halo centers <cit.> out to the scale of the closure radius <cit.>, while heating gaseous haloes and thus preventing future cooling of the CGM <cit.>. 
§.§ TNG50 Galaxy sample We identify haloes and subhaloes in the simulation using the subfind algorithm <cit.>. The central subhalo is defined as the one found at the gravitational potential minimum of a friends-of-friends (FoF) halo <cit.> and the associated baryonic component is termed the central galaxy. Baryonic components of all other subhaloes associated with the FoF halo are dubbed satellites. We focus on galaxies at z = 0.5 with stellar masses (M_*) ranging from log(M_*/M_⊙) = 8.0 to 11.0 in order to match the properties of galaxies from the MUSE-ALMA Haloes survey <cit.>. The survey targets 32 high column density (log N_Hi > 18.0) Hi absorbers and finds 79 galaxies within ±500 km s^-1 of the absorbers at impact parameters ranging from 5 to 250 kpc <cit.>. We consider galaxies that are the central galaxies of their halo and measure their stellar masses within twice the stellar half mass radius. For the four stellar mass bins centred around [10^8.0, 10^9.0, 10^10.0, 10^11.0] M_⊙, we select [100, 100, 50, 20] galaxies at random within bins of ± 0.3 dex. We choose the number of central galaxies in each mass bin such that the number of `sightlines' passing within the virial radius of the galaxies in each bin is approximately equal. The physical extent of the mocks along the plane perpendicular to the projection is the minimum of [400 pkpc, 2 R_vir]. The limit of 400 pkpc is set by the physical scales covered in a single VLT/MUSE <cit.> pointing at z ≈ 0.5.

§.§ Mock absorption columns The TNG simulations track the global neutral hydrogen content of gas cells. In order to split this gas into atomic and molecular hydrogen fractions, we adopt the molecular hydrogen fraction model of <cit.> to estimate the H_2 fraction <cit.>. We then obtain the Hi mass by subtracting the H_2 gas mass from the total neutral hydrogen content. The masses of all metal ions, including Mgii, Civ and Ovi, are computed in post-processing using v17.00 of the cloudy <cit.> code, assuming collisional and photoionization equilibrium in the presence of the meta-galactic UV background (UVB) from the 2011 update of <cit.> <cit.>. Our separation of galaxies into centrals versus satellites allows us to flag each gas cell based on whether it is gravitationally bound to the central, a satellite of the central or another FoF halo along the line of sight. Gas cells not bound to any halo (largely gas in the intergalactic medium) are also flagged. We henceforth refer to this family of flags as the set of (gravitational) origin labels. Additionally, we separately categorise each gas cell as outflowing, inflowing or quasi-static using the cell's radial velocity (v_r) with respect to the central galaxy. Hence, to be classified by one of these three flags, it is a prerequisite that the gas cell is gravitationally bound to the central subhalo (designated as `central' in the origin labels).
We calculate the radial velocity of each gas cell using the scalar product of the unit vector of the gas cell position relative to the galaxy centre with the cell velocity in the frame of reference of the subhalo, excluding the Hubble flow. A radial velocity cutoff of 20 km s^-1 is used to identify inflowing and outflowing cells, that is, cells with v_r > 20 km s^-1 are outflowing and cells with v_r < -20 km s^-1 are inflowing. Cells found at velocities -20 < v_r < 20 km s^-1 are considered to be in quasi-hydrostatic equilibrium; this category encompasses gas that is static or rotationally-dominated. We also include satellite galaxies in this set of flags so that all gas cells within the central subhalo are included. Henceforth, this set of flags is referred to as the gas flow labels. We consider other choices of the radial velocity boundary and their implications in Section <ref>.

We show a cartoon depicting the two sets of labels (origin and gas flow) in <ref>. Within the central galaxy's halo, they may trace gas accretion (bluish purple), outflows (orange) or satellite galaxies (green). However, absorbers may also intersect other galaxy haloes in front of or behind the central galaxy and these haloes are depicted in yellow. Finally, there is the chance that absorbers trace overdense gas in filamentary structures (purple). The various intermixed possibilities highlight that relating the gas observed in absorption with its physical origin requires a statistical approach.

To determine the column density of the Hi and other ions, we project the gas along the line of sight around each galaxy onto a large grid with pixel size 1×1 pkpc^2, through a line of sight depth of ±500 km s^-1 relative to the systemic velocity of the galaxy. We test pixel sizes with side length 0.5, 1 and 2 pkpc, finding only marginal differences in our forthcoming results. This is consistent with <cit.>, where the H_2 and Hi column density distribution functions in TNG100 are found to differ only at column densities log N > 22 when comparing pixel sizes with side length 150 pc and 1 kpc. These regions are limited to the centres of galaxies, which comprise a small gas covering fraction when compared to the CGM. Hence, we adopt a 1 pkpc^2 pixel to match the median CGM resolution of ≈1 pkpc in TNG50 at 0.1 R_vir, which increases for larger radii. The ±500 km s^-1 line of sight depth includes the contribution from the Hubble flow and we choose ±500 km s^-1 to be consistent with surveys of absorber-galaxy pairs <cit.>. We use the standard cubic-spline deposition method <cit.> to spatially distribute the masses of each gas component. Every pixel in this map corresponds to a single, observable `sightline'. For each pixel, we calculate the mass contribution from cells belonging to the origin and gas flow flags mentioned earlier: [IGM, central, satellite and other halo] (origin) and [inflow, quasi-static, outflow and satellite] (gas flow). We assign a flag to each pixel identifying which component contributes the most mass for that given sightline and then we determine the gas column density using the chosen flag only. The orientations of the central galaxies are random as we use three projections of a given halo that correspond to the fixed x-, y- and z-axes of the simulation volume. In addition to the column density, we calculate other quantities measurable in observations such as the mass-weighted (using Hi) metallicity of the gas, the line of sight velocity difference between absorber and central galaxy (Δv_LOS), the two-dimensional projected distance from the galaxy centre (impact parameter, b) and the azimuthal angle between the galaxy's major axis and absorber (Φ).
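A schematic sketch of this per-cell classification and gridding is given below. It is an illustration rather than the exact pipeline used here: array names are placeholders, the deposition uses a simple nearest-grid-point histogram instead of the cubic-spline kernel described above, and units are assumed to be pkpc for positions, km s^-1 for velocities and M_⊙ for masses. The total Hi mass map can then be converted into a column density map by dividing each pixel by its area and by the hydrogen atom mass.

```python
import numpy as np

V_CUT = 20.0                                     # km/s radial-velocity boundary
FLAGS = ["inflow", "quasi-static", "outflow", "satellite"]

def gas_flow_flag(pos, vel, gal_pos, gal_vel, in_satellite):
    """Assign a gas flow label to cells bound to the central subhalo."""
    r = pos - gal_pos                            # positions relative to the centre
    v = vel - gal_vel                            # peculiar velocity, no Hubble flow
    v_r = np.einsum('ij,ij->i', r, v) / np.linalg.norm(r, axis=1)
    flag = np.where(v_r > V_CUT, 2,              # outflow
                    np.where(v_r < -V_CUT, 0,    # inflow
                             1))                 # quasi-static
    flag[in_satellite] = 3                       # cells bound to a satellite
    return flag

def dominant_flag_map(x, y, hi_mass, flag, half_size=200.0, pix=1.0):
    """Hi-mass-weighted dominant flag per pixel (1 pkpc^2 by default)."""
    edges = np.linspace(-half_size, half_size, int(2 * half_size / pix) + 1)
    maps = np.zeros((len(FLAGS), len(edges) - 1, len(edges) - 1))
    for i in range(len(FLAGS)):
        sel = flag == i
        maps[i], _, _ = np.histogram2d(x[sel], y[sel], bins=[edges, edges],
                                       weights=hi_mass[sel])
    # pixels with zero total Hi mass should be masked downstream
    return np.argmax(maps, axis=0), maps.sum(axis=0)
```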
We also include mass-weighted quantities such as the gas temperature, the distance along the projection axis (d_LOS) and the distance from the galaxy centre (3D distance), which are not directly measurable in observations. These quantities are computed using only the cells that belong to a given flag. Ultimately, we generate a catalogue of absorbers with the properties discussed above that are labelled by two sets of flags that inform us of the origin and gas flow.

In <ref>, we display four randomly selected galaxies in the sample that span the entire range of stellar masses from roughly 10^8.0 to 10^11.0 M_⊙. The left column of plots shows Hi column density maps where the subhalo identification, stellar mass and physical scale are also given. The central and rightmost plots separately colour each pixel by the dominant mass contribution for that sightline using the origin and gas flow flags. We keep the colour schemes for the two sets of flags consistent throughout the remainder of this work. The dashed white circle marks the virial radius of the central galaxy. From the first and second columns, we find that the dense Hi gas typically arises from the centres of centrals, satellites or other haloes, while lower column density sightlines trace the intergalactic medium. Moreover, we see that satellites and other coincident galaxy haloes can dominate the projected Hi mass for a sightline even at small impact parameters from the central galaxy (most prominently seen in the second and third rows). Using the gas flow flags, we show that gas in the CGM has a rich structure in radial velocity. For the central galaxies depicted at larger inclinations (second and third rows), outflows appear to be preferentially directed along the minor axes. Gas accretion can be seen directed along the major axes but can arise from the central galaxy stripping gas from another galaxy (second row) or the accretion of gas clouds that co-rotate with the disk (bottom row). These velocity structures are washed out when the galaxy appears more face-on (top row), where much of the gas is quasi-static.

We note that there are subtleties in the categorisation of pixels using the origin and gas flow depicted in Figures <ref> and <ref>. First, we separate gas inflows from satellites that may be accreting onto galaxies. Components of accreting satellites that have been stripped and are infalling onto the central galaxy will be considered `inflows', whereas gas that is still more tightly bound to the satellite will be labelled as `satellite'. Similarly, large-scale streams from the IGM, as seen in the rightmost panels for 10^8 and 10^11 M_⊙ central galaxies, will be considered inflows only if they are gravitationally bound to the central galaxy. In the middle column, yellow and purple dominate the regions beyond the virial radius because other haloes and the intergalactic medium contribute the most Hi for these sightlines. As we do not consider the second halo term and IGM in the rightmost column, the black pixels correspond to regions where none of the four gas flow labels [inflow, quasi-static, outflow and satellite] meet a column density threshold of log N_Hi > 13.0 (see <ref>). It is also for this reason that the central galaxy and satellites appear more extended in the right column of that same figure (e.g. satellites in the top left corner of the bottom two rows). Pixels previously assigned to the `IGM' or `other halo' origin labels may change into the most dominant of the four gas flow labels, assuming the column density threshold is met.
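As with the column densities, the observable-like quantities listed above reduce to Hi-mass-weighted averages over the cells of a given flag. A short sketch of the metallicity case is shown below (placeholder array names; nearest-grid-point binning as in the earlier sketch).

```python
import numpy as np

def hi_weighted_metallicity_map(x, y, hi_mass, metallicity, edges):
    """Hi-mass-weighted metallicity per pixel; NaN where no Hi mass is deposited."""
    mz, _, _ = np.histogram2d(x, y, bins=[edges, edges],
                              weights=hi_mass * metallicity)
    m, _, _ = np.histogram2d(x, y, bins=[edges, edges], weights=hi_mass)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(m > 0, mz / m, np.nan)
```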
§ WHERE IS THE GAS ALONG THE LINE OF SIGHT? In observational studies of absorber-galaxy systems, the impact parameter and line of sight velocity are two reliably determined measurables that inform us about the true distance between absorbers and galaxies, and, by extension, determine how absorbers are related to surrounding galaxies. While the impact parameter is a measure of the physical distance in the plane of the sky, the velocity difference (Δv_LOS) does not directly correlate with the distance along the line of sight (d_LOS). An absorber's line of sight velocity is a combination of the gas flow being traced, the viewing angle of the galaxy and the Hubble flow. Hence, it is important to evaluate whether absorbers with Δv_LOS within 300 to 1000 km s^-1 of the galaxy systemic redshift <cit.> may lie beyond the virial radius. In this section, we examine the relation between Δv_LOS and line of sight distance, and how the relation evolves as a function of the gas origin and motion (e.g. absorbers tracing the IGM, accretion and outflows). Additionally, we estimate the ranges in Δv_LOS where the majority of the CGM gas mass lies.

In <ref>, we show the relationship between the line of sight distance and line of sight velocity for absorbers associated with a stacked sample of central galaxies with stellar mass log(M_*/M_⊙) = 10.0 at z = 0.5. This particular figure is limited to partial Lyman limit systems (pLLS; 16.0 < log N_Hi < 17.2) associated with 10^10 M_⊙ central galaxies. Because this choice is arbitrary, we discuss the effects of changing column density and stellar mass. We separate absorbers by their respective origin or gas flow. The top-left panel includes all absorbers associated with the central galaxy. The panels labelled as `Inflow' and `Outflow' in the top row are decompositions of the top-left plot (we have not included `quasi-static' gas with radial velocities |v_r| < 20 km s^-1). We display the properties of absorbers that trace gas in satellites, the intergalactic medium or another halo along the line of sight in the bottom row, respectively, from left to right. The colour of each hexbin is the median impact parameter of all absorbers found in the bin. For gas associated with the central galaxy and its satellites, we find a wide array of line of sight distances for any given line of sight velocity, with values varying by up to ∼ 500 pkpc at fixed Δv_LOS. One cause for this is the degeneracy between the physical location of the absorbing gas and its kinematics. Redshifted absorbers can trace gas accreting onto the galaxy from in front or gas ejected away from the observer. Similarly, blueshifted absorption lines can probe outflowing gas expelled towards the observer or gas infalling onto the galaxy from behind. This is evident from the black horizontal bars that display the average d_LOS associated with absorbers in 40 bins with width ∼10 km s^-1 in <ref>. Gas at negative line of sight velocities traces both inflowing (outflowing) gas at positive (negative) distances down the sightline. For centrals, the black line is almost horizontal at d_LOS = 0 kpc and there is roughly 50 kpc of scatter in d_LOS at all Δv_LOS.
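The degeneracy between distance and velocity can be made explicit with a simple first-order estimate, in which Δv_LOS is the sum of the gas cell's LOS peculiar velocity and the Hubble flow across d_LOS. The snippet below is only illustrative and uses a Planck-like cosmology from astropy rather than the exact cosmology adopted here.

```python
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

def delta_v_los(v_pec_los_kms, d_los_proper_mpc, z=0.5):
    """Approximate LOS velocity offset of gas with LOS peculiar velocity
    v_pec_los_kms (km/s), located d_los_proper_mpc (proper Mpc) down the sightline."""
    hubble_flow = (cosmo.H(z) * d_los_proper_mpc * u.Mpc).to(u.km / u.s)
    return v_pec_los_kms * u.km / u.s + hubble_flow

# Gas with no peculiar motion ~5 proper Mpc behind a z = 0.5 galaxy already
# appears offset by roughly +450 km/s in this cosmology.
print(delta_v_los(0.0, 5.0))
```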
We also find absorbers associated with satellite galaxies, the intergalactic medium and other galaxy haloes. In particular, <ref> shows that partial Lyman-limit systems with |Δv_LOS| < 500 km s^-1 can trace gas in the IGM or intersect other haloes that are located more than several pMpc along the line of sight. While these absorbers are typically found at larger impact parameters, we still find that gas in the IGM and other haloes can mimic CGM gas of the central galaxy. We quantify the relative frequency of these absorbers in Section <ref>. The black horizontal bars indicating the median line of sight distance for gas tracing satellites are similar to those for absorbers tracing accretion, highlighting that satellites are infalling onto the central and their average motion resembles gas accretion. In contrast, the almost linear relationship between d_LOS and Δv_LOS for gas in the IGM and other haloes is caused by the Hubble flow dominating at larger scales. We note that the range in LOS distances of the `IGM' and `Other Halo' panels is not the same as the other four. Nevertheless, it is clear from the inflow-outflow degeneracy combined with the contribution of absorbers from satellites, the IGM and other haloes that the line of sight velocity difference between absorber and galaxy is not a direct indicator of actual distance. By extension, a small velocity separation (e.g. |Δv_LOS| < 500 km s^-1) does not imply an absorber is associated with a given galaxy. This result is in line with the findings of previous works using the EAGLE simulations <cit.> to analyse Hi around massive galaxies (M_200 ≳ 10^12 M_⊙) at z ∼ 2-3 <cit.>, and Mgii and Ovi around stellar mass 10^9 to 10^11 M_⊙ galaxies at z ≈ 0.3 <cit.>. More recently, the FOGGIE simulations find that structures of varying temperatures, metallicities and densities in the CGM that span > 100 kpc can be tightly constrained in velocity space <cit.>. We similarly show here that the velocity restrictions we apply in observations lead to LOS path lengths that are large compared to the virial radius of galaxies. In addition, when multiple absorption components are separated in velocity (from tens to hundreds of km s^-1) down a single sightline in observations, our results emphasise that even these smaller velocity differences can lead to different spatial locations.
Studies find discrepancies larger than 2 dex in metallicity for distinct absorption components <cit.> and we suggest that these separate components may arise from different physical origins such as the intergalactic medium. We are able to give a simple prescription for an appropriate line of sight velocity limit that encompasses most of the Hi gas belonging to the CGM of the central using the distribution of absorbers in velocity space. Using a kernel density estimator with contour levels of [0.25, 0.50, 0.75] in <ref>, we find that 75 per cent of LLS absorbers associated with the central are found within ±150 km s^-1 of galaxies with stellar mass 10^10 M_⊙. In addition, we find that our prescription of ±150 km s^-1 varies marginally for absorbers of Hi column densities larger than 10^17.2 atoms cm^-2. However, this value is strongly dependent on the stellar (and more importantly, the halo) mass and decreases to ±70 (120) km s^-1 for M_* ≈ 10^8 (10^9) M_⊙ galaxies and is ±250 km s^-1 for the M_* ≈ 10^11 M_⊙ galaxies in our sample. Fundamentally, the increasing virial velocities for more massive haloes drive the variation in v_LOS values. However, this relation is not linear, with Δv_LOS > V_200 (halo circular velocity) for smaller haloes, and vice versa, for larger haloes. We also find these values are roughly consistent with the findings of <cit.>, where the strongest absorption is confined to be within ±200 km s^-1 of an M_* ≈ 10^10 M_⊙ galaxy at z = 2. We refrain from a more direct comparison with FOGGIE because their cosmological zoom simulations are at a different redshift and do not include contributions from the IGM or other haloes. These Δv_LOS estimates serve as useful indicators of whether an Hi absorber is within the halo of a nearby galaxy.

§ THE PHYSICAL ORIGIN OF ABSORBERS While the limits in Δv_LOS provided in the previous section are practically useful for observational studies of absorber-galaxy systems, it is also important to quantify the likelihood of absorbers originating from the central galaxy as opposed to satellites, other galaxy haloes along the line of sight or the intergalactic medium. Here, we explore how the fraction of sightlines that trace gas outside the central subhalo changes as a function of impact parameter and Hi column density, two observables that are readily available. In addition, we specifically study the contribution of lower mass satellites that are below the current sensitivity limit of many observations (M_* ≲ 10^8 M_⊙).

§.§ What fraction of Hi arises from the central halo? We have shown that the combination of gas peculiar velocities and the Hubble flow leads to absorbers of various origins masquerading as belonging to the central galaxy. Here, we quantify the fractional contribution of the various gas origins as a function of the impact parameter. We use the impact parameter in particular because typically, the galaxy at lowest impact parameter is considered to be associated with the absorber in observational surveys <cit.>. Likewise, we consider all gas within ±500 km s^-1 of the central galaxy to mimic a typical line of sight velocity cut in observations <cit.>. In <ref>, we show the fraction of absorbers that trace gas in the central (blue), a satellite of the central (green), another galaxy halo along the line of sight (yellow) and gas in the intergalactic medium (purple). We express the fraction as a function of impact parameter (normalized by the virial radius on the top x-axis) for each of the four origins.
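For reference, such origin fractions can be tabulated in a few lines once every sightline carries an impact parameter and a dominant-origin flag; the sketch below uses placeholder column names rather than those of any released catalogue. Applying the same function within separate column density cuts (LLS, sub-DLA, DLA) yields the kind of decomposition described next.

```python
import pandas as pd

ORIGINS = ("central", "satellite", "other halo", "IGM")

def origin_fractions(cat, b_edges):
    """Fraction of sightlines dominated by each origin per impact-parameter bin.
    `cat` is a DataFrame with columns 'b' (impact parameter, kpc) and 'origin'."""
    binned = cat.assign(b_bin=pd.cut(cat["b"], bins=b_edges))
    counts = (binned.groupby(["b_bin", "origin"], observed=False)
                    .size()
                    .unstack(fill_value=0)
                    .reindex(columns=list(ORIGINS), fill_value=0))
    return counts.div(counts.sum(axis=1), axis=0)   # each row sums to unity
```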
The twelve panels span four stellar mass bins in ascending order from left to right and three Hi column density ranges: LLS (top; 17.2 < log N_Hi < 19.0), sub-DLAs (centre; 19.0 < log N_Hi < 20.3) and DLAs (bottom; log N_Hi > 20.3). We use bins of size 10 (20) kpc to calculate the fractions for the LLS and sub-DLA (DLA) column densities. We find that these fractions are strongly dependent on the stellar mass of the central galaxy. Absorbers associated with more massive central galaxies dominate out to larger physical impact parameters, but this is caused by more massive galaxies having a larger virial radius (see normalized b/R_vir on the top x-axis). Indeed, the slopes of the central curves when viewed as a function of the normalized impact parameter are almost identical across our four stellar mass bins. Moreover, for stellar masses log(M_*/M_⊙) ≤ 10.0, we often find a peak in the contribution from satellites before the `other halo' term begins dominating. This is caused by the fact that satellites are limited to being gravitationally bound to the central. It is only for the 10^11 M_⊙ central galaxies, where the virial radius is larger than the mocks (as seen in <ref>), that the contributions from satellite galaxies have not peaked. We note again that these size restrictions for the most massive galaxies in the plane perpendicular to projection are physically motivated by the scales covered by the field of view of VLT/MUSE at z ∼ 0.5, which is the median redshift of the MUSE-ALMA Haloes survey <cit.>. Ultimately, the general trend we find, which is roughly independent of the mass and column density, is that the central galaxy dominates the covering fraction up to b ≈ 0.5 R_vir, followed by satellite galaxies until the virial radius and then other galaxy haloes beyond R_vir.

There are also differences between the Hi column density bins (increasing column density from top to bottom rows). <ref> reveals a steeper decline in absorbers associated with centrals for higher Hi column densities, as these densities drop rapidly in the CGM towards larger impact parameters <cit.>. This is accompanied by a greater fractional contribution of satellites and other haloes at these impact parameters, as a result of sightlines intersecting the dense ISM of these objects. The curves are more irregular because strong absorbers are rarer, even with larger bin sizes.
We also see that the contribution from gas in the intergalactic medium (purple) is minor for column densities ≥ 19.0, but increases in prevalence at lower N_Hi and larger impact parameters, as expected since this gas is predominantly not Hi as a result of ionization by the UV background.This becomes clearer in <ref> where we plot the trend of Hi column density as a function of the impact parameter for the four stellar mass bins.The solid coloured line in the primary panel is the median N_Hi value at a given impact parameter for M_* = 10^11 M_⊙ central galaxies.The faded grey curves represent the N_Hi-b for the lower stellar mass bins which are shown and coloured in the secondary panels.Grey vertical lines mark the impact parameters where the median column density drops below the DLA, sub-DLA, LLS and pLLS thresholds.The coloured area is proportional to the fractional contribution (right y-axis) of the four origin flags for each range in column density.For instance, DLAs found towards our M_* = 10^11 M_⊙ central galaxies (primary panel) can be attributed 75 per cent of the time to the central, 15 per cent to satellites and the remainder to other galaxy haloes.This calculation is made using DLAs found across all impact parameters, as opposed to <ref> where the fractions are given as a function of b.The black dashed vertical lines mark 0.15, 0.5 and 1 times the virial radius to mark the inner and outer boundaries of the CGM. In all panels of <ref>, we find an increase in the incidence of absorbers tracing the intergalactic medium as we move to larger impact parameters (and thus lower median Hi column densities). This highlights that weaker absorbers have a high likelihood of originating from the IGM, with more than 30 per cent of < 10^16 atoms cm^-2 absorbers tracing gas outside of haloes.We also see a larger fraction of absorbers belong to the central across all column densities for the more massive haloes.Crucially, we note that this signal is caused by our restrictions in the mock size perpendicular to the plane of projection which is intended to mimic observational surveys that have fixed observed size.The less massive central galaxies will have a larger portion of their maps outside the virial radius and hence, have a higher incidence of absorbers arising from other haloes or the intergalactic medium.We also see in <ref> that the median Hi column density is larger at all impact parameters ≳ 10 kpc for the more massive central galaxy.At smaller impact parameters, this trend does not hold because of the supermassive black hole at the centre of the most massive galaxies ejecting gas via the kinetic feedback mode <cit.>.In addition, we find a scatter in N_Hi up to 3 dex across the range of impact parameters.This scatter can be attributed to the intrinsic inhomogeneity of gas in the circumgalactic medium, but our results also emphasise that we may be intersecting gas clouds outside galaxy haloes in the IGM.Despite the various potential contributions from satellites, interloping haloes or the intergalactic medium, the trend of decreasing Hi column density with impact parameter remains clear <cit.>. §.§ What fraction of absorbers intersect low-mass satellites? 
The Milky Way halo contains tens of satellite galaxies <cit.> and we expect absorption lines to occasionally intersect these smaller haloes found in the CGM of more massive galaxies. The probability of sightlines intersecting satellites, particularly faint ones, is of interest to observers as strong absorbers occasionally do not have galaxy counterparts down to some magnitude (R ≈ 25 mag) and/or star-formation rate (SFR) limit (SFR ≈ 0.1 M_⊙ yr^-1) <cit.>. Here, we estimate the gas mass contribution from low-mass satellite galaxies in the TNG50 simulation. This is motivated by the lower completeness expected for galaxies below some stellar mass limit in observational surveys. For a given TNG50 mock with satellites and other haloes, each gas cell is assigned a subhalo ID. We calculate the subhalo ID that dominates the Hi mass for each pixel in our 2D projection down the line of sight of the mock. By crossmatching this list of IDs with sightlines that have been assigned a `satellite' or `other halo' flag, we then compute the fractional contribution of galaxies with varying stellar masses to the total number of sightlines that are dominated by gas in satellites or a secondary halo along the sightline. This is depicted in <ref> where the four colours correspond to the four stellar mass bins in our sample. We show the cumulative contribution from satellites (left) and other haloes (right) with stellar masses (M_satellite) that begin at 10^-4 times the mass of the central galaxy (M_central). The shaded region reflects the scatter when considering different column density bins, from 13.0 < log N_Hi < 16.0 to log N_Hi > 20.3 systems.

For M_* = 10^10 and 10^11 M_⊙ central galaxies, M_* < 10^8 M_⊙ satellite galaxies only contribute roughly ten per cent to the total number of absorbers that intersect any satellite (see dashed vertical and horizontal lines on the left plot). This fraction increases to ≈40 per cent of sightlines for a central galaxy mass of 10^9 M_⊙. For other haloes along the line of sight, the contribution from M_* < 10^8 M_⊙ galaxies remains roughly constant at 25 per cent (right). There is a larger scatter of roughly 0.2 between the different column density bins, but the upper and lower bounds are not driven by any particular column density. This effect highlights that even low-mass galaxies in the CGM of massive galaxies and along the line of sight contribute to absorption and might remain undetected in observational surveys of absorbers, particularly at high redshift.

There are occasional cases in observational studies of galaxy counterparts to absorbers where no galaxy is detected near the QSO sightline down to some limit in stellar mass or star-formation rate <cit.>. At z > 2 in surveys of Ly-α emitters (LAEs) associated with Hi absorbers, there are also systems where LAEs are found at impact parameters > 50 kpc from DLAs <cit.>. Given that the Hi is not expected to extend out to such large distances from simulations <cit.>, the more plausible explanation is that we are tracing low-mass satellite galaxies near the QSO sightline. Figure <ref> suggests that satellite galaxies at z = 0.5 are responsible for up to 50 per cent of absorbers at impact parameters within the virial radius of the central, particularly at 0.5 R_vir < b < R_vir. We then see in <ref> that M_* ≤ 10^8 M_⊙ satellites make up ≈10 (40) per cent of the total number of sightlines that trace satellites gravitationally bound to central galaxies with stellar masses 10^10 and 10^11 (10^9) M_⊙.
Hence, strong Hi absorbers without a nearby galaxy counterpart may simply be associated with objects below the sensitivity limit (typically 10^8 M_⊙ at z = 0.5 for a one-hour MUSE exposure). Such objects require deeper observations or larger telescopes such as the Extremely Large Telescope (ELT), Giant Magellan Telescope (GMT) or the Thirty Meter Telescope (TMT).

§ GAS FLOWS IN THE CGM As depicted in Figures <ref> and <ref>, disentangling inflowing from outflowing gas in observations of the circumgalactic medium is not straightforward. Two commonly used indicators to observationally infer whether the gas is inflowing or outflowing are the azimuthal angle (Φ) and metallicity (Z). In this section, we characterise how the inflowing to outflowing gas fraction changes as a function of Φ and study the origins of the metallicity anisotropy between the major and minor axes of galaxies.

§.§ Using the azimuthal angle to identify gas flows In transverse absorption-line studies, the 2D projected azimuthal angle is often advocated as an indicator to distinguish outflows from inflows <cit.> where accretion is assumed to align with the major axis (Φ = 0^∘) and outflows with the minor axis (Φ = 90^∘). To mimic observations, we create 2D images of galaxies in TNG50 using the stellar density. These images are fully idealised mocks that neglect observational effects such as noise, instrumental response and seeing, and the effects of dust attenuation and scattering <cit.>. We run statmorph <cit.>, an algorithm utilised to determine the morphology of sources, on the mock images to model the galaxy profile and return the position angle (PA) and axis ratio, b/a. From here, we generate the azimuthal angles for each pixel in the galaxy projection, excluding the pixels within 5 kpc of the galaxy centre as Φ values are unreliable at small distances. For the figures in this section, we select only galaxies with b/a values below the median of the sample. This is to exclude galaxies that may be at lower inclinations (more face-on) where the measured position angle is no longer reliable. We adopt this rather simple approach to measure the PA and b/a because our goal is not to test the accuracy of azimuthal angle measurements in observations but rather, to provide a first-order approximation for what observers might measure.
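A minimal sketch of this per-pixel azimuthal-angle assignment is given below; the position angle is assumed to be supplied externally (for example from the statmorph fit described above), and Φ is folded into [0°, 90°] so that 0° corresponds to the galaxy major axis and 90° to the minor axis, with pixels inside 5 kpc masked as in the text.

```python
import numpy as np

def azimuthal_angle_map(nx, ny, x0, y0, pa_deg, pix_kpc=1.0, r_min_kpc=5.0):
    """Azimuthal angle (degrees) of each pixel relative to the galaxy major axis.
    (x0, y0) is the galaxy centre in pixel coordinates and pa_deg the position
    angle of the major axis measured from the image x-axis."""
    yy, xx = np.mgrid[0:ny, 0:nx]
    dx = (xx - x0) * pix_kpc
    dy = (yy - y0) * pix_kpc
    theta = np.degrees(np.arctan2(dy, dx)) - pa_deg   # angle w.r.t. major axis
    phi = np.abs((theta + 180.0) % 360.0 - 180.0)     # fold into [0, 180]
    phi = np.where(phi > 90.0, 180.0 - phi, phi)      # fold into [0, 90]
    phi[np.hypot(dx, dy) < r_min_kpc] = np.nan        # unreliable near the centre
    return phi
```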
We show the fraction of absorbers that belong to one of the four gas flow flags as a function of the azimuthal angle in <ref>. The shaded background conveys the fraction of absorbers belonging to the four categories using bins of ten degrees in azimuthal angle. The solid white line depicts the ratio of outflows to inflows, also as a function of the azimuthal angle, where the horizontal dashed line marks an equal ratio. In the primary panel, we consider Lyman-limit systems found at an impact parameter of 50 to 100 kpc from 10^9 M_⊙ central galaxies. This cut in column density enables us to maximise the signal as higher N_Hi systems are much rarer tracers of gas flows. In the bottom panels, we consider smaller impact parameters (left), lower column densities (middle) and larger stellar masses (right).

We see a clear trend in the primary plot where the outflow to inflow ratio increases towards larger azimuthal angles. While similar signals have been detected before in simulations <cit.>, previous studies calculate the azimuthal angle of pixels in a frame where the galaxy is viewed directly edge on. The method used in this paper, based on observational practices, only loosely constrains the inclination of the galaxy by considering axial ratios that are below the median value. This is a reassuring confirmation of observational practices where one uses Φ to distinguish gas flows <cit.>. Additionally, we also account here for the contribution from satellite galaxies and gas in quasi-hydrostatic equilibrium. Still, we confirm that the ratio of outflows to inflows is ≈0.5 near the major axis, rising to a peak of ≈2 at the minor axis in the primary panel.

In the main panel, we have considered a particular subset of absorbers (LLS at impact parameters 50 to 100 kpc) associated with M_* = 10^9 M_⊙ central galaxies. When varying the impact parameter, Hi column density and central stellar mass, we obtain more varied results. Looking at the bottom left panel of <ref>, where we consider only absorbers within the inner 50 kpc of the galaxy, we still find an increasing outflow to inflow ratio, but the proportion of inflows dominates at all azimuthal angles. A similar signal is found when considering typical Ly-α forest column densities (13.0 < log N_Hi < 16.0, bottom middle) and larger stellar masses (bottom right): namely, inflows dominate in number even near the minor axis. The influence of impact parameter and stellar mass is particularly significant, with only half the gas flows at the minor axis attributed to outflows. In particular, we find that accretion dominates sightlines passing through central galaxies belonging to the M_* ≈ 10^10 and 10^11 M_⊙ bins.
Current observational studies, which are more sensitive to massive galaxies, find signatures of outflows to be more common than inflows <cit.> and we discuss the possible reasons for this in Section <ref>. Looking specifically at the fraction of gas in quasi-hydrostatic equilibrium, we find little evolution with azimuthal angle. The bottom left panel of <ref> shows that the fraction does not vary at lower impact parameters and only mildly increases for lower column densities (bottom middle). Due to an increasing fraction of inflowing absorbers, there is much less gas in quasi-hydrostatic equilibrium for M_* = 10^10 M_⊙ central galaxies (bottom right). Curiously, we find a tentative signal of an increasing satellite LLS fraction at 50 < b < 100 kpc as we move towards the minor axis for both M_* = 10^9 and 10^10 M_⊙ central galaxies. For these stellar masses where outflows are typically starburst-driven, the gas density is expected to be greater along the minor axis. Hence, there should be stronger ram pressure stripping, leading to more quiescent and less gas-rich satellites near Φ = 90^∘ in the CGM. We find instead that satellites along the minor axis are more gas-rich, reminiscent of the anisotropic galaxy quenching found in galaxy clusters <cit.>. A possible cause for this is if satellite galaxies along the major axis have been accreted at earlier times and are hence more likely to be quenched <cit.>. At this stage, the origins of this anisotropic LLS fraction from satellites remain ambiguous.

In <ref>, we also find that 40 to 60 per cent of gas at all azimuthal angles arises from gas that is in quasi-hydrostatic equilibrium or in satellites. The contribution from satellites is most prominent in the primary panel, which is consistent with our findings in <ref> where satellites dominate around 0.5-1 R_vir. At smaller impact parameters and lower column densities, their contribution diminishes. The contribution from quasi-static gas is significant for impact parameters up to 200 kpc, all central galaxy stellar masses and Hi column densities tested in this study. While the ratio of outflowing to inflowing sightlines increases with larger azimuthal angles, this value ignores the contribution from gas moving at slower radial speeds. These results show that a fraction of absorbers in galaxy haloes may be associated with gas that is static or rotating. The latter has been observed in 60 per cent of Ly-α absorbers at z ≲ 0.03 <cit.> and recent simulations show that rotational support is significant in the CGM <cit.>. In this work, we have set an arbitrary restriction on the radial velocity (±20 km s^-1) to distinguish between inflows, quasi-static gas and outflows. Hence, the fraction of gas that is rotating without loss of angular momentum (quasi-static) compared to co-rotating inflows depends on the radial velocity boundary.
Any increase in magnitude of the v_r cutoff used naturally leads to an increase in the fraction of gas in quasi-hydrostatic equilibrium. Likewise, setting all negative (positive) radial velocities to be counted as inflows (outflows) leads to no absorbers being labelled as quasi-static. However, we find that unless the v_r cutoff is > 150 km s^-1, the ratio of inflows to outflows remains similar; there is just a proportional increase in the quasi-static absorber fraction. At high radial velocities, we only expect to find outflows and hence, the fraction of inflows to outflows decreases dramatically. Therefore, we caution that interpreting Hi absorbers as inflowing or outflowing using their projected azimuthal angle relative to the major axis should include further consideration of other properties such as b, N_Hi and M_*.

§.§ The metallicity of gas flows Another absorber property used to differentiate outflows from inflows is the gas phase metallicity <cit.>. In <ref>, we plot the metallicity as a function of the azimuthal angle. The four coloured lines correspond to the four gas flow flags and the dashed white line is the median metallicity of all absorbers in a given azimuthal angle bin with size 10^∘. The shaded regions indicate the 0.5σ uncertainty in the metallicity for only outflows and inflows. We adopt the same fiducial parameters (LLS found 50 to 100 kpc from M_* = 10^9 M_⊙ central galaxies) in the primary panel as <ref> and the same changes in the secondary panels. For almost all azimuthal angles and choices of b, N_Hi and central galaxy stellar mass, the metallicity (with respect to solar) of outflows is consistently 0.2 to 0.5 dex larger than that of inflows. The few exceptions occur for absorbers at azimuthal angles Φ < 30^∘ found towards M_* = 10^10 M_⊙ central galaxies (bottom right), where the accreting gas is roughly equal in metallicity to the outflowing gas, perhaps due to recycling. We also find that absorbers are more metal-rich around more massive central galaxies, while the impact parameter and Hi column density change the normalisation by < 0.1 dex. It is for this same reason that the metallicity of satellites is typically lower than the median value; as satellites are usually less massive, their average metallicities are also going to be lower.

Similar to the findings of <cit.>, we find a positive gradient in the median metallicity of absorbers as a function of azimuthal angle. The magnitude of the difference in Z between the minor and major axes is typically of order 0.2 dex, but does diminish with increasing central stellar mass. More curiously, we find that this metallicity difference is only partly driven by an increase in the fraction of absorbers tracing metal-enriched outflows. In general, gas that is inflowing or in quasi-hydrostatic equilibrium also increases in metallicity towards the minor axis, hinting that the pollution of metals into the circumgalactic medium near the minor axis also results in metal-enriched accretion in the form of recycled gas <cit.>. As the fraction of quasi-static and inflowing gas typically dominates the outflow fraction (bottom panels in <ref>), the increase in median metallicity of these types of absorbers contributes more significantly to the median metallicity of the CGM. We discuss the metallicity distribution of inflowing and outflowing gas in more detail in Section <ref>.

§ OBSERVATIONAL COMPARISONS

§.§ Determining the host galaxy of absorbers With the proliferation of IFS surveys targeting absorbers, there are an increasing number of cases where
multiple galaxies are found within some velocity cutoff with respect to the absorber <cit.>.This points to a more complex relationship between galaxies and absorbers, and one that may only be captured by larger statistical studies.At the same time, it is also critical to relate the properties of the CGM with galaxy properties in individual cases to understand how galaxies evolve <cit.>.However, the fidelity of our current methods used to associate galaxies with absorbers remains ambiguous.We have shown already in Figures <ref> to <ref> the importance of considering other haloes along the line of sight and M_* < 10^8 M_⊙ satellites.The predominant method of assuming the galaxy at lowest impact parameter to the QSO sightline is the host neglects both these considerations.Here, we analyse the potential host galaxy of absorbers using the results of these simulations and prescribe a more reliable method for associating absorbers to galaxies. In <ref>, we show the most likely origin flag of an absorber found at some impact parameter and line of sight velocity difference to the central galaxy.The likelihood changes significantly as a function of stellar mass and the three panels represent stellar masses M_* = [10^9, 10^10, 10^11] M_⊙ from left to right.We also impose a Hi column density limit of > 18.0; both the stellar mass and column density restrictions are intended to match the MUSE-ALMA Haloes sample (black stars).At lower column densities, there is an increasing contribution from the intergalactic medium, replacing the contribution from other haloes and satellites.However, the extent of the central galaxy (blue) remains similar as a function of Δ v and b, even when considering all absorbers with > 13.0.If we look at the distribution of absorbers around galaxies from the COS-Halos survey <cit.>, we find that the centroids of the total Hi absorption profile are predominantly within the region dominated by the central galaxy after matching stellar masses.However, the centroids of individual Hi components are occasionally found at large velocities from the galaxy systemic redshift and are more likely associated with other haloes or the IGM (if at lower column density).The line of sight velocity difference between absorber and galaxy remains an important consideration and we suggest that when associating galaxies with absorbers, both Δ v and b are considered <cit.>.While we have coloured each hexbin by a given flag, we emphasise that this is inherently probabilistic and there is much uncertainty, particularly at the boundaries between the central and satellites or other haloes. 
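A minimal sketch of how such a Δv-b diagnostic can be constructed from labelled mock sightlines is given below (Python; the array names and bin edges are placeholders rather than the values actually used to build the figure).

from collections import Counter
import numpy as np

def origin_map(b_kpc, dv_kms, origins, b_edges, dv_edges):
    # Most common origin flag (IGM, central, satellite or other halo)
    # in each (impact parameter, |line-of-sight velocity|) bin.
    b_idx = np.digitize(b_kpc, b_edges) - 1
    v_idx = np.digitize(np.abs(dv_kms), dv_edges) - 1
    origins = np.asarray(origins, dtype=object)
    grid = np.full((len(b_edges) - 1, len(dv_edges) - 1), None, dtype=object)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            sel = (b_idx == i) & (v_idx == j)
            if sel.any():
                grid[i, j] = Counter(origins[sel]).most_common(1)[0][0]
    return grid

Observed absorber-galaxy pairs can then be placed on the same grid to read off the most probable origin, with the caveat noted above that the assignment is inherently probabilistic.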
Though <ref> is a useful observational diagnostic for determining the potential origins of Hi absorbers by moving beyond just using the impact parameter, it underscores that absorbers may not have a single well-defined origin and the need for statistical studies involving larger samples.This point is implicitly made by works studying absorbers arising from the intragroup medium where absorbers are not associated with a single galaxy <cit.>.Thus far, we have adopted a galaxy-centric approach, that is, we study how the properties of Hi absorbers vary as we move from a selected central galaxy.Many of the current IFS absorber studies are centred on a background source which is arbitrarily positioned in the sky with respect to foreground galaxies <cit.>, including the MUSE-ALMA Haloes survey <cit.>.To produce results that can be directly used to better understand observations of this type, we select a random pixel within a 200 kpc square centred on the central that has a column density > 18.0.This becomes the location of our absorber and we calculate the impact parameter of the remaining pixels in the field with respect to the absorber.We can then determine the fraction of absorbers belonging to the four origin flags in a similar manner to <ref>.For each mock, we repeat this process five times for each projection direction, leading to 15 iterations in total for each central subhalo.The results are shown in <ref>.Using a randomly-selected absorber, we plot how the fraction of pixels tracing the central, satellites, other haloes and the IGM surrounding the absorber changes as a function of impact parameter.The dashed lines corresponding to the four origins show this variation with b and the coloured background elucidates the relative fractions within each 10 kpc impact parameter bin.To make an appropriate comparison with the MUSE-ALMA Haloes survey, we re-analyzed this observational data and mimicked the definitions of centrals, satellites and other haloes used in TNG50.We derive the halo mass and virial radius using the stellar-to-halo mass relation <cit.>. 
We define the central galaxy as the most massive galaxy in the MUSE field of view and hence, other galaxies associated with the absorber are described as satellites (other haloes) if they are inside (outside) R_ vir of the central <cit.>.While satellites can still be bound beyond R_ vir, we choose to adopt this simpler definition for this exercise.Then, we apply a column density cutoff of > 18.0 and only consider M_* = 10^9 to 10^11 M_⊙ central galaxies to match the MUSE-ALMA Haloes survey.The impact parameter of these various galaxies in the data are plotted using stars with a filled colour corresponding to their origin.For clarity, we plot central galaxies in the top row, followed by satellites and other haloes as we move down.The relative abundances of central, satellite and other halo galaxies as a function of impact parameter from the MUSE-ALMA Haloes survey is broadly consistent with the predictions from TNG50.This highlights that the most massive halo is not necessarily the host galaxy of the absorber and that satellites and other haloes can be significant contributors to the Hi gas down any line of sight.A larger sample size, particularly for the 10^9 M_⊙ central galaxies, is required to make more quantitative comparisons.We note that the excess of satellites at low impact parameters for the 10^11 M_⊙ central galaxies is caused by the DLA towards Q1130-1449 being associated with more than 10 galaxies <cit.>.The right plot of <ref> also highlights that gas ejected from AGN that is no longer gravitationally bound may contribute significantly to the covering fraction around massive haloes.While this signal may be enhanced by the kinetic feedback mode of AGN in the IllustrisTNG model pushing large quantities of cool gas into the CGM, it highlights the potential for absorbers to be unbound to any galaxy which is ignored in most observational studies <cit.>. §.§ Metallicity distribution of Hi absorbersWe now move to statistical studies on absorber metallicities and their possible origins in and around galaxies.The distribution of pLLS and LLS absorber metallicities at z ≲ 1 appears to be bimodal from observations <cit.>.In <cit.>, the authors find tentative evidence for a metallicity bimodality using absorbers with column densities 16.2 ≲≲ 18.5 at z ≲ 1.In a larger sample, <cit.> find that the bimodality is driven by pLLS absorbers where the metallicity peaks at [X/H] =-1.7 and -0.4.This bimodality is purported to arise from low-metallicity absorbers tracing gas accretion or overdense regions of the Universe <cit.>, while more metal-rich absorbers are associated with the CGM of galaxies.However, a study of L^* galaxies from the COS-Halos Survey finds a unimodal metallicity distribution centred on [X/H] ≈-0.5 <cit.> and argues that low-metallicity absorbers are not necessarily freshly accreted gas.As our TNG50 sightlines are centred on central galaxies with specific halo masses, we do not attempt to reproduce the number counts in these observations <cit.>, but rather investigate whether a bimodality exists in the normalized sense using the various absorber origins. 
In <ref>, we display the metallicity distributions of absorbers separated by their origin in TNG50. We first separate absorbers by whether they trace inflows or outflows in the top panels and then, absorbers tracing the IGM from absorbers tracing the central galaxy in the bottom panels. All distributions are normalized to have equal peak heights and do not reflect the actual incidence of absorbers. While outflowing gas is typically more metal-rich than inflowing gas, we find only a ∼0.2 dex difference in their metallicity distribution peaks for the fiducial parameters of 16.0 < log N_Hi < 19.0 and M_cen = 10^9 M_⊙. This difference increases to ∼0.4 dex if we only consider 13.0 < log N_Hi < 16.0 absorbers (top right), but almost disappears when considering M_* = 10^11 M_⊙ central galaxies (top centre). Regardless, any difference we find is minor when compared to observations where the difference in [X/H] is 1.3 dex. The strong overlap in metallicities is in part caused by efficient wind recycling, where previous metal-enriched outflows later become inflows <cit.>. We choose not to include absorbers that are in quasi-hydrostatic equilibrium or associated with satellites in this sample for clarity but note that their presence further dilutes any signal of bimodality (as seen in the unimodal `central' distributions in the bottom panels). These results are consistent with previous simulated metallicity distributions <cit.> and in tension with some observations <cit.>.

In the bottom panels of <ref>, we compare the metallicity distributions of gas tracing the intergalactic medium with gas bound to the central galaxy. Curiously, we find absorbers with 16.0 < log N_Hi < 19.0 and metallicities [X/H] > 0.1 that trace gas in the IGM. When we compare these values with the gas bound to a M_* = 10^9 M_⊙ central galaxy (bottom left), we find the IGM gas is metal-enriched with respect to the galaxy. These metal-enriched IGM absorbers arise from the expelled material of more massive haloes found along the sightline, as feedback processes drive metal-enriched gas to large distances from the central regions of galaxies <cit.>. This is why we see an alignment in metallicity distributions between the peak IGM component and the M_* = 10^11 M_⊙ central galaxy sample in the bottom middle panel. When we select for Hi column densities log N_Hi < 16.0, we see a 3 dex discrepancy between the distribution peaks, with IGM absorbers found at lower Z as expected.

Whether the metallicity distribution of partial Lyman-limit systems is unimodal or bimodal at z = 0.5 is tied directly to the incidence of absorbers belonging to the various origins (e.g.
IGM and central).We reiterate that any direct comparison with observational studies is difficult as our sightlines are inherently found near galaxies.However, we do find that in the TNG50 simulation, the metallicity is not a perfect discriminator of gas that is accreting or outflowing <cit.>.Moreover, the peak of the metallicity distribution for absorbers associated with 10^10 and 10^11 M_⊙ central galaxies is consistent with both the metal-rich peak from the bimodal distribution of <cit.> ([X/H] ≈ -0.4) and the peak of the unimodal distribution ([X/H] ≈ -0.5) in <cit.>.While the peaks align, <cit.> finds a spread in metallicities around L^* galaxies that is roughly twice as large as predicted in TNG50.In <ref>, we observe that the increased spread towards lower [X/H] may arise from less-massive satellite or secondary halo galaxies that are typically lower metallicity from the mass-metallicity relation <cit.>.We note that satellite galaxies have not been shown for clarity, but their metallicity distribution also extends to smaller [X/H] values.The lower incidence of super-solar metallicities ([X/H] > 0.5) may be caused by the limited CGM resolution in TNG50 (∼ 1 pkpc beyond 0.1R_ vir), although we note that <cit.> find an upper limit in [X/H] of roughly 0.5.Ultimately, our results suggest that inferring the origin of gas from metallicity requires caution.For random sightlines towards UV-bright QSOs <cit.>, we find that gas expelled from massive haloes into the intergalactic medium are a source of metal-enriched absorbers.Likewise, the large spread in [X/H] values around L^* galaxies may imply that absorbers are perhaps associated with lower-mass satellites that have lower metallicity.§ DISCUSSION §.§ The CGM of star-forming and quiescent galaxiesThe analysis in this work focuses on the CGM properties of galaxies with varying stellar masses at z = 0.5, coinciding with the M_* range and redshift of galaxies in MUSE-ALMA Haloes.At any given stellar mass bin, there are galaxies above, below and on the SFR-M_* main sequence.Here, we discuss how the CGM properties of galaxies with similar stellar mass depend on the star-formation rate.The central galaxies in this work are randomly selected from TNG50 and enforced to have stellar masses within 0.3 dex of M_* = [10^8.0, 10^9.0, 10^10.0, 10^11.0] M_⊙.In total, there are[100, 100, 50, 20] galaxies within each bin, and we further categorise the galaxies as star-forming or quiescent.The distinction is made somewhat arbitrarily; we consider galaxies with SFRs in the lowest quartile of each bin as quiescent, and the remainder are star-forming.We also tested separating star-forming and quiescent galaxies using the median SFR but found little variation in the results presented below.We find marginal differences in the results presented in this paper after separating star-forming and quiescent galaxies.The fraction of sightlines with Hi as a function of impact parameter for the various origins (<ref>) are within the uncertainties for both groups.Likewise, there are marginal differences in the fraction of inflows, quasi-static gas, outflows and satellites as a function of azimuthal angle between star-forming and quiescent galaxies (<ref>).These results highlight that the stellar mass (because it is tied to halo mass) is the key driver of the amount, distribution and extent of the Hi in the circumgalactic medium.Additionally, it is unclear in observations whether the cool gas content in the CGM of star-forming and quiescent galaxies differ.From studies of 
luminous red galaxies, the cool gas mass in the CGM of passive galaxies appears comparable to star-forming galaxies <cit.>.As Hi is almost ubiquitous in galaxy haloes <cit.>, Ly-α is not the ideal absorption line to trace differences between the circumgalactic media of star-forming and passive galaxies.Instead, it appears that the Ovi covering fraction is significantly larger around star-forming galaxies <cit.>.To understand the different CGM properties of star-forming and quiescent galaxies in simulations, we require a more systematic study across all stellar masses rather than isolated bins in this work.The simplistic method of separating star-forming and passive galaxies used might weaken any potential differences in their CGM properties.However, we note that our results are consistent with observations and highlight that stellar (halo) mass drives the Hi amount, extent and distribution around galaxies. §.§ The incidence of inflows in observations and simulationsThe incidence of inflowing gas observed in both tranverse absorption line and down-the-barrel observational studies is inferred to be ≈10 per cent <cit.>.Comparatively, our TNG50 results indicate a minimum inflow fraction of ≈20 per cent which rapidly increases for higher stellar masses (<ref>).We emphasise that observational studies of gas flows using absorption-lines towards background sources provide only fragmentary information about the physical origin of the observed gas and perhaps our assumptions on whether the gas traced is inflowing or outflowing require reassessment (see Section <ref>).Moreover, when we collapse the mock into a 2D projection, there is almost always both inflowing and outflowing gas in each sightline.This conflicts with assumptions made in most transverse absorption-line studies, where an entire absorption system is typically assigned to a single gas flow <cit.>. In reality, the absorption could be produced by a combination of the two processes <cit.> and current observational methods do not capture this adequately.This may be one cause of the disparity in inflow incidences between observations and simulations.Despite these observational and modelling challenges, the incidence of sightlines tracing inflowing gas remains an important and potentially constraining quantity to measure. 
§.§ The number of clouds along each sightlineFor the characterization of different origins, and measurements of absorber properties per sightline, we assume that each sightline is dominated by a single cloud of gas belonging to one of the flags.In reality, there is the chance of intersecting multiple clouds which we commonly find in absorption line observations.If there are multiple clouds along the line of sight found at different velocities, then the measurement of properties such as the line of sight distance or metallicity may be averaged out.However, <cit.> shows that the typical number of clouds intersected with > 16.0 along a sightline is approximately two near the centre of Milky Way-like galaxies at z = 0, decreasing to a single cloud towards the virial radius for TNG50.Hence, at this resolution in the CGM, it is likely we are only intersecting a single cloud.In the event there are multiple clouds, we typically find that a single cloud dominates the Hi mass.§.§ Extension to different gas phasesThus far, we have analysed Hi absorbers in this study.Other common metal absorption lines used in studies of absorber counterparts include Mgii, Civ and Ovi.In the top row of <ref>, we show column density maps of each ion for the same M_* ≈ 10^10 M_⊙ galaxy displayed in <ref>.While the Hi and Mgii absorbers are densest at the centre of galaxies, the Civ and Ovi absorbers that trace hotter gas phases are more diffuse and extended <cit.>.Following the previous analysis for Hi, we designate each pixel by the identical set of origin and gas flow flags using the aggregate Mgii, Civ or Ovi mass along each sightline.Looking at the second row of <ref>, we find that the more massive central galaxy dominates most sightlines over satellite and other halo galaxies for Civ and Ovi.This is caused by smaller haloes not reaching the virial temperatures (≈10^5.5 K) required to form Ovi.The structure of outflows in the bottom row also differs between the ions.A visual inspection shows that ions tracing the hotter phases of gas have a larger proportion of sightlines dominated by outflows.This is in line with the earlier discussion of the high incidence of Hi inflows; because the outflowing gas is hotter, it is more likely to reach the temperatures required to produce Civ and Ovi.We present here only some preliminary insights into how different absorber species have varying origins.A complete analysis will be released in an upcoming work. § CONCLUSIONSWe analyse the physical origins of Hi absorbers in and around central galaxies with stellar masses 10^8 to 10^11 M_⊙ at z = 0.5 using the TNG50 simulation.We consider all gas cells within ± 500of the central galaxy and categorise all cells based on whether they are gravitationally bound to the central galaxy, a satellite of the central or another halo.Additionally, we also consider gas that is unbound and in the intergalactic medium.These four flags [IGM, central, satellite, other halo] form the origin labels.We also derive a second set of gas flow labels: [inflows, quasi-hydrostatic, satellites and outflows]. 
We then connect absorber properties such as impact parameter, line of sight velocity difference, line of sight distance, metallicity, azimuthal angle and column density with their origin or gas flow.Our major findings are summarised here.* The line of sight velocity difference is a poor indicator of the physical distance between the absorber and galaxy, particularly if the absorber traces gas in the intergalactic medium or another halo near the sightline (<ref>).Hence, studies that find large metallicity, temperature or density discrepancies between individual components <cit.> may be tracing gas arising from different origins or separated by large physical distances.< 19.0 Hi absorbers with velocity separations |Δ v_ LOS| < 20can trace gas that is several Mpc away.However, we find that 75 per cent of Hi absorbers with column densities > 16.0 can be found within ± 150of M_* = 10^10 M_⊙ central galaxies.This value decreases to 70 and 120for 10^8 and 10^9 M_⊙ galaxies, respectively and increases to 250for 10^11 M_⊙ galaxies. * The fraction of absorbers that are associated with the central galaxy decreases with impact parameter (<ref>).This decline is steepest for higher Hi column densities and lower stellar masses.From impact parameters b > 0.5R_ vir, satellite and other halo galaxies begin to dominate the fraction of absorbers.Contributions from the intergalactic medium are marginal, particularly for higher M_* central galaxies and larger N_Hi.However, we see in <ref> that > 80 per cent of absorbers with column densities 13.0 << 16.0 trace gas in the IGM. * Satellite galaxies with stellar mass M_* < 10^8 M_⊙ can contribute ∼40 per cent of the total sightlines that intersect satellites of a 10^9 M_⊙ central galaxy.This fraction reduces to ∼10 per cent for 10^10 and 10^11 M_⊙ centrals. For secondary haloes down the line of sight, 25 per cent of absorbers are attributed to M_* < 10^8 M_⊙ galaxies.These findings are a possible explanation of Hi absorbers that do not have detected galaxy counterparts near the background source in observational studies. * After modelling the azimuthal angle of absorbers in a similar manner to observers, we find that the relative incidence of outflows compared to inflows increases as we move towards the minor axis. This signal is strongly dependent on the impact parameter and central galaxy stellar mass; at smaller b and larger M_*, inflows begin to dominate at all azimuthal angles.The larger incidence in simulations may be attributed to the singular classification of absorbers as either outflowing or inflowing; in TNG50, there is typically both inflowing and outflowing gas found along each sightline. * The median metallicity of absorbers increases towards the minor axis, consistent with the findings of previous theoretical studies <cit.> and observations <cit.>. When decomposing the signal into individual gas flows, we find that the increasing incidence of outflows does not drive the increasing metallicity, but rather both inflows and outflows increase in Z towards larger azimuthal angles.* We find that analysing absorber-galaxy systems from the MUSE-ALMA haloes survey using the position of absorbers with respect to galaxies in Δ v-b space is useful to statistically associate the gas with its surrounding galaxies. Instead of assuming the galaxy nearest the absorber is the host galaxy of the gas, we suggest these Δ v-b diagrams (<ref>) are a useful diagnostic that takes into account both the line of sight velocity difference and impact parameter. 
* In <ref>, we show a direct comparison between the distribution of galaxies around absorbers in MUSE-ALMA Haloes and TNG50. The results are broadly consistent and highlight that observers may need to consider absorbers not bound to any galaxy as a plausible source of gas being probed, particularly around larger haloes. * There is a marginal (≲ 0.5 dex) difference in the peaks of the inflowing and outflowing gas metallicity distributions. The difference decreases for larger central galaxy stellar masses and increases for lower Hi column densities but never reaches the large differences seen in observational studies. In line with previous simulations <cit.>, we find that the observed bimodal metallicity distribution is not seen as strongly even after accounting for the various physical origins of the gas. We expect to expand this study to other common ions in absorption-line studies such as Mgii, Civ and Ovi and to higher redshifts where more recent surveys are searching for galaxy counterparts to absorbers <cit.>.Upcoming surveys with instruments such as the Dark Energy Spectroscopic Instrument <cit.>, WEAVE <cit.> and 4MOST <cit.> will produce millions of absorber-galaxy pairs and their results require a statistical interpretation.With the rapid proliferation of studies of galaxies surrounding absorbers of varying species and redshifts, it becomes more important than ever to interpret the results of these observational studies using simulations. § ACKNOWLEDGEMENTSThis research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project #564 (The Cosmic Baryon Cycle from Space).This research is supported by an Australian Government Research Training Program (RTP) Scholarship. EMS and SW acknowledge the financial support of the Australian Research Council through grant CE170100013 (ASTRO3D). DN and RR acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG) through an Emmy Noether Research Group (grant number NE 2441/1-1). RR is a Fellow of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). The TNG50 simulation was run with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Project GCS-DWAR on the Hazel Hen supercomputer at the High Performance Computing Center Stuttgart (HLRS). Additional computations were carried out on the Vera machine of the Max Planck Institute for Astronomy (MPIA) operated by the Max Planck Computational Data Facility (MPCDF).§ DATA AVAILABILITY The IllustrisTNG simulations, including TNG50, are publicly available and accessible at <www.tng-project.org/data>, as described in <cit.>. Data directly related to this publication is available on request from the corresponding author.mnras § PROBABILITY THAT THE CENTRAL GALAXY HOSTS AN ABSORBERCurrent Hi Ly-α surveys span a large variety of column densities and redshifts.In Sections <ref> and <ref>, we highlight the challenges when attributing absorbers to a single galaxy.The commonly-adopted approach of assuming that the galaxy at lowest impact parameter hosts the absorber is far too simplistic.In <ref>, we showed how the probability that an absorber belongs to a central galaxy varies with b and Δ v. 
We have only considered central galaxies with stellar masses M_* = 10^9, 10^10 and 10^11 M_⊙ and > 18.0 absorbers at z = 0.5 which are most relevant for the MUSE-ALMA Haloes survey.Here, we extend the plots to more specific bins in Hi column density (pLLS, LLS, sub-DLA and DLA) and also include M_* = 10^8 M_⊙ central galaxies.We will expand the scope of this work to larger redshifts and Mgii, Civ and Ovi absorbers in a forthcoming paper. | http://arxiv.org/abs/2310.18310v2 | {
"authors": [
"Simon Weng",
"Celine Peroux",
"Rahul Ramesh",
"Dylan Nelson",
"Elaine M. Sadler",
"Martin Zwaan",
"Victoria Bollo",
"Benedetta Casavecchia"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231027175557",
"title": "The physical origins of gas in the circumgalactic medium using observationally-motivated TNG50 mocks"
} |
We test the viability of training machine learning algorithms with synthetic Hα line profiles to determine the inclination angles of Be stars (the angle between the central B star's rotation axis and the observer's line of sight) from a single observed medium-resolution, moderate S/N, spectrum. The performance of three different machine learning algorithms was compared: neural networks tasked with regression, neural networks tasked with classification, and support vector regression. Of these three algorithms, neural networks tasked with regression consistently outperformed the other methods with an RMSE of 7.6^∘ on an observational sample of 92 Galactic Be stars with inclination angles known from direct Hα profile fitting, from the spectroscopic signature of gravitational darkening, and, in a few cases, from interferometric observations that resolved the disc. The trained neural networks enable a quick and useful determination of the inclination angles of observed Be stars which can be used to search for correlated spin axes in young open clusters or to extract an equatorial rotation velocity from a measurement of v sin i.

stars: emission-line, Be – (stars:) circumstellar matter – stars: early-type – stars: fundamental parameters – methods: data analysis – methods: statistical

§ INTRODUCTION

§.§ Machine Learning in Astronomy

Astronomers are increasingly turning to machine learning to provide automated detection, analysis, and classification in response to large scale surveys that produce unprecedentedly large datasets <cit.>. Machine learning differs from traditional model-fitting techniques in that the model is constructed according to the input data rather than being predefined <cit.>. The flexible nature of machine learning algorithms makes them suited to a wide variety of tasks. In astronomical research, common uses of machine learning include classifying objects of interest from large databases <cit.>, dimensionality reduction <cit.>, anomaly detection <cit.>, building models that use more parameters than is possible with classical models <cit.>, and visualizing datasets with a high number of parameters <cit.>.

Broadly speaking, machine learning can be divided into supervised and unsupervised algorithms. In supervised machine learning, a set of input features is mapped to a target variable based on labels provided by a human expert <cit.>. In unsupervised machine learning, labels are not included and the algorithms are frequently used to cluster data into groups, reduce dimensionality, and detect anomalies <cit.>.

§.§ Machine Learning in Be Star Research

Classical Be stars are rapidly-rotating, B-type, main sequence stars that are surrounded by an equatorial, circumstellar, decretion disc <cit.>. The defining characteristic of a Be star is the presence of emission in the hydrogen Balmer series, notably Hα, owing to the presence of the disc <cit.>. The exact mechanism that puts the disc gas into orbit is unknown, but it is thought to be related to near critical rotation, perhaps driven by the redistribution of angular momentum within the star <cit.>. Machine learning has emerged as a promising technique to identify Be star candidates in databases produced by large photometric and spectroscopic surveys.
<cit.> used wavelet transformations to reduce the dimensionality of approximately 2,300 spectra of about 300 Be and B[e] stars in the vicinity of . Each spectrum was given a label corresponding to pure emission, absorption smaller than 1/3 of the emission peak, absorption greater than 1/3 of the emission peak, and no emission. These labels were then used to train a support vector machine <cit.> to classify the spectra into emission stars and normal stars. Although <cit.> were not explicitly concerned with the determination of inclination angles, their approach shares significant similarities with the present work. <cit.> searched dr14 of the APOGEE near-infrared survey using methods based on anomaly detection. A random forest algorithm <cit.> was trained, using a sample consisting of both synthetic and observed spectra, to create a matrix of similarity scores between each pair of spectra based on the likelihood that a given pair would end up in the same terminal branch of the random forest. The similarity matrix was then used as the input for a t-SNE algorithm <cit.> to reduce dimensionality and help with visualization. The spectra with the lowest similarity scores and their nearest neighbors were then manually inspected yielding (among other finds) 40 previously undiscovered, classical Be stars.<cit.> found 1,162 Be star candidates in dr7 of the LAMOST survey by searching foremission in the spectra of early type stars using the ResNet convolutional neural network <cit.>, combined with a series of tests to remove confounding objects such as B[e] and Herbig stars. A follow up series of tests on the Be star candidates yielded 183 previously undiscovered classical Be stars. The present work seeks to extend machine learning as applied to the Be stars to include the automatic determination of quantitative information from their spectra. As it is well known that the morphology of theline strongly reflects how the star-disk system is viewed (see Figure <ref> and the discussion below), we target the extraction of the stellar viewing inclination of the central star from a single, continuum normalized spectrum of a moderate resolution centred on . The performance of three supervised machine learning algorithms, each trained on synthetic spectra, are compared: neural networks tasked with regression, neural networks tasked with classification, and support vector regression. Each algorithm is then applied to an observed sample of Be starspectra to judge performance in realistic cases. §.§ The inclination angle and its relationship to Ha morphology The inclination angle, i, is the angle between a star's axis of rotation and an observer's line of sight and ranges from 0^∘ to 90^∘ for pole-on and edge-on observations respectively (see Figure <ref>). It is usually assumed that stellar rotation axes are randomly oriented in space which leads to an expected p(i) di=sin i di distribution for any observed sample of stars <cit.>. <cit.> cast doubt on the assumption of random inclinations by finding significant spin axis alignment for the red giant stars in the old open clusters NGC 6791 and NGC 6819 using asteroseismology. <cit.> investigated 48 oscillating red giant stars with masses in the range of 1.1–1.7 M_⊙ and found that about 70 percent of the stars in each cluster showed a strong level of alignment. The probability that these alignments arose by chance from an underlying random distribution was calculated to be below 10^-7 for NGC 6819 and below 10^-9 for NGC 6791 <cit.>. 
Conversely, the inclination angle distribution obtained from a sample of 36 field red giants showed no significant spin alignment <cit.>. Hydrodynamical simulations <cit.> and numerical simulations of the effects of shear versus compressive turbulence <cit.> suggest that if a significant fraction of a star cluster's initial kinetic energy is rotational, then stars can form in a cluster with significant correlations in the direction of their rotation axes that can persist over Gyr timescales.The strongly correlated spin alignments found by <cit.> have been contested by <cit.> and <cit.>, who attributed them to a combination of systematic bias that favoured low inclination angles and neglecting to account for the impossibility of measuring inclination angles near either 0^∘ or 90^∘. A re-analysis by <cit.> of the spin alignments of both NGC 6819 and NGC 6791 found the inclination angle distribution of both open clusters to be consistent with a sin i distribution upon taking these effects into account. <cit.>'s analysis supported <cit.>'s conclusion that the distribution of the field red giant stars was isotropic, but was unable to test the conclusions on spin-alignment in open clusters because their method is unsuitable for red clump stars. <cit.> urged caution in accepting strongly aligned stellar spins in open clusters and highlighted the need for a dedicated study using another method. Be stars offer an alternative avenue to search for correlated spin axes in young open clusters. This is because Be stars are bright, common (≈ 20 percent of main sequence B stars are Be stars <cit.>), and their inclination angles can be reliably determined spectroscopically (see below). Also, for bright and nearby Be stars, i can be reliably determined using long baseline optical interferometry (LBOI) observations of the star-disc system <cit.>. Additionally, there are methods based on gravitational darkening, in which rapid rotation causes the stellar intensity to vary with latitude <cit.>, and i is extracted from detailed spectral synthesis <cit.>. <cit.> showed that spectral synthesis ofcan accurately determine the orientation of a Be star's disc, and as the disc is in the star's equatorial plane, the inclination of the star itself. The method of <cit.> leverages the fact that the morphology of a Be star'semission-line profile varies strongly with inclination even if the disc size and density structure is held constant. This is shown in Figure <ref>; here low inclinations give rise to singly-peaked emission in , moderate inclinations result in doubly-peaked emission, and high inclinations result in doubly-peaked lines with deep shell absorption <cit.>. By comparing a single observedprofile to a library of synthetic spectra computed using theandsuite of codes <cit.>, <cit.> were able to recover the inclination angles of 11 Be stars to within ± 10^∘ as compared to LBOI determined inclinations. <cit.> further test the Hα technique using a sample of Be stars with inclinations available from gravitational-darkening studies <cit.> and find good agreement between the two methods. §.§ Organization Section <ref> describes the synthetic Be star spectra used to train the machine learning algorithms. Section <ref> describes the three machine learning algorithms and the associated performance metrics by which they have been evaluated. 
Section <ref> details the procedure for optimizing user-defined model parameters, called hyper-parameters, that must be tuned for each algorithm in order to ensure optimal performance and discusses the accuracy achieved in the synthetic test samples. The results of testing the trained algorithms on observedprofiles for a sample of 92 Be stars, with available inclination angle determinations from gravity darkening <cit.> andprofile fitting <cit.>, are found in Section <ref>. Section <ref> contains a case study of using the trained algorithms on 11 nearby Be stars with well-constrained inclination angle determinations from LBOI. A discussion of our results follows in Section <ref>. § SYNTHETIC TRAINING SPECTRA In order to train machine learning algorithms to determine the inclination angles of Be stars, large libraries of synthetic spectra were generated centred on the vacuum value of , 6564.6Å. Each individual modelprofile is represented by 201 continuum-normalized flux values covering the region ± 1000 km s^-1 from line centre. One library ofline profiles, corresponding to a range of equatorial disc density models, was generated for each of the central B star masses given in Table <ref>, which correspond to spectral types ranging from approximately B9V to B0.5V.§.§ Creating the libraries of synthetic spectra The libraries of Be starline profiles were computed by <cit.> using theandsuite of codes <cit.>. <cit.>'s stellar evolutionary models for a core hydrogen fraction of X = 0.3, which corresponds approximately to the middle-age main sequence, were used to generate the radii, luminosities, and effective temperatures of the central B stars. Table <ref> details the stellar properties adopted. outputs the radiative equilibrium temperatures in the Be star's circumstellar disc given the central B star's photoionizing radiation field and density structure as inputs <cit.>. If the distance from the rotation axis of the central B star is R, the central B star's radius is R_*, the distance above the equatorial plane is Z, and the disc scale height is H, the density structure of the disc is parameterized byρ(R,Z) = ρ_0 (R_*/R)^n e^-(Z/H)^2,where ρ_0 and n are free parameters that can be adjusted to match observations.The scale height, H, of a disc in vertical hydrostatic equilibrium is given by H = [c_ s(T_0)/V_ K(R)] R,where temperature T_0 = 0.6 T_ eff, c_ s is the speed of sound at T_0, and V_ K(R) is the Keplerian orbital speed at distance R <cit.>.For each central B star mass given in Table <ref>, 165 different discs were considered, comprised of 15 values of ρ_ 0 distributed evenly in log-space between 10^-12 gcm^-3 and 10^-10 gcm^-3 and 11 values of n between 1.5 and 4 in increments of 0.25 <cit.>. Amodel was computed for each of the 165 permutations and then the hydrogen level populations computed bywere used byto compute individual H α line profiles.accomplishes this task by solving the radiative transfer equation along a series of rays directed at the observer <cit.>. The composite disc-plus-starprofile is computed in a unified way by incorporating the relevant boundary condition for each ray. Rays that terminate on the stellar surface use a Doppler-shifted, photosphericprofile for the upwind boundary; rays that pass through the disc but miss the star assume no incident radiation. This allows the computed profiles to be directly compared with observed profiles (after convolution to the correct spectral resolution). 
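For reference, the adopted density parameterization and hydrostatic scale height defined above are simple to evaluate numerically. The Python sketch below does so in cgs units; the assumed mean molecular weight and any stellar parameters passed to it are placeholders, not values taken from Table <ref>.

import numpy as np

k_B = 1.380649e-16   # Boltzmann constant, erg/K
m_H = 1.6735575e-24  # hydrogen mass, g
G = 6.674e-8         # gravitational constant, cm^3 g^-1 s^-2
mu = 0.6             # assumed mean molecular weight of the disc gas

def scale_height(R, T_eff, M_star):
    # H = (c_s / V_K) R evaluated at T_0 = 0.6 T_eff (second equation above).
    T_0 = 0.6 * T_eff
    c_s = np.sqrt(k_B * T_0 / (mu * m_H))
    V_K = np.sqrt(G * M_star / R)
    return (c_s / V_K) * R

def disc_density(R, Z, rho_0, n, R_star, H):
    # Power-law radial fall-off with a Gaussian vertical profile (first equation above).
    return rho_0 * (R_star / R) ** n * np.exp(-(Z / H) ** 2)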
Calculating theline profiles introduces two new parameters, R_ D, which is the outer radius of the disc and i, which is the inclination. Seven disc sizes, from 5 R_* to 65 R_* in steps of 10 R_*, and ten inclinations, from 0^∘ to 90^∘ in steps of 10^∘, were considered. Each of the 11 central Be star masses detailed in Table <ref> has an associated library containing 11,550 line profiles resulting in 127,050line profiles overall.§.§ Samples of synthetic spectraSeveral different samples ofspectra are used in this work, and the following naming conventions are employed. Previously in Section <ref>, a library of 11,550 synthetic spectra was created for each stellar mass in Table <ref>. This current section details the creation of a sample of ∼8,000 synthetic spectra from each of the libraries of synthetic spectra. Section <ref> describes how these samples of synthetic spectra are further divided into training, validation, and test sets. The training, validation, and test sets are used to optimize the algorithms' hyper-parameters in Section <ref> and to train the algorithms in Section <ref>. Once trained, the algorithms will be used to determine the inclination angles of two samples of observed spectra: the 92 star Zorec sample in Section <ref> and the 11 star NPOI sample in Section <ref>.To create a sample of synthetic spectra from a profile library, the desired number of spectra, n_ spec, is specified. Then, onlyspectra that have an average, absolute percentage difference from the reference photospheric profile (for the same mass) of 3 percent or more are selected randomly from the line profile library corresponding to a central B star of a given mass. Profiles too similar to the reference photospheric line profile are not included because they lack significant line emission (or shell absorption), and therefore poorly constrain the inclination angle. As line profiles within this 3 percent threshold are excluded from the sample, it is not possible to use all 11,550line profiles contained within a given library. This work uses a sample of ∼8,000line profiles for each central B star mass.Two additional parameters that need to be specified when creating a sample are the spectral resolution, ℛ, and the signal to noise ratio, S/N. If Δλ is the characteristic width of the instrumental profile, then the resolution of the spectra is defined as ℛ≡λ/Δλ. The signal to noise ratio is the ratio between the measured flux of the signal to that of the noise in the continuum adjacent to the line, i.e. S/N = 100 spectra will have 1σ error bar magnitudes equal to 1 percent of their corresponding flux measurements. The profiles were generated at ℛ = 10,000 and S/N = 25. The resolution was chosen because it matches that of the Zorec and NPOI samples of observed spectra in Section <ref>. Although the observed sample spectra have S/N ≳ 100, initial testing found that algorithms trained on S/N = 25 profiles outperformed algorithms trained on S/N = 100 profiles at predicting the inclination angles of observed Be stars, possibly because the algorithms trained at S/N = 100 were overspecialized to synthetic profiles and could not deal effectively with the deviations from those profiles exhibited by observed spectra.Figure <ref> shows several syntheticemission line profiles for a 4 M_⊙ Be star at ℛ = 10,000. 
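The two steps used to build a sample from a line-profile library, the 3 per cent selection threshold and the degradation to a target S/N, can be summarised in the following Python sketch; the array names, the exact normalisation of the percentage difference, and the random seed are our assumptions rather than a description of the actual sampling code.

import numpy as np

rng = np.random.default_rng(0)

def keep_profile(flux, phot_flux, threshold=0.03):
    # Retain a model profile only if its mean absolute fractional difference
    # from the reference photospheric profile exceeds the threshold.
    return np.mean(np.abs(flux - phot_flux) / phot_flux) > threshold

def degrade_to_snr(flux, snr=25.0):
    # Add Gaussian noise with 1-sigma amplitude equal to flux / (S/N),
    # matching the definition of S/N used in the text.
    return flux + rng.normal(0.0, flux / snr)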
Illustrated are a representative range of synthetic profiles for different choices of S/N, disc density parameters, and viewing inclinations, with the the upper-right panel showing a profile rejected for being too close to the underlying photospheric profile and within the 3 percent tolerance.§.§ Preprocessing the input spectraEach synthetic spectrum is stored in a 201-element vector containing the continuum-normalized, relative fluxes equally spaced in the interval ± 1,000km s^-1 about line centre in . These vectors of relative fluxes are used as input for both types of neural networks, regression and classification. However, unlike neural networks, support vector regression uses Euclidean distances (see Section <ref>), and vector elements with relatively large values (such as profiles with large emission peaks) will dominate the distance calculations. For this reason, we have scaled each of the samples of ∼8,000 synthetic spectra such that all elements have a mean of zero and unit standard deviation prior to use as input for support vector regression. Each observed spectrum was visually centered on the vacuum value of , λ_0. The wavelengths associated with each flux in a spectrum were converted to velocities using the Doppler formula relative to line centre, v/c = Δλ/λ_0, as theline covers only a narrow range of wavelengths. Compared to retaining the full wavelength dependence, this simplification results in errors that are at most a tenth of the assumed spectral resolution (i.e., 3km s^-1 compared to 30km s^-1 for R=10^4). The observed spectra were truncated to the range ± 1,000km s^-1 and the fluxes were interpolated so that each observed spectrumlies on the same 201 point velocity grid as the synthetic spectra. As with the synthetic spectra, these vectors of relative fluxes are used, directly, as inputs for both types of neural networks but are standardised to zero mean and unit standard deviation before being used as input to support vector regression. § ALGORITHMS AND PERFORMANCE METRICSThis work uses three types of supervised machine learning algorithms to learn the relationship between emission line profiles and i: neural networks tasked with regression, neural networks tasked with classification, and support vector regression. The algorithms are trained on grids of relative fluxes from synthetic Be star line profiles, in the vicinity of , and the trained algorithms are then used to determine i for observed Be stars. A performance metric is needed in order to quantify how well the relationship betweenemission line profiles and i has been learned. The performance metric used in this work is the root mean squared error (RMSE), defined as RMSE = (1/n∑_j=1^n(y_j - ŷ_j)^2)^1/2,where n is the number of Be star spectra in the sample, ŷ are the inclination angle determinations of our machine learning algorithms, and y are our target inclinations. Each spectrum in a sample has an associated target inclination known precisely from thecalculation. All sample spectra are uniformly distributed from 0^∘ to 90^∘ in steps of 10^∘. In Sections <ref> and <ref>, we calculate the RMSE performance of the machine learning algorithms on observed spectra. For observed spectra, the target inclinations, y, are the inclination angle determinations of another method (e.g.,profile fitting). §.§ Neural networksA neural network (NN) is a supervised machine learning algorithm comprised of computational units called nodes organized in layers. 
In feed-forward configuration, every node is a linear combination of the nodes in the preceding layer followed by an application of a non-linear activation function h. A single layer NN receives the 201 relative Hα fluxes as an input vector, x, and returns a scalar output variable, h(x,w), via the equation h(x,w) ≡ h(∑_j=0^N w_j x_j) by finding w, the vector of weights, that minimizes a loss function which quantifies the discrepancy between the target values and the output values determined by the NN during training <cit.>. This formulation of the NN equation implicitly includes the bias, a constant offset term, as the element w_0 by defining x_0 ≡ 1. Information about the loss functions used in this work can be found in Section <ref>. Although regression is the natural task of a machine learning algorithm that outputs a continuous scalar such as i, this work uses both regression as well as classification NNs[The regression and classification NNs were implemented using MATLAB R2021a functions and , respectively.]. The outputs of classifiers are not normally directly comparable to those of regressors. However, by choosing an activation function whose output has a probabilistic interpretation, a weighted average can be used to transform a classification NN's output to a continuous scalar, which can then be compared with the output of the regression algorithms using the same performance metric. Although a full discussion is beyond the scope of this work, the authors are aware that the validity of this approach, which requires interpreting the output of the classification NNs as measures of model confidence, is contested <cit.>.

The NNs tasked with regression use the hyperbolic tangent function[The hyperbolic tangent activation function can experience a problem known as vanishing gradients, particularly in NNs with many hidden layers <cit.>. We compared our NNs against otherwise identical NNs using the ReLU, h(a) = max(0, a), and leaky ReLU, h(a) = max(0.01a, a), activation functions to ensure that vanishing gradients were not occurring.] h(a) = (e^a - e^-a)/(e^a + e^-a), as the activation function for each of their layers. In the above formulation, a represents an arbitrary input. The NNs tasked with classification use two different activation functions. All of the layers other than the output layer use the hyperbolic tangent function, while the output layer uses the softmax function, h(a_k) = e^a_k/∑_j e^a_j, where a_k again represents an arbitrary input. The sum in the denominator is taken over the classes (which are the inclination bins 0^∘ to 90^∘ in steps of 10^∘ in this case) such that the denominator is a normalizing factor. This activation function was chosen because it assigns a probability to the likelihood that a given spectrum corresponds to each of the inclination classes. While regression NNs are the natural choice for this work (because i is a continuous scalar), our primary reason for also including classification NNs is exploratory: we are interested in whether the softmax function outputs would be tightly clustered around the bins nearest to the target inclination or more flatly distributed.

§.§ Support Vector Regression

Support vector regression (SVR) is a supervised machine learning algorithm that works by fitting a hyper-plane, with as many dimensions as the dataset contains features, to the data points. The SVR algorithm uses only a subset of the training data; data points sufficiently close to the hyper-plane (within a hyper-cylinder of radius ε) are ignored <cit.>.
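As a concrete, simplified illustration, an ε-insensitive support vector regressor operating on standardised flux vectors can be set up in a few lines with scikit-learn; this is a Python stand-in for the MATLAB implementation used here, and both the per-feature standardisation and the translation of the kernel scale into scikit-learn's gamma are our assumptions.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def build_svr(epsilon, C, kernel_scale):
    # RBF-kernel SVR (the kernel ultimately adopted in this work); dividing
    # the inputs by a kernel scale is equivalent to gamma = 1 / kernel_scale**2
    # for this kernel.
    return make_pipeline(
        StandardScaler(),
        SVR(kernel='rbf', C=C, epsilon=epsilon, gamma=1.0 / kernel_scale ** 2),
    )

# Usage: model = build_svr(0.5, 50.0, 20.0).fit(X_train, i_train)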
SVR was chosen for this work because it is deterministic, faster to train than NNs, and effective in high dimensional feature-spaces[SVR was implemented using theR2021a function .]. SVR seeks to minimize1/2 || w ||^2 +C∑_j=1^N(ζ_j + ζ_j^*)with respect to || w ||^2, subject to constraintsy - ŷ≤ε + ζ_jŷ - y ≤ε + ζ_j^*ζ_j, ζ_j^* ≥ 0,where ||w|| is the Euclidean norm of the vector of weights. Here C is a regularization parameter, y are the target values of i, ŷ are the values of i predicted by the model, and ζ^* and ζ are distances beginning at the border of the ε-insensitive region and extending above and below it respectively <cit.>. §.§ Committees of neural networks NNs initialized with random weights and biases can become trapped in poor local minima during training <cit.>. Different NNs trained on the same inputs will, in general, have variance associated with their outputs even if the NNs are identically constructed <cit.>. Furthermore, the RMSE is somewhat sensitive to outliers. To address these concerns, we train committees of independent NNs and retain only the median performing member. Two committees of five neural networks were trained for every central stellar mass listed in Table <ref>; one committee is comprised of NNs tasked with regression and the other, tasked with classification. All 10 of the NNs associated with each central stellar mass are trained on the same sample of synthetic spectra (see Section <ref> for details). Our approach differs from the commonly employed technique of bootstrap aggregation, whereby each neural network in the committee is trained on a bootstrapped sample of the original training sample and the overall determination of the committee is the average determination of its constituent members <cit.>. The advantage of bootstrap aggregation is that under ideal conditions (the errors of the committee members are uncorrelated and have a mean of zero), the average error of a committee falls like the reciprocal of the number of its constituent members <cit.>. Unfortunately, these idealized conditions are not met in this work; the errors of the NNs are highly correlated and do not have a mean of zero (see Sections <ref> and <ref>), and we have instead chosen a committee structure that prioritizes outlier removal. § HYPER-PARAMETER OPTIMIZATIONThe performance of machine learning algorithms on a given task varies depending on user-defined hyper-parameter values. Since the optimal values of these hyper-parameters are difficult to guess a priori and can significantly impact performance, they must be searched for <cit.>. The NN hyper-parameters that were optimized are the number of hidden layers (n_l) and the number of nodes per layer (n_n). The SVR hyper-parameters that were optimized are the size of the ε-insensitive region (ε), the regularization constant (C), and the kernel scale (KS). Additionally, all three algorithms have hyper-parameters that were chosen without being explicitly optimized in order to save computation time. These hyper-parameters were assigned standard choices and can be found in Section <ref>. The hyper-parameters of the machine learning algorithms were optimized independently for each of the Be star masses in Table <ref>. Each Be star mass has an associated sample of 8,000 synthetic H α profiles of ℛ = 10,000 and S/N = 25. As there are 11 samples of synthetic profiles and three machine learning algorithms, this amounts to 33 sets of hyper-parameters to be optimized in total. 
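Both the hyper-parameter search that follows and the final training stage rely on the committee-and-median-member strategy described above; a minimal sketch of that strategy is given below, using scikit-learn's MLPRegressor as a Python stand-in for the MATLAB networks. The layer sizes, iteration limit and the use of a validation-set RMSE as the selection criterion are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

def median_committee_member(X_train, y_train, X_val, y_val, n_members=5):
    # Train n_members identically configured regression NNs from different
    # random initialisations and keep the member with the median RMSE.
    members, rmses = [], []
    for seed in range(n_members):
        net = MLPRegressor(hidden_layer_sizes=(6, 6), activation='tanh',
                           max_iter=2000, random_state=seed)
        net.fit(X_train, y_train)
        rmses.append(np.sqrt(np.mean((net.predict(X_val) - y_val) ** 2)))
        members.append(net)
    order = np.argsort(rmses)
    return members[order[len(order) // 2]]  # the median-performing member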
Although the goal of this work is to produce an automated means of determining i for observed Be stars from a single, medium-to-high resolution spectrum, the nature of the training process dictates that both the hyper-parameters and the parameters of the algorithms are optimized based on their ability to determine i for synthetic spectra. The performances reported in this and the following section should be seen in that context.§.§ Hyper-parameter optimization for NNsThe hyper-parameters that were optimized for both types of NNs are the number of nodes per layer (n_n) and the number of hidden layers (n_l). For NNs tasked with regression, the optimization scheme consists of searching over a grid, found from preliminary trials, that contains the following six values of n_n, n_n ∈{4, 5, 6, 8, 10, 12}, and two values of n_l, n_l ∈{1, 2}. To perform the search, a committee of five NNs were trained on the same sample for each combination of n_n and n_l on the grid. The performance of a given (n_n, n_l) pair is taken to be the RMSE of its median performing committee member. The optimal hyper-parameters are taken to be the (n_n, n_l) pair with the best performance. Figure <ref> shows the hyper-parameter optimization scheme applied to the 4 M_⊙ sample; here, the combination of two hidden layers of six nodes was found to be optimal. Then, this process was repeated for each of the remaining ten samples of synthetic profiles. For NNs tasked with classification, the optimization scheme is nearly identical to that of the NNs tasked with regression. The only differences are that preliminary trials found that the grid to be searched over contains the following seven values of n_n, n_n ∈{25, 30, 35, 40, 45, 50, 55}, and that the output of the classifier is a vector of probabilities that needs to be converted to an estimate of i using a weighted average before a performance can be assigned via the RMSE. The optimal hyper-parameter combinations for both types of NNs are summarized in Table <ref>. While the performance always increased going from one to two hidden layers, we chose to limit the NN depth to two hidden layers because preliminary testing found that adding a third hidden layer rendered computation times prohibitive for only minimal gains in performance.§.§ Hyper-parameter optimization for SVRThe hyper-parameters that were optimized for SVR are the ε-insensitive region (ε), the regularization constant of Equation (<ref>) (C), and a scaling factor that the input matrix is divided by called the kernel scale (KS). For SVR, the optimization scheme consists of searching over combinations of ε, C, and KS that were drawn randomly in log-space from the ranges ε∈ [10^-1/2, 10^1/2] 3y̅/S/N√(lnN/N), C ∈ [10^-1/2, 10^1/2]|y̅ + 3σ_y|, KS ∈ [15, 25],where ε and C are drawn from a range spanning an order of magnitude from the prescription of <cit.> and the range for KS was determined from empirical trials.A combination of hyper-parameters is generated by drawing each of the three hyper-parameters independently. Once a combination of hyper-parameters has been drawn, an SVR is trained on one of the samples of synthetic profiles and its performance is stored. This process is repeated 150 times for each of the 11 samples of synthetic profiles. The motivation for repeating the process n=150 times is that it will find a hyper-parameter combination in the ninety-eighth percentile with 95 percent confidence via solving 1-0.98^n = 0.95 for n≈ 148. 
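A minimal sketch of this random search is given below (Python); the generator seed and the way the sample statistics enter are written to mirror the ranges quoted above, but the snippet is illustrative rather than a transcription of the optimization code actually used.

import numpy as np

def draw_svr_hyperparams(y_train, snr=25.0, n_draws=150, rng=None):
    # Draw (epsilon, C, kernel scale) log-uniformly from ranges spanning
    # one dex around the Cherkassky & Ma-style prescriptions quoted above.
    rng = np.random.default_rng() if rng is None else rng
    N = len(y_train)
    y_bar, sigma_y = np.mean(y_train), np.std(y_train)
    eps_0 = 3.0 * y_bar / snr * np.sqrt(np.log(N) / N)
    C_0 = abs(y_bar + 3.0 * sigma_y)
    for _ in range(n_draws):
        yield (eps_0 * 10.0 ** rng.uniform(-0.5, 0.5),
               C_0 * 10.0 ** rng.uniform(-0.5, 0.5),
               10.0 ** rng.uniform(np.log10(15.0), np.log10(25.0)))

# With n_draws = 150, 1 - 0.98**150 ~ 0.95: a ~95 per cent chance of drawing
# at least one combination in the top two per cent.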
We determined that significant performance gains were unlikely to be achieved by raising the value of n by comparing the three best hyper-parameter combinations for each of the 11 samples and noting that the performance difference between the best and third best hyper-parameter combination was always below 0.1^∘. The optimal SVR hyper-parameter combinations, resulting from this process, are summarized in Table <ref>. § TRAINING THE ALGORITHMSThis section describes how the machine learning algorithms are trained on samples of synthetic spectra. By modifying their adaptive parameters, namely, weights and biases, training allows the machine learning algorithms to leverage patterns between the synthetic H α profiles and their associated inclination angles. The trained algorithms will then be used to determined the inclination angles of observed Be stars in the following two sections. The training process introduces additional hyper-parameters to those optimized in Section <ref>. These hyper-parameters, which have been assigned standard choices, are the loss function, training algorithm, and kernel function. While the work is organized such that Section <ref> is about hyper-parameter optimization and Section <ref> is about training the algorithms, the two sections should be seen as complimentary: the optimized hyper-parameters are used during training and the training process was used to optimize the hyper-parameters.§.§ loss functionsIn order to quantify the discrepancy between the determinations of a machine learning algorithm and their associated target inclinations during training, a loss function, E(w), is used. For a machine learning algorithm, learning consists of minimizing the loss function by modifying the adaptive parameters of the algorithm, namely the weights and biases. For both NNs tasked with regression and SVR, this work uses the mean squared error, E(w) = 1/n∑_j=1^n(y_j - ŷ_j)^2,where n is the number of Be star spectra in the sample, ŷ are the inclination angle determinations of our models, and y are the target inclinations, as the loss function. The mean squared error was chosen because it contains the same information as our performance metric, is a standard choice for regression problems, and because it has the property of heavily penalizing large errors.For NNs tasked with classification, this work uses cross-entropy,E(w) = -1/n∑_j=1^n∑_kp(y_jk)ln(p(ŷ_jk)) + (1-p(y_jk))ln(1-p(ŷ_jk)) ,where n, ŷ, and y are defined as in equation (<ref>) and the second sum is taken over the classes. The probability that a profile's target inclination belongs to a given class, p(y), can only have values of zero or one. If we consider a profile with an associated target inclination of y = 20^∘, then p(y) is equal to one when k corresponds to the 20^∘ class and is equal to zero otherwise. The inclination determinations of the classifiers are vectors of length k, whose components contain the probability that a profile belongs to each of the inclination classes, p(ŷ). While cross-entropy is the standard loss function used in classification NNs, recent work has cast doubt on its, supposed, superiority over the mean squared error <cit.>. Nevertheless, cross-entropy was chosen because using a squared loss function appears to impede the optimization of NNs with a softmax output layer <cit.> which is required for converting the probability that each profile belongs to a given inclination class into a scalar estimation of i (see Section <ref>). 
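The two loss functions, and the conversion of the classifier output into a scalar inclination, can be written compactly as below. This is an illustrative restatement rather than the training code itself; the array layout (profiles along rows, inclination classes along columns) and the argument class_centres, holding the class inclinations in degrees, are our own assumptions.

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """Mean squared error used by the regression NNs and SVR."""
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy_loss(p_true, p_pred, eps=1e-12):
    """Cross-entropy summed over inclination classes and averaged over profiles;
    p_true is one-hot (0 or 1), p_pred holds the predicted class probabilities."""
    p_pred = np.clip(p_pred, eps, 1.0 - eps)
    per_profile = np.sum(p_true * np.log(p_pred)
                         + (1.0 - p_true) * np.log(1.0 - p_pred), axis=1)
    return -np.mean(per_profile)

def classes_to_inclination(p_pred, class_centres):
    """Convert class probabilities to a scalar estimate of i by a
    probability-weighted average over the class centres (degrees)."""
    return np.sum(p_pred * class_centres, axis=1) / np.sum(p_pred, axis=1)
```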
§.§ Optimization algorithms In order to minimize the loss functions discussed in Section <ref>, an optimization algorithm is required. The two major classes of algorithms applicable to minimising continuous, differentiable functions of several variables (applicable to both types of NNs) are variants of either gradient descent or Newton's method. The optimization algorithm we use for the NNs tasked with regression, which is effectively an interpolation between these two major classes of algorithms, is theR2021a implementation of the Levenberg–Marquardt algorithm <cit.>. In order to reduce computational cost, the Levenberg–Marquardt algorithm uses 𝐉^𝐓𝐉, where 𝐉 is the Jacobian matrix, to approximate the Hessian matrix when performing Newton's method-like parameter updates; this approximation only holds for squared loss functions and is, therefore, incompatible with the NNs tasked with classification (which use cross-entropy as their loss function). The optimization algorithm we use for the NNs tasked with classification, which is an accelerated variant of gradient descent, is theR2021a implementation of the scaled conjugate gradient descent algorithm <cit.>. SVR results in a very large convex, quadratic programming (QP) optimization problem. As the optimization surface is convex, SVR cannot become trapped in poor local minima during training the way NNs can. The optimization algorithm we use for SVR, which breaks this large QP problem into a series of minimally sized QP problems that can be solved analytically, is theR2021a implementation of sequential minimal optimization <cit.>[The Levenberg–Marquardt, scaled conjugate gradient descent, and sequential minimal optimization algorithms were implemented using theR2021a functions , , and , respectively.].§.§ Kernel function For SVR, the kernel function, K, maps inputs to higher dimensional spaces, where a suitable hyper-plane can be found, before back-projecting to the original feature space <cit.>. This allows for a curved hyper-plane, which may provide a significantly better fit to the data than a straight one would. The radial basis functionK(x,x') = e^-||x-x'||^2,where x and x' represent feature vectors of relative fluxes, is the kernel function used in this work. The radial basis function was chosen because it is the standard kernel function used in SVR and has good performance over a wide variety of tasks <cit.>. §.§ Training, validation, and testing resultsFollowing standard practice in machine learning, we divided each of the samples of synthetic spectra (see Section <ref>) into disjoint training, validation, and testing datasets <cit.>: * The largest of the three datasets is the training set. For both types of NNs, we randomly assigned 70 percent of each sample of synthetic spectra to the training set; for SVR, this percentage is higher, at 90 percent, owing to the different validation methods used. For a machine learning algorithm, training consists of using the profiles of the training set to learn the model parameters that minimize E(w). The performances that the algorithms achieve on the training set are prone to being exceedingly optimistic due to over-fitting. Over-fitting occurs when an algorithm becomes over-specialized to the peculiarities of the training data (such as noise) and, as a result, generalizes poorly to new data.* The validation set is held back during training and is used to prevent over-fitting rather than to modify the adaptive parameters of the algorithms. 
For both types of NNs, we randomly assigned 15 percent of each sample of synthetic spectra to the validation set. Validation is performed by calculating E(w) on the validation set each time the model parameters are updated during training. If a NN is over-fitting, E(w) will decrease on the training set but increase on the validation set due to poor generalization to new data. If E(w) increases for six consecutive parameter updates on the validation set, the model is considered over-fitted and the training ends via a validation criterion known as early stopping <cit.>. The number of parameter-updates performed during training before over-fitting began is stored as the `best epoch' parameter for later use. For SVR, we used ten-fold cross validation whereby the training set is randomly divided into ten equally sized subsets called folds <cit.>. The SVR model is trained ten different times; each of these times a different fold is held back to be used as a validation set and the remaining nine folds are combined into a training set. Validation is performed by calculating E(w), averaged over the ten validation sets, each time the model parameters are updated during training. The number of parameter-updates that minimizes E(w) is stored as the `best epoch' parameter for later use. Ten-fold cross validation has the advantage that every profile in the training set contributes to both training and validation, but comes at the cost of significantly increased training times. This trade-off was ideal for SVR, which is relatively quick to train, but computationally prohibitive for the NNs. * The test set is held back during both training and validation and is used to test the trained algorithms' performance on previously unseen data. Any profile that did not end up in either the training or validation set was assigned to the test set; this amounted to 15 percent of each sample of synthetic spectra for both types of NNs and 10 percent for SVR. Testing is performed by calculating the RMSE performance of an algorithm that was trained for a number of parameter updates defined by its associated `best epoch' parameter.In this work, the performance of an algorithm on synthetic spectra (in Tables <ref>, and <ref>, and in Figure <ref>) always refers to the test set performance.Tables <ref> and <ref> summarize the RMSE performance of the three machine learning algorithms on the test samples of synthetic spectra for each mass bin. There is a trend that the RMSE of all three machine learning algorithms tends to worsen as stellar mass increases until it plateaus at around 9–10 M_⊙. The two regression algorithms outperformed the NNs tasked with classification for every stellar mass considered. The NNs tasked with regression outperformed SVR for masses between three and seven M_⊙ whereas SVR outperformed the NNs tasked with regression for eight and nine M_⊙ spectra; the performance of the two regression algorithms is approximately equal at 10 M_⊙ and above. § PERFORMANCE ON OBSERVED PROFILESThe results of the previous section are an encouraging proof-of-concept. However, the method must still be shown to be effective on observational data. This section is concerned with testing the trained algorithms on an observed sample ofspectra consisting of 92 of the 233 galactic Be stars considered by <cit.>, which we call the Zorec sample following <cit.>. The stars of the Zorec sample were chosen based on the public availability of spectra in the region of . 
Spectra for 58 of the stars come from the BeSS spectral database[Operated at LESIA, Observatoire de Meudon, France:] <http://basebe.obspm.fr>, with the remaining 34 stars coming from the sample of <cit.> taken at the John Hall telescope at Lowell Observatory. Sources for individual stars can be found in Table 1 of <cit.>. The spectra typically have S/N∼100 and ℛ∼10^4, with the latter matching the training resolution. Every star in the sample has an inclination angle determination based on gravitational darkening <cit.> andprofile fitting <cit.>. More information on the 92 stars of the Zorec sample can be found by consulting <cit.> and the references therein. The <cit.> inclination angle determinations (referred to ashereafter) are based on gravity darkening whereby very rapid stellar rotation results in a latitude-dependent T_ eff <cit.>, causing the spectrum to vary with i. Figure <ref> shows the inclination angle distribution of the Zorec sample as determined by both gravity darkening (, left) andprofile fitting (, right). Although both distributions peak near 60^∘, there is a trend forto be higher for low inclinations and lower for high inclinations than . As a comparison betweenandon the stars of the Zorec sample has already been done <cit.>[<cit.> carefully discusses the apparent non-sin i distributions of bothand .], this section will focus on a comparison between the machine learning determinations of i and .All three trained machine learning algorithms discussed in Section <ref> were used to determine the inclination angles of the Zorec sample stars. These inclination angles were calibrated and then compared withto ascertain how effectively the different machine learning algorithms, trained on only synthetic spectra, can determine inclinations using observed spectra. To calibrate the machine learning inclinations, the mean of the distribution (i_ ML-i_ H α) was set to zero by adding a constant offset to i_ ML. Here i_ ML refers to inclinations determined using each of the three algorithms, NNs tasked with regression, NNs tasked with classification, and SVR. These calibration offsets are given in Table <ref> for each of the three machine learning algorithms. We note that two of these offsets, NNs tasked with regression and SVR, are quite small. NNs tasked with classification has the largest offset of -7.3^∘; however, even in this case, the offset is still less than most of the 1σ errors inas determined by <cit.>.Figure <ref> plots the inclination angle determinations of the three types of algorithms: NNs tasked with regression (i_ NN), NNs tasked with classification (i_ CNN), and SVR (i_ SVR), each versus the correspondingfor each of the 92 stars of the Zorec sample. The 1σ uncertainties inare as determined by <cit.>. We have adopted the algorithms' RMSE performance on synthetic spectra of an equivalent mass star (see Tables <ref> and <ref>) as their 1σ uncertainties. The Pearson correlation coefficients, r, were calculated for each of the three plots, as were least-squares fits to the data, including uncertainties from bootstrap Monte Carlo resampling done 100 times. Figure <ref> shows the distribution of the residuals between the inclinations determined by each algorithm andfor the stars of the Zorec sample (i.e, i_ NN - i_ H α, i_ CNN - i_ H α, and i_ SVR - i_ H α). Theses residuals are binned in widths of 5^∘. The blue curve in each plot shows a Gaussian distribution with the same mean and standard deviation as the distribution of the residuals for comparison. 
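The calibration offset and the summary statistics used in this comparison amount to a few lines of numpy. The sketch below is illustrative only; the function names and random seed are ours, while the 100 bootstrap draws follow the procedure described above.

```python
import numpy as np

def calibrate_offset(i_ml, i_halpha):
    """Constant offset that zeroes the mean of (i_ML - i_Halpha)."""
    return -np.mean(i_ml - i_halpha)

def pearson_r(x, y):
    """Pearson correlation coefficient between two inclination sets."""
    return np.corrcoef(x, y)[0, 1]

def bootstrap_linear_fit(x, y, n_boot=100, seed=1):
    """Least-squares line y = m x + b, with uncertainties on (m, b) from
    bootstrap Monte Carlo resampling (100 draws, as in the text)."""
    rng = np.random.default_rng(seed)
    slopes, intercepts = [], []
    n = len(x)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        m_b, b_b = np.polyfit(x[idx], y[idx], 1)
        slopes.append(m_b)
        intercepts.append(b_b)
    m, b = np.polyfit(x, y, 1)
    return (m, np.std(slopes)), (b, np.std(intercepts))
```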
Figures <ref> and <ref> show a clear hierarchy of performance; the NNs tasked with regression outperformed the NNs tasked with classification which in turn outperformed SVR. The NNs tasked with regression performed the best with a RMSE of 7.6^∘ and a correlation coefficient between i_ NN andof r=+0.91. Of the 92 stars, 78 (or 85 percent) were found to have (i_ NN-i_ H α) consistent with zero within the errors. The NNs tasked with classification had an intermediary performance with a RMSE of 10.9^∘ and a correlation coefficient of r=+0.78; 71 of the 92 stars (or 77 percent) were found to have (i_ CNN-i_ H α) consistent with zero within the errors. Finally, SVR performed notably worse than the NNs. with a RMSE of 13.9^∘. The correlation coefficient was found to be r=+0.64, and 47 of the 92 stars (or 51 percent) were found to have (i_ SVR-i_ H α) consistent with zero within the errors. Thus NNs tasked with regression are the optimal choice, providing an accuracy comparable to the directprofile fitting of <cit.>.§.§ Performance by mass and inclinationOverall, NNs tasked with regression best automates the method ofprofile fitting; however, it is possible that one of the other algorithms is better at determining i for Be stars with particular properties. Should this be the case, the best approach would not rely on a single “best" algorithm but would instead be an ensemble of two (or all three) algorithms whose outputs would be weighted by the properties of the star of interest. This subsection will look with more granularity at two such properties, stellar mass and inclination, with the goal of determining if either the NNs tasked with classification or SVR can outperform the NNs tasked with regression on particular mass and/or inclination ranges. We have designated the stars of the Zorec sample as either low mass (3–5M_⊙, N=36), medium mass (6–8M_⊙, N=24), or high mass (9–14M_⊙, N=32) and tabulated[We use the mass-spectral type calibration of <cit.>.] the algorithms' performances in Table <ref>. When tested on synthetic spectra (Section <ref>), there was a trend that the performance of all three algorithms tended to worsen as stellar mass increased until it plateaued around 9–10M_⊙ (see Tables <ref> and <ref>). With observed spectra, NNs tasked with regression performed similarly on both the observed spectra of low (RMSE = 7.0^∘) and medium mass (RMSE = 6.7^∘) stars, with performance worsening for the high mass stars (RMSE = 8.8^∘). NNs tasked with classification performed worse on the observed spectra of low mass stars (RMSE = 9.1^∘) compared to medium masses (RMSE = 8.0^∘), with their performance worsening for high mass stars (RMSE = 14.2^∘). SVR performed best on the observed spectra of low mass stars (RMSE = 13.4^∘) and performed similarly on both medium (RMSE = 14.2^∘) and high mass stars (RMSE = 14.3^∘). Ultimately, however, the NNs tasked with regression outperformed both of the other algorithms on all three mass ranges suggesting that an ensemble of the algorithms is not warranted based on mass. Turning now to inclination, we have designated the stars of the Zorec sample as either low i (0–30^∘, N=7), medium i (30–60^∘, N=41), or high i (60–90^∘, N=44) and tabulated the three algorithms' performances in Table <ref>. The small sample size of low inclination stars is unfortunate but not surprising because of the p(i)∼sini for randomly oriented spin axes <cit.>. 
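A quick way to see why the low-inclination bin is so sparse is to draw inclinations from the sin i distribution directly. The small sketch below is illustrative (function name and seed are ours) and uses the fact that cos i is uniformly distributed for randomly oriented spin axes.

```python
import numpy as np

def random_inclinations(n, seed=2):
    """Inclinations (degrees) for randomly oriented spin axes: p(i) di = sin(i) di
    on [0, 90 deg], i.e. cos(i) uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    return np.degrees(np.arccos(rng.uniform(0.0, 1.0, n)))

# Only ~13 per cent of a random sample lies below i = 30 deg (1 - cos 30 deg ~ 0.134),
# which is why the low-i bin of the Zorec sample is so thinly populated.
```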
The NNs tasked with regression performed best on low i observed spectra (RMSE = 5.8^∘) and similarly on both medium (RMSE = 8.0^∘) and high i stars (RMSE = 7.5^∘). The NNs tasked with classification performed the worst on low i (RMSE = 23.8^∘) observed spectra and similarly on both medium (RMSE = 9.2^∘) and high i stars (RMSE = 9.0^∘). The very poor performance of the NNs tasked with classification on low i observed spectra is the result of a small sample size (N=7) combined with the worst determination of i of any algorithm on the Zorec sample for the star HD 58050 (with i_ CNN - i_ H α = +63.1^∘); omitting HD 58050 improves the performance considerably (RMSE = 7.8^∘). SVR performed worse on low i (RMSE = 19.3^∘) than on medium i (RMSE = 12.7^∘) observed spectra with an intermediate performance for high mass stars (RMSE = 14.0^∘). The NNs tasked with regression outperformed both of the other algorithms on all three inclination ranges confirming that an ensemble of algorithms is not warranted for this task either.§.§ DiscussionWhile NNs tasked with regression and SVR performed similarly on synthetic spectra (Section <ref>), NNs tasked with regression performed significantly better than SVR at automating the results of <cit.>'sprofile fitting method on observed Be star spectra. It is also interesting to note that the NNs tasked with classification, the worst performer on synthetic spectra, actually outperformed SVR on observed Be star spectra. With an RMSE of 7.6^∘ and a Pearson coefficient of r=+0.91, the NNs tasked with regression are the clear choice to automate theprofile fitting method. The NNs tasked with regression's performance of RMSE = 7.6^∘ matches the average uncertainty in(Δ i_ H α = 7.6^∘) on the Zorec sample, again, suggesting excellent agreement between the two methods. An ensemble of specialists was considered in Section <ref>, but ultimately rejected because the NNs tasked with regression had the best performance on every subdivision of mass and inclination considered. § THE NPOI SAMPLEThis section is concerned with testing the calibrated algorithms on an observational sample of 11 bright, nearby, Be stars taken by the Naval Precision Optical Interferometer (NPOI) <cit.> withspectra available from <cit.>. The NPOI observations feature the spatially resolved circumstellar discs of their associated Be stars which allows for accurate determinations of their inclination angles. If a is the measured major axis of the disc and b is the minor axis, we can calculate the interferometrically determined inclination angle via i_ NPOI=cos^-1(b/a) on the simple geometric assumption that the disc is circular yet appears elliptical due to projection. While it is well established that Be star discs are thin <cit.>, they do have a small associated scale height. Therefore, interferometric observations of sufficient angular resolution can never yield b = 0 and we should take care to only use i_ NPOI for inclinations where it is appropriate. <cit.> examines when the cos^-1(b/a) relation is expected to fail and finds it to be when i_ NPOI > 80^∘. None of the 11 Be stars in the NPOI sample have an inclination value outside this range, so it is used for all the determinations of i_ NPOI in this work. 
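The interferometric inclination estimate is a one-line computation. The sketch below (the function name and the NaN convention are ours) simply applies i_NPOI = cos^-1(b/a) and flags the regime above ~80 deg where the thin-disc assumption is expected to fail.

```python
import numpy as np

def inclination_from_axis_ratio(a_major, b_minor, i_max_deg=80.0):
    """i_NPOI = arccos(b/a) for a circular disc seen in projection; returns NaN
    where the estimate exceeds ~80 deg and the relation is unreliable."""
    i = np.degrees(np.arccos(np.asarray(b_minor) / np.asarray(a_major)))
    return np.where(i <= i_max_deg, i, np.nan)
```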
More information about the 11 stars in the NPOI sample can be found by consulting Table <ref>.The main advantage of the NPOI sample is the high accuracy of the interferometrically-determined inclinations, which have average uncertainties that are about two and a half times smaller than the uncertainties of the inclinations determined by gravity darkening (5.5^∘ vs 14.5^∘). Furthermore, unlikeprofile fitting, the method of interferometry is entirely independent of thespectroscopy used to train the algorithms used in this work. The main disadvantage of the NPOI sample is its small size; as only the brightest and closest Be stars can be resolved interferometrically, the resulting sample of 11 profiles will necessarily be sensitive to outliers. The three machine learning algorithms, trained on syntheticprofiles, were tested on the NPOI sample of observed profiles and the results are compared with the inclination angle determinations made using both interferometry andprofile fitting. As before, the performance of an algorithm is taken to be the RMSE between its determinations of i and either i_ NPOI or(considered separately). Figure <ref> shows a comparison of the inclination angle determinations of our three machine learning algorithmsi_ NN, i_ CNN, and i_ SVR with those of i_ NPOI. The NNs tasked with regression performed the best with an RMSE of 12.3^∘. Only two of the 11 determinations of i differed by more than 10^∘: υCyg (33.7^∘) and γCas (13.6^∘). The NNs tasked with classification fared a little worse with a RMSE of 14.2^∘. Five of the 11 determinations of i differed by more than 10^∘, with the worst cases being those of υCyg (35.8^∘) and χOph (14.5^∘). SVR had the worst performance with a RMSE of 19.0^∘. Seven of the 11 determinations of i differed by more than 10^∘ with the worst cases being υCyg (43.6^∘) and oAqr (24.8^∘). Although the inclination determinations of all three algorithms were higher on average than i_ NPOI, the effect was small for SVR (μ_SVR = +0.4^∘) but larger for both types of NNs (μ_NN = +7.7^∘,μ_CNN = +5.6^∘). Figure <ref> also shows a comparison of the inclination angle determinations of i_ NN, i_ CNN, and i_ SVR with those of . The NNs tasked with regression performed the best with a RMSE of 8.5^∘. Four of the 11 determinations of i differed by more than 10^∘ with the worst disagreements being υ Cyg (14.3^∘) and 48 Per (14.2^∘). The NNs tasked with classification had a RMSE of 11.2^∘. Four of the 11 determinations of i differed by more than 10^∘ with the worst cases being those of O Aqr (16.4^∘) and υ Cyg (16.3^∘). SVR performed the worst with a RMSE of 15.8^∘. Five of the 11 determinations of i differed by more than 10^∘ with the most discordant cases being O Aqr (26.8^∘) and υ Cyg (24.2^∘). The relative performance of the three algorithms for the NPOI sample was the same as that of the larger Zorec sample: NNs tasked with regression performed the best, followed by NNs tasked with classification, and then SVR. All three algorithms performed better when compared tothan they did when compared to . This is not surprising because the synthetic profiles used to train the algorithms come from the same libraries as those used forprofile fitting. It is worth highlighting the influence of υCyg on the results for the NPOI sample as this star caused all three algorithms significant problems. The three worst discrepancies between an algorithm's determination of i andall occur for υCyg. 
When comparing an algorithm's determination of i with υCyg is the largest or second largest discrepancy in all three cases. While υCyg does have the smallest value ofin the NPOI sample (27.3^∘), the issue seems to be more complicated than the algorithms struggling with low inclinations because they performed well on both ηTau (33.0^∘) and βPsc (35.9^∘). When comparing with , omitting υCyg from the sample would cause the following performance changes: NNs tasked with regression would improve by about 40 percent (RMSE falling from 12.3^∘ to 7.2^∘), NNs tasked with classification to improve by about 30 percent (RMSE falling from 14.2^∘ to 9.6^∘), and SVR to improve by about 20 percent (RMSE falling from 19.0^∘ to 14.5^∘). With υCyg omitted, these resulting performances are similar to the performances on the full Zorec sample (see Section <ref>); this may suggest the υCyg determinations are anomalous. To resolve whether the inclination angle determinations for υCyg really are anomalous, we would ideally like to include more stars in the NPOI sample. Unfortunately, optical interferometry is only possible on the nearest and brightest Be stars, and the question of whether υCyg is an anomaly remains unanswered. Finally, when comparing against , the effect of omitting υCyg from the NPOI sample results in a smaller performance increase of approximately 10 percent across all three algorithms.§ CONCLUSIONSThree supervised machine learning algorithms were trained exclusively on synthetic, Be starspectra computed with thecode suite to be able to extract an estimate of the central B star's inclination angle from a single, observed,flux profile and the star's spectral type. The algorithms tested were neural networks tasked with regression, neural networks tasked with classification, and support vector regression. When applied to a large (N∼ 100) observed sample of Be star spectra <cit.>, neural networks tasked with regression performed best, yielding an inclination accuracy of RMSE=7.6^∘ which is comparable to that obtained by direct model profile fitting of theline. Neural networks tasked with classification were an intermediate performer (RMSE=11^∘) and support vector regression performed significantly worse (RMSE=14^∘). During the training and hyper-parameter optimizations, it was found that algorithms trained on low S/N=25profiles yielded much better results compared to those trained on higher S/N profiles when applied to the real,spectra of Be stars. We speculate that the wider variation among the lower S/N synthetic spectra, coupled with the large training samples, allowed the algorithms to better deal with natural variations in observed spectra that are not captured by the models. Training on synthetic data has the advantage that cases rare in the general population (in this case, low inclination systems as p(i) di=sin i di) can be incorporated into the training, as long as over-specialization of the algorithm to purely synthetic data can be avoided. An interesting avenue for future work is testing how the optimal S/N varies with network depth. 
Further along these lines, we are testing the viability of training deep, convolutional neural networks on images of line profiles (rather than 1D vectors of relative fluxes) to determine the inclination angles of observed Be stars. Finally, future work will focus on further extending the quantitative analysis of Be star spectra by training neural networks to extract v sin i estimates from the relevant portions of Be star spectra by focusing on, for example, the observed profiles of He i 4471 Å and Mg ii 4481 Å. We feel that this problem is also very amenable to training with synthetic line profiles generated with the code suite. Combined with this future work, the inclination finding neural networks will allow equatorial stellar rotation velocities to be directly measured from moderate-to-high S/N spectra of sufficient resolution. § ACKNOWLEDGEMENTS The authors would like to thank the anonymous referee for thoughtful feedback. B. D. Lailey acknowledges support from the University of Western Ontario's physics and astronomy department. T. A. A. Sigut acknowledges support, in the form of a Discovery Grant, from the Natural Sciences and Engineering Research Council of Canada. § DATA AVAILABILITY The trained neural networks tasked with regression are available to download at <https://github.com/bryanlailey/Be_inclination> and use in the MATLAB (R2020a or later) programming environment. The observed profiles of the Zorec and NPOI samples and the 4 M_⊙ library of synthetic spectra are also available there. | http://arxiv.org/abs/2310.18437v1 | {
"authors": [
"B. D. Lailey",
"T. A. A. Sigut"
],
"categories": [
"astro-ph.SR"
],
"primary_category": "astro-ph.SR",
"published": "20231027192355",
"title": "Inclination Angles for Be Stars Determined Using Machine Learning"
} |
=1^1Department of Physics & Astronomy, University of the Western Cape, Cape Town 7535, South Africa ^2Institute of Cosmology & Gravitation, University of Portsmouth, Portsmouth PO1 3FX, United Kingdom ^3National Institute for Theoretical & Computational Sciences (NITheCS), Cape Town 7535, South Africa In the pursuit of understanding the large-scale structure of the Universe, the synergy between complementary cosmological surveys has proven to be a powerful tool. Using multiple tracers of the large-scale structure can significantly improve the constraints on cosmological parameters. We explore the potential of combining the Square Kilometre Array Observatory (SKAO) and the Dark Energy Spectroscopic Instrument (DESI) spectroscopic surveys to enhance precision on the growth rate of cosmic structures. We employ a multi-tracer Fisher analysis to estimate precision on the growth rate when using pairs of mock surveys that are based on SKAO and DESI specifications. The pairs are at both low and high redshifts. For SKA-MID, we use the HI galaxy and the HI intensity mapping samples. In order to avoid the complexities and uncertainties at small scales, we confine the analysis to scales where linear perturbations are reliable. The consequent loss of signal in each individual survey is mitigated by the gains from the multi-tracer. After marginalising over cosmological and nuisance parameters, we find a significant improvement in the precision on the growth rate.^[email protected]^[email protected]^[email protected] the growth rate on linear scales by combining SKAO and DESI surveys Simthembile Dlamini^1 ^,^a, Sheean Jolicoeur^1 ^,^b, Roy Maartens^1,2,3^,^c February 2022 ================================================================================ § INTRODUCTION Einstein's theory of General Relativity (GR)andmodified gravitytheories (see e.g. <cit.>) prescribe the relation between peculiar velocities and the growth of large-scale structure. Peculiar velocities generate redshift-space distortions (RSD) in the power spectrum, which consequently provide a powerful probe for testing theories of gravity via the linear growth rate f=-lnln D/ln(1+z), where D(z)= δ(z, k) /δ(0, k) and δ is the matter density contrast. Here we assume that thegrowth rate is scale-independent on linear scales. We confine our analysis to scales where linear perturbation theory is accurate, using a conservative k_ max. Although this leads to a significant loss in signal, it has the advantage that we can avoid the theoretical complexities and uncertainties involved in the modelling of small-scale RSD.Precision measurements of RSD require the redshift accuracy of spectroscopic surveys.Currently, one of the best constraints on the growth indexis from the extended Baryon Oscillation Spectroscopic Survey (eBOSS)survey, Data Release 14 Quasar <cit.>: γ≡ln f(z)/lnΩ_m(z) = 0.580± 0.082 .This is consistent with the standard value γ=0.55, which is predicted by GR in the ΛCDM model. This value of γis also a good approximation for simple models of evolving dark energy, whose clustering is negligible <cit.>. Statistically significant deviations from γ=0.55 could indicate either non-standard dark energy in GR or a breakdown in GR itself. The next generation of multi-wavelength spectroscopic surveys (e.g. <cit.>) promises to deliver high-precision measurements of RSD, using complementary types of dark matter tracers. 
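To make the role of the growth index concrete, the sketch below evaluates f(z) = Ω_m(z)^γ assuming a flat ΛCDM background; the fiducial Ω_m0 = 0.31, the function names and the example redshift are illustrative choices, not values taken from this work.

```python
import numpy as np

def omega_m(z, om0=0.31):
    """Matter density parameter in flat LambdaCDM (om0 ~ 0.31 is illustrative)."""
    e2 = om0 * (1.0 + z) ** 3 + (1.0 - om0)
    return om0 * (1.0 + z) ** 3 / e2

def growth_rate(z, gamma=0.55, om0=0.31):
    """Growth rate f(z) = Omega_m(z)^gamma; gamma = 0.55 is the GR/LambdaCDM value."""
    return omega_m(z, om0) ** gamma

# Shift in f at z = 0.5 if gamma moves by the quoted eBOSS 1-sigma error (0.082):
z_example = 0.5
delta_f = growth_rate(z_example, 0.55 + 0.082) - growth_rate(z_example, 0.55)
```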
The effect of linear RSD on the power spectrum is degenerate with the amplitude of the matter power spectrum and with the linear clusteringbias. This degeneracy can be broken by using information in the multipoles of the Fourier power spectrum (see e.g.<cit.>), or by using the angular power spectrum and including cross-bin correlations <cit.>.By combining information from different tracers, the multi-tracer technique <cit.> can significantly improveconstraints on the growth rate <cit.>. We use Fourier power spectrain the flat-sky approximation. We perform a simple Fisher forecast on pairs of next-generationspectroscopic surveys at low and at higher redshifts. The low-z samples are similar to the Dark Energy Spectroscopic Instrument (DESI)Bright Galaxy Sample (BGS) <cit.> and the Square Kilometer Array Observatory (SKAO) HI galaxies sample or the HI intensity mapping (IM) Band 2 sample <cit.>.For the higher-z samples, we use samples similar to the DESIEmission Line Galaxies (ELG) andSKAO Band 1 IM samples.§ MULTI-TRACER POWER SPECTRAIn redshift space, the positions of observed sources are made up of two parts. The firstis due to the background expansion of the universe, and the second is due to the peculiar velocities of the sources. Peculiar velocities are the result of the gravitational effect of local large-scale structure, and they induce shifts in the redshift-space positions of the sources. On large scales,linear RSD produce an increase in clustering. For a given tracer A of the dark matter distribution, the observed density contrast at linear order is Δ_A(z,n̂) = b_A(z) δ(z,n̂)_ - (1+z)/H(z) n̂·∇[n̂· v(z,n̂)]_ ,where b_A is the linear bias,v is the peculiar velocity, and n̂ is the unit vector in the line of sight direction of the observer. In the flat-sky approximation (fixed n̂), the Fourier transform of (<ref>) givesΔ_A(z,k) = [b_A(z) + f(z)μ^2] δ (z,k) μ = k̂·n̂ .Here weused the first-order continuity equation ∇· v = - H/(1+z)f δ .The tree-level Fourier power spectra are then defined by ⟨Δ_A(z,k) Δ_B(z,k') ⟩ = (2π)^3 P_A B(z,k) δ^D(k+k') .By (<ref>),P_A B(z,k) = P_A B(z,k,μ) = [b_A(z) + f(z)μ^2][b_B(z) + f(z)μ^2] P(z,k) ,where P is the linear matter power spectrum (computed from CLASS <cit.>). We can split it into a shape function 𝒫 and an amplitude parameter σ_8,0 as:P(z, k) = σ_8,0^2 𝒫(z, k) . Note that in general there is a scale-dependent cross-correlation coefficient, 0<r≤1, that multiplies the P_AB in (<ref>)<cit.>. On the large, linear scales that we consider, it is expected that r can be taken to be 1 (e.g. <cit.>).§.§ Sample specificationsWe consider mock samples similar to the following spectroscopic samples:*galaxies: DESI BGS and ELG <cit.> and SKAO Band 2 HI galaxies <cit.>.*intensity mapping: SKAO HI IM Band 1,2 <cit.>. <ref>, based on <cit.>, shows the sky and redshift coverage of the individual and overlapping samples, together with the survey time for the HI samples. For the overlap sky areas, we assume nominal values. 
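The tree-level spectra defined above have a very simple numerical form. The sketch below (function names are ours) evaluates the Kaiser factors b_A + f μ² for a pair of tracers; the linear matter power spectrum is taken as an external input, which in the paper is computed with CLASS.

```python
import numpy as np

def tracer_kernel(b, f, mu):
    """Linear redshift-space kernel b + f mu^2 for a single tracer."""
    return b + f * mu ** 2

def p_cross(b_A, b_B, f, mu, p_matter):
    """Tree-level (cross) power spectrum P_AB(k, mu) = (b_A + f mu^2)(b_B + f mu^2) P(k).
    `p_matter` is the linear matter power spectrum tabulated on the k grid;
    the amplitude/shape split P = sigma8_0^2 * script_P can be applied to it upstream."""
    return tracer_kernel(b_A, f, mu) * tracer_kernel(b_B, f, mu) * np.asarray(p_matter)
```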
For the linear clustering biases b_A, we use one-parameter models where the redshift evolution is assumed known, as suggested by <cit.>. For the DESI-like samples we use <cit.>: b_g(z) = b_g0/D(z), with fiducial values b_g0 = 1.34 (BGS) and b_g0 = 0.84 (ELG). For the SKAO-like HI galaxy sample, we use <cit.>: b_g(z) = b_g0 (1 + 0.880 z - 0.739 z^2), with fiducial value b_g0 = 0.625. For HI IM, we use a fit based on simulations <cit.>: b_H(z) = b_H0 (1 + 0.693 z - 0.046 z^2), with fiducial value b_H0 = 0.842. The background brightness temperature of HI IM is modelled via the fit given in <cit.>: T̅_H(z) = 0.0559 + 0.2324 z - 0.0241 z^2 mK. §.§ Noise For galaxy surveys, the noise that affects the auto-power spectrum measurement is the shot noise (assumed to be Poissonian): P^shot_gg(z) = 1/n̅_g(z), where n̅_g is the comoving background number density. The total signal for the galaxy auto-power spectra is P̃_gg(z,k,μ) = P_gg(z,k,μ) + P^shot_gg(z). <ref> shows the fiducial clustering biases, number densities and brightness temperature for all the samples. There is shot noise in HI IM surveys – but on the linear scales that we consider, this shot noise is much smaller than the thermal noise (see below) and can be safely neglected <cit.>. For the cross-power spectra, the cross shot-noise may be neglected if the overlap of halos hosting the two samples is negligible. This is shown to be the case for BGS × IM in <cit.> (see also <cit.>). We assume that it is a reasonable approximation in the cases ELG × IM and BGS × HI galaxies, so that P_gH^shot ≈ 0 ≈ P_gg'^shot. (Note that we do not consider the multi-tracer case HI galaxies × HI IM.) The thermal noise in HI IM depends on the sky temperature in the radio band, the survey specifications and the array configuration (single-dish or interferometer). For the single-dish mode of SKAO-like IM surveys, the thermal noise power spectrum is <cit.>: P_HH^therm(z) = [Ω_sky/(2 ν_21 t_tot)] (1+z)^2 [r(z)^2/H(z)] [T_sys(z)/T̅_H(z)]^2 (1/N_d), where ν_21 = 1420 MHz is the rest-frame frequency of the 21 cm emission, t_tot is the total observing time, and the number of dishes is N_d = 197 (with dish diameter D_d = 15 m). The system temperature is modelled as <cit.>: T_sys(z) = T_d(z) + T_sky(z) = T_d(z) + 2.7 + 25 [400 MHz (1+z)/ν_21]^2.75 K, where T_d is the dish receiver temperature given in <cit.>. The total signal is then P̃_HH(z,k,μ) = P_HH(z,k,μ) + P_HH^therm(z). §.§ Intensity mapping beam and foregrounds HI IM surveys in single-dish mode have poor angular resolution, which results in power loss on small transverse scales, i.e. for large k_⊥ = (1-μ^2)^1/2 k. This effect is typically modelled by a Gaussian beam factor <cit.>: D_beam(z,k,μ) = exp[-(1-μ^2) k^2 r(z)^2 θ_b(z)^2/(16 ln 2)], with θ_b(z) = 1.22 λ_21 (1+z)/D_d. HI IM surveys are also contaminated by foregrounds much larger than the HI signal. Since these foregrounds are spectrally smooth, they can be separated from the non-smooth signal on small to medium scales. However, on very large radial scales, i.e. for small k_∥ = μ k, the signal becomes smoother and therefore the separation fails. A comprehensive treatment requires simulations of foreground cleaning of the HI signal (e.g. <cit.>). For a simplified Fisher forecast, we can instead use a foreground avoidance approach by excising the regions of Fourier space where the foregrounds are significant. This means removing large radial scales, which can be modelled by the foreground-avoidance factor: 𝒟_fg(k, μ) = Θ(|k_∥| - k_fg), equal to 1 for |k_∥| > k_fg and 0 for |k_∥| ≤ k_fg, where Θ is the Heaviside step function.
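The noise, beam and foreground factors above lend themselves to a compact numerical implementation. The following Python sketch is illustrative only: the unit bookkeeping (T_sys in K versus T̅_H in mK), the receiver-temperature argument t_dish and all function names are our own assumptions, and r(z), H(z), Ω_sky and t_tot must be supplied externally in mutually consistent units.

```python
import numpy as np

NU21 = 1420.0e6  # rest-frame 21 cm frequency [Hz]

def tbar_hi(z):
    """Mean HI brightness temperature fit quoted above [mK]."""
    return 0.0559 + 0.2324 * z - 0.0241 * z ** 2

def t_sys(z, t_dish):
    """System temperature [K]: receiver + CMB + Galactic sky, with
    observed frequency nu_21/(1+z)."""
    nu_obs = NU21 / (1.0 + z)
    return t_dish + 2.7 + 25.0 * (400.0e6 / nu_obs) ** 2.75

def p_thermal(z, r_com, hubble, omega_sky, t_tot, t_dish, n_dish=197):
    """Single-dish thermal-noise power spectrum of the expression above.
    omega_sky in steradians, t_tot in seconds; r_com and hubble in consistent units."""
    temp_ratio = (t_sys(z, t_dish) * 1.0e3 / tbar_hi(z)) ** 2  # convert K -> mK
    return (omega_sky / (2.0 * NU21 * t_tot) * (1.0 + z) ** 2
            * r_com ** 2 / hubble * temp_ratio / n_dish)

def beam_damping(k, mu, z, r_com, dish_diam=15.0):
    """Gaussian beam factor D_beam with theta_b = 1.22 lambda_21 (1+z)/D_d."""
    lam21 = 3.0e8 / NU21  # rest-frame 21 cm wavelength [m]
    theta_b = 1.22 * lam21 * (1.0 + z) / dish_diam
    return np.exp(-(1.0 - mu ** 2) * k ** 2 * r_com ** 2 * theta_b ** 2 / (16.0 * np.log(2.0)))

def foreground_cut(k, mu, k_fg=0.01):
    """Heaviside foreground-avoidance factor: keep only |k_parallel| > k_fg."""
    return (np.abs(mu * k) > k_fg).astype(float)
```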
We assume the cut is made at a minimum value of k_fg = 0.01h Mpc^-1 . In summary, the HI IM density contrast is modified by beam and foreground effects as follows:Δ_H(z,k,μ) → 𝒟_ beam(z,k,μ)𝒟_fg(k,μ) Δ_H(z,k,μ) .§ MULTI-TRACER FISHER ANALYSIS The Fisher matrix in each redshift bin for the combination of two dark matter tracers is <cit.> F_αβ^ P =∑_μ=-1^+1 ∑_k=k_min^k_max ∂_α P·Cov(P, P)^-1·∂_β P^T ,where ∂_α = ∂ / ∂ϑ_α, with ϑ_α the parameters, and P is the data vector of the power spectra:P = ( P_g g , P_g H , P_HH) ( P_g g , P_g g' , P_g'g') . Note that the sum over μ incorporates the foreground avoidance via the Heaviside factor (<ref>) in P_HA. Also note that P contains no noise terms – these appear in the covariance below. The reason is that noise does not depend on the cosmological parameters. Although the thermal noise in HI IM depends on H, this arises from mapping the Gaussian pixel noise term to Fourier space. The multi-tracer covariance includes the shot and thermal noises, and is given by <cit.>: Cov(P, P) =k_ f^3/2π k^2Δ k 2/Δμ [ P̃_gg^2P̃_ggP̃_gH P̃_gH^2;;P̃_ggP̃_gH 1/2[P̃_ggP̃_HH+ P̃_gH^2 ]P̃_HHP̃_gH;; P̃_gH^2P̃_HHP̃_gH P̃_HH^2 ] ,and similarly for the case g× g'. Here Δ k and Δμ are bin-widths and the fundamental mode k_f, corresponding to the longest wavelength, is determinedby the comoving survey volume of the redshift bin centred at z:V(z) = Ω_sky/3[r(z+Δ z/2)^3 - r(z-Δ z/2)^3]= [2π/k_f(z)]^3. We choose the bin-widths following <cit.>:Δ z = 0.1, Δμ = 0.04, Δ k = k_f . In order to exclude the small length scales that are beyond the validity of linear perturbation theory, we impose a conservative maximum wavenumber of .08h/Mpc at z=0, with a redshift evolution as proposed in <cit.>:k_max(z) = 0.08 (1+z)^2/(2+n_s) h Mpc^-1 . The largest length scale that can be measured in galaxy surveys corresponds to the smallest wavenumber, given by k_min = k_f .For HI IM surveys,k_min = max(k_f, k_fg). The multi-tracer Fisher matrix applies for a perfectly overlapping region in both redshift range and sky area for the two tracers. If the samples differ in redshift and sky area, then we can add the independent non-overlapping Fisher matrix information of the individual surveys. The full Fisher matrix, denoted by g⊗ H, is <cit.> F_αβ^g ⊗ H =F_αβ^ P(overlap)+ F_αβ^g(non-overlap) + F_αβ^H(non-overlap) ,and similarly for the case g⊗ g'.For the cosmological parameters, we choose σ_8,0, Ω_b,0, Ω_c,0, n_s and h, since we are focusing onconstraining the growth rate index γ, which should be minimally affected by the remaining ΛCDM parameters on linear scales. We fix these remaining cosmological parameters to their Planck 2018 best-fit values <cit.>. We therefore consider the following set of cosmological parameters together with the two nuisance bias parameters:ϑ_α = (γ,σ_8,0,n_s,h,Ω_b0, Ω_c0 ; b_A0) A=. The marginalised errors are then obtained asσ(ϑ_α) = [(F^-1)_αα]^1/2 . We compute numerically the Fisher derivatives with respect to n_s, h, Ω_b0 and Ω_c0, using the 5-point stencil approximation with selected step sizes, shown in <ref>. The derivatives are stable for 0.0003 ≤≤ 0.1<cit.>. The derivatives with respect to γ, σ_8,0 and b_A0 are computed analytically, with for example1/P_AA ∂_γ P_AA= 2μ^2 Ω_m^γ lnΩ_m/b_A+μ^2 Ω_m^γ . 
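The Fisher sum above reduces to a small amount of linear algebra per (k, μ) bin. The sketch below is schematic: the data-vector callback, the finite-difference step and the use of a simple central difference (the forecasts themselves use a 5-point stencil with the step sizes quoted above) are our own simplifications.

```python
import numpy as np

def covariance(p_gg, p_gh, p_hh, k, kf, dk, dmu):
    """3x3 multi-tracer covariance of (P_gg, P_gH, P_HH) in one (k, mu) bin;
    the input spectra must already include shot/thermal noise."""
    pref = kf ** 3 / (2.0 * np.pi * k ** 2 * dk) * 2.0 / dmu
    return pref * np.array([
        [p_gg ** 2,   p_gg * p_gh,                      p_gh ** 2],
        [p_gg * p_gh, 0.5 * (p_gg * p_hh + p_gh ** 2),  p_hh * p_gh],
        [p_gh ** 2,   p_hh * p_gh,                      p_hh ** 2]])

def fisher_bin(data_vector, params, cov, step=1e-3):
    """Fisher contribution F_ab = dP_a . Cov^-1 . dP_b of a single (k, mu) bin.
    `data_vector(params)` returns the noise-free (P_gg, P_gH, P_HH);
    `params` is a numpy array of parameter values."""
    n = len(params)
    derivs = np.zeros((n, 3))
    for a in range(n):
        up, dn = params.copy(), params.copy()
        h = step * max(abs(params[a]), 1.0)
        up[a] += h
        dn[a] -= h
        derivs[a] = (np.asarray(data_vector(up)) - np.asarray(data_vector(dn))) / (2.0 * h)
    cinv = np.linalg.inv(cov)
    return derivs @ cinv @ derivs.T

# The full Fisher matrix follows by summing fisher_bin over all k in [k_min, k_max]
# and all mu bins, then adding the single-tracer information of any
# non-overlapping sky area, as in the text.
```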
§.§ Alcock-Paczyński correction The distortions induced by an incorrect choice of background cosmological model are dealt with via the Alcock-Paczyński (AP) correction <cit.> Cosmological measurements of redshift and angular displacements correspond to comoving radial and transverse lengths given by r_∥(z) = Δ z/H(z)and r_⊥(z) = (1+z)D_a(z) Δθ ,where Δ z,Δθ are observedand D_a is the angular diameter distance:D_a(z) = r (z)/(1+z) = 1/(1+z)∫_0^zz'/H(z') .For spherically symmetrical objects, r_∥ = r_⊥, the AP factor is defined asℱ = .Δθ/Δ z|_r_=r_⊥ = (1+z)D_a(z)H(z) ,which is independent of the physical size of the object. Since H depends on h and Ω_m0, using the wrong values for (h,Ω_m0) will distort the values of both H and D_a. Consequently, ℱ willalso be wrong: ℱ_ true/ℱ_ wrong = [D_a(z)H(z)]_ true/[D_a(z)H(z)]_ wrong . Here `true' refers to the fiducial cosmology Planck 2018 <cit.>. In the power spectra, the AP effect enters as the volume factor 𝒱(z) = [D_a(z)^2/H(z)]_ true/[D_a(z)^2/H(z)]_ wrong .Therefore, the power spectra P_A B becomeP_A B(z,k, μ) = 𝒱(z) P_A B(z,k̅, μ̅) , where (k̅, μ̅) are the k-space coordinates in the `wrong' cosmology and are related to the `true' (or fiducial) coordinates (k, μ) as <cit.>:k̅ = α(μ) k ,μ̅ = μ/α(μ) [H(z)]_ wrong/[H(z)]_ true ,α(μ)^2= (1-μ^2) ([D_a(z)]_ true/[D_a(z)]_ wrong)^2 + μ^2 ([H(z)]_ wrong/[H(z)]_ true)^2 .§ RESULTS<ref>–<ref> show the 1σ error contours for the parameter γ and the cosmological parameters, after marginalising over the 2 bias nuisance parameters in (<ref>). There are significant degeneracies, which the multi-tracer partly alleviates, allowing for improved precision on the cosmological parameters. The improvement is small, unlike the case of the growth index, which shows significant improvement. This is not surprising, since the multi-tracer removes cosmic variance from the effective bias, i.e. the clustering bias plus the RSD contribution, as shown in <cit.>. All multi-tracer pairs show a significant reduction in errors on γ compared to the best single tracer.The improvements are shown in the fractional errors listed in <ref>. We note that these multi-tracer errors are obtained using only linear scales.For the BGS and HI galaxy combination in <ref>, the multi-tracer fractional error on γ is less than half of the BGS value. We note that our constraint on γ for the single-tracer BGS is weaker than that in <cit.>. The reason is that <cit.> uses the angular power spectra with a large number of very thin redshift bins (width 0.01) and considers all possible cross-bin correlations. By contrast, our standard Fourier analysis only uses 5 redshift bins of width 0.1, and does not include cross-correlations between different redshift bins. The single-tracer HI galaxy deliversthe weakest constraints,mainly due to its smaller sky area and number density. When BGS is combined with HI IM Band 2 (<ref>) the situation changes. HI IM Band 2 gives much better constraints than BGS, with an error on γ less than half of the BGS error. This arises despite the effects of foreground noise because HI IM Band 2 covers a much larger area of the sky, which results in more Fourier modes that contribute to the Fisher analysis. In addition, foreground noise affects the largest scales where the γ is not strong. When IM surveys are combined with spectroscopic galaxy surveys, the impact of foreground noise on the multi-tracer constraints isfurther mitigated. 
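The AP relations above translate directly into code. In the sketch below the function name and argument conventions are ours, and the angular diameter distance and Hubble rate must be evaluated in the 'true' and 'wrong' cosmologies by the user.

```python
import numpy as np

def ap_distortion(k, mu, da_true, h_true, da_wrong, h_wrong):
    """Map fiducial (k, mu) to the (k_bar, mu_bar) of a 'wrong' background
    cosmology and return the volume rescaling factor, following the relations above."""
    alpha = np.sqrt((1.0 - mu ** 2) * (da_true / da_wrong) ** 2
                    + mu ** 2 * (h_wrong / h_true) ** 2)
    k_bar = alpha * k
    mu_bar = mu / alpha * (h_wrong / h_true)
    volume = (da_true ** 2 / h_true) / (da_wrong ** 2 / h_wrong)
    return k_bar, mu_bar, volume
```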
<ref> shows that the multi-tracer error on γ is reduced by ∼ 40% compared to the best single-tracer error from HI IM Band 2. The best γ precision is delivered at high redshifts by ELG ⊗ IMBand 1 (<ref>).The IMBand 1 error on γ is ∼ 6% while ELG produces about double this error. The multi-tracer reduces the error to ∼ 5%. <ref> displays the fractional errors for the 2 bias nuisance parameters for the 3 survey combinations. The multi-tracer constraints on bias nuisance parameters are much tighter than those obtained from the individual single-tracers (compare <cit.>).All of our constraints are obtained from scales k<k_ max where linear perturbations are accurate.In <ref> we investigate the effect on the marginalised fractional error for γ of changing k_ max,0 from its value given in (<ref>). The plots confirm that constraints aresensitive to k_max,0. We would unnecessarily lose information by reducing ourk_ max,0 value. On the other hand, increasing it leads to higher precision – but at the risk of moving into the regime of nonlinear effects – especially in RSD – which requires much more effort to model. The multi-tracer has the advantage of allowing us to avoid these difficulties while at the same time delivering constraints that would not be possible with single tracers in the linear regime.§ CONCLUSION Using a simplified Fisher analysis we have estimated the multi-tracer constraints on the growth rate of large-scale structure, for pairs of tracer samples that are similar to those expected from the specifications of SKAO and DESI surveys. Themulti-tracer is known to be more effective, the more different are the pairs of surveys – and this motivates our choice of DESI-like and SKAO-like samples. We applied a foreground-avoidance filter to the HI intensity mapping samples and included the effects of the radio telescopebeam, but we have not dealt with the many other systematics. Our aim is not a realistic forecast for specific surveys, but rather a proof of principle analysis to answer the question:what is the potential of the multi-tracer to improve constraints using only linear scales? By confining the signal to linear scales, we avoid the highly complex modelling, especially for RSD, that is required to access nonlinear scales. The cross-power spectra represent an additional complexity in the nonlinear regime <cit.>. Our Fisher analysis suggests that significant improvements in precision on the growth rate could be achieved by multi-tracing next-generation radio-optical pairs of samples. The details are summarised qualitatively in <ref>, <ref> and <ref>, and quantitatively in <ref>.The biggest improvements, ∼40-60%, are for the low-redshift pairs. This indicates that it is worthwhile to perform a more realistic analysis and derive more realistic forecasts. We leave this for further work. Finally, we note that multi-tracer improvements can be delivered without requiring additional observational resources.Acknowledgements We are supported by the South African Radio Astronomy Observatory (SARAO) and the National Research Foundation (Grant No. 75415).JHEP | http://arxiv.org/abs/2310.17959v1 | {
"authors": [
"Simthembile Dlamini",
"Sheean Jolicoeur",
"Roy Maartens"
],
"categories": [
"astro-ph.CO",
"gr-qc"
],
"primary_category": "astro-ph.CO",
"published": "20231027081852",
"title": "Constraining the growth rate on linear scales by combining SKAO and DESI surveys"
} |
A class of fractional differential equations via power non-local and non-singular kernels: existence, uniqueness and numerical approximationsThis is a preprintof a paper whose final form is published in Physica D: Nonlinear Phenomena (ISSN 0167-2789). Submitted 19-Jan-2023; revised 15-May-2023; accepted for publication 11-Oct-2023. Hanaa Zitane Delfim F. M. TorresCorresponding author. Center for Research and Development in Mathematics and Applications (CIDMA), Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal =================================================================================================================================================================================================================================================================================================================================================== We study open boundary conditions for the D^(2)_3 spin chain, which shares connections with the six-vertex model, under staggering, and also to the antiferromagnetic Potts model. By formulating a suitable transfer matrix, we obtain an integrable, open Hamiltonian, hence allowing for us to classify different regions of the underlying conformal field theory from eigenvalues of the Hamiltonian. [Keywords: spin chain, open boundary conditions, CFT, conformal field theory sectors, ground state, local Hamiltonian ] § INTRODUCTION§.§ Overview Spin chains have long been objects of study across the fields of quantum physics, high-energy physics, and statistical physics, for connections to computations of finite-size spectra [1], staggered vertex models [2], integrable boundary conditions [3], quantum R-matrices [4], the Bethe ansatz [5], integrability, either through being able to completely solve spin chain models, or through boundary conditions, [7,8], and conformal invariance [9]. To further explore avenues of interest at the intersection of all of these fields, in the following we study the D^(2)_2, and D^(2)_3 spin chains. Despite having different rank, each of the two spin chains share similarities, not only from the fact that R-matrices can be constructed which satisfy the Yang Baxter equation, but also from the fact that open boundary conditions can be encoded from K-matrices satisfying variants of the Yang Baxter equation at the leftmost and rightmost endpoints of a finite volume. To determine which sections of the underlying conformal field theory (CFT) are selected depending upon the encoding of open boundary conditions, we introduce the lower, and higher, rank spin chains, from which we distinguish different sectors of the CFT depend on open boundary conditions. From an expansion of the Hamiltonian into a local Hamiltonian, we characterize the ground state with open boundary conditions about the point ( h_1 , h_2 ) ≡( 0 , 0 ), and proceed to characterize other sectors of the CFT for h_1 ≡ 0, h_2 ≠ 0, and for h_1 ≠ 0, h_2 ≡ 0. §.§ Spin chain objects We begin by providing an overview of the higher rank spin chain, and then proceed to describe its relations to the lower rank spin chain. To introduce such a model, define the 36 × 36 R matrix, with,R ( u )≡exp( - 2 u - 6 η) R_J ( x ) , as a function of the single parameter u, where R_J ( x ) denotes the Jimbo matrix [4], and x ≡exp( u ) and k ≡exp( 2 η). The R matrix satisfies the Yang Baxter equation,R_12( u -v ) R_13( u ) R_23( v ) = R_23( v ) R_13( u ) R_12( u - v ), for the anisotropy parameter η≡ i γ, and another parameter v. 
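The six-vertex block R^(XXZ)(u) written out above is explicit enough that the quoted Yang–Baxter equation can be checked numerically. The sketch below is illustrative: the helper names are ours, the basis is assumed to be the standard ordering |11⟩, |12⟩, |21⟩, |22⟩, and a real test value of η is used for simplicity (in the text η = iγ is purely imaginary; the relation is an identity in η, so complex values may equally be passed).

```python
import numpy as np

I2 = np.eye(2)

def r_xxz(u, eta):
    """The 4x4 R-matrix R^(XXZ)(u) displayed above, in the basis |11>,|12>,|21>,|22>."""
    a = np.sinh(-u / 2 + eta)
    b = np.sinh(u / 2)
    c = np.sinh(eta)
    return np.array([[a, 0, 0, 0],
                     [0, b, np.exp(-u / 2) * c, 0],
                     [0, np.exp(u / 2) * c, b, 0],
                     [0, 0, 0, a]], dtype=complex)

def swap(i, j, n=3):
    """Permutation operator exchanging tensor factors i and j of (C^2)^n."""
    perm = np.zeros((2 ** n, 2 ** n))
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - s)) & 1 for s in range(n)]
        bits[i], bits[j] = bits[j], bits[i]
        jdx = sum(b << (n - 1 - s) for s, b in enumerate(bits))
        perm[jdx, idx] = 1.0
    return perm

P23 = swap(1, 2)

def r12(u, eta): return np.kron(r_xxz(u, eta), I2)
def r23(u, eta): return np.kron(I2, r_xxz(u, eta))
def r13(u, eta): return P23 @ r12(u, eta) @ P23

def ybe_residual(u, v, eta):
    """Max-norm residual of R12(u-v) R13(u) R23(v) - R23(v) R13(u) R12(u-v)."""
    lhs = r12(u - v, eta) @ r13(u, eta) @ r23(v, eta)
    rhs = r23(v, eta) @ r13(u, eta) @ r12(u - v, eta)
    return np.abs(lhs - rhs).max()

print(ybe_residual(0.7, 0.3, 0.45))  # should vanish to machine precision
```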
Besides the R matrix satisfying the Yang Baxter equation, it also possesses a U ( 1 ) symmetry, which is captured by the condition, [ R ( u ) ,h_j⊗I + I⊗h_j] ≡ 0 ,for j=1 and j=2, with,h_1 ≡ℳ( 1 , 1 ) - ℳ( 6 , 6 ) ,h_2 ≡ℳ( 2 , 2 ) - ℳ( 5 , 5 ) , for the matrices ℳ( 1 , 1 ), ℳ( 6 , 6 ), ℳ( 2,2 ) and ℳ( 5,5), which are respectively given by the 6 × 6 matrices with nonzero entries at ( 1,1), ( 6 , 6 ), ( 2,2), (5,5), and the identity matrix I. The R matrix also satisfies Parity-Time (PT) symmetry, in which, R_21( u ) ≡𝒫_12ℛ_12( u ) 𝒫_12≡ R^t_1 , t_2_12( u ), for the permutation matrix 𝒫, for the transposition t. Additional properties, including braiding unitarity, regularity, crossing symmetry, quasi-periodicity, and Z_2 symmetries are also satisfied [1]. From the quantities introduced since the beginning of the section, the transfer matrix of the model takes the form,T( u ) ≡tr_0 (K_0 T_0 ( u ) ) ≡tr( K_0 ∏_1 ≤ j ≤ LR_0j( u )) , for the twist diagonal matrix,K≡diag( exp( i ϕ_1 ) , exp( i ϕ_2 ) , 1 , 1 , exp( - i ϕ_2 ) , exp( - i ϕ_1 ) ) , given two angles ϕ_1 and ϕ_2, and product of R matrices for 1 ≤ j ≤ L. The angles ϕ_1 and ϕ_2 determine the boundary conditions of the higher rank spin chain, as opposed to the open boundary conditions of the lower rank spin chain that is introduced in the remaining parts of this section.To work towards introducing the higher rank spin chain and open boundary conditions for it, we start with defining the following R matrix, and similar components, for the lower rank spin chain with the following. To construct the R matrix, consider the 6 × 6 matrix, of the form, R^(XXZ)( u ) ≡[ sinh( - u/2 + η)000;0 sinh( u/2) exp( - u/2) sinh( η)0;0 exp( u/2) sinh( η) sinh( u/2)0;000 sinh( - u/2 + η);] ,from the R matrix for the A^(1)_1 (XXZ) spin chain, which is related to the R matrix of the lower rank spin chain from the fact that, R( u ) ∝ B_12 B_34R^'_12,34( u )B_12 B_34≡ B_12 B_34(R_14( u ) R_13( u ) R_24( u ) R_23( u ))B_12 B_34 ,and matrices B, which are given by, [ 1 0 0 0; 0cosh( η/2) /√(cosh( η) )- sinh( η/2)/√(cosh( η)) 0; 0- sinh( η/2)/√(cosh( η)) - cosh( η/2) /√(cosh( η)) 0; 0 0 0 1 ] , satisfying,B^2 = I , and R-matrices solving the Yang Baxter equation,R_12( u - v ) R_13( u ) R_23( v ) = R_23( v ) R_13( u ) R_12( u - v ) .In contrast to the higher rank case, the R matrix above for the lower rank spin chain satisfies,R^'_12,34( u ) = R_43( - θ) R_13( u ) R_14( u + θ) R_23( u - θ) R_24( u ) R_34( θ) ,which in turn implies, R( u ) ∝B_12 B_34(R_14( u ) R_13( u ) R_24( u ) R_23( u ))B_12 B_34≡B_12 B_34(R_43( - θ) R_13( u ) R_14( u + θ) ×⋯R_23( u - θ) R_24( u ) R_34( θ) ) B_12 B_34 .To encode open boundary conditions of the lower rank spin chain, we must further describe properties of the K matrix, which was introduced earlier in the section with the definition of the transfer matrix T( u ) for the higher rank spin chain. 
In particular, in addition to the R matrices which satisfy the Yang Baxter equation for the lower rank spin chain, open boundary conditions of the chain are enforced from the fact that two other matrices, given by K_-( u ) and K_+( u ) below, satisfy, [8], R_12( u - v ) K_1,-( u ) R_21( u + v ) K_2,-( v ) =K_2,-( u ) R_12( u + v ) K_1,-( u ) R_21( u - v ) , corresponding to the first, and second, boundary conditions which are reflected through the addition of the terms K_1,-( u ) and K_2,-( v ), as well as, [2],R_1,2(- u + v )K^t_1_1,+( u ) R_1,2( - u - v - 2 i γ)K^t_2_2,+( v ) = K^t_2_2,+( v ) R_1,2( - u - v - 2 i γ) K^t_1_1,+( u ) R_1,2( - u + v ) , corresponding to the Yang Baxter equation for parameters t_1 and t_2 from the PT symmetric property of the R matrix satisfied by the higher rank spin chain, for the anisotropy parameter γ, where each matrix is respectively given by, [8], K_-( λ) ≡[- exp( - λ) ( exp( 2 λ) + k )000;0 - 1/2( 1 + exp( 2 λ) ) exp( λ) ( 1 + k ) 1/2( exp( 2 λ) - 1 ) ( 1 - k ) exp( λ)0;0 1/2( exp( 2 λ) - 1 ) ( 1 - k ) exp( λ) - 1/2( 1 + exp( 2 λ) ) exp( λ) ( 1 + k )0;000⋯ ] , where the last entry along the diagonal is given by, - exp( 3 λ) ( exp( 2 λ) + k ),which is equivalent to the matrix with symbols,[ Y_1 ( λ)000;0 Y_2 ( λ) Y_5 ( λ)0;0 Y_6 ( λ) Y_3 ( λ)0;000 Y_4 ( λ) ] , from the R matrix basis,{|1⟩ , |2⟩ , |3⟩ , |4⟩}⊗{|1⟩ , |2⟩ , |3⟩ , |4⟩} . With such an encoding of the boundary conditions of the spin chain with K_-( u ) and K_+( u ), the transfer matrix takes on a similar form, in which, T_D^(2)_2( u ) ≡tr_a (K_+,a( u ) R_a1( u ) ⋯× R_aL( u ) K_-,a( u ) R_1a( u ) ⋯× R_La( u ))≡tr_a (K_+,a( u )∏_1 ≤ j ≤ L R_aj( u )K_-,a( u ) ∏_1 ≤ j^'≤ L R_j^'a( u )) . With T_D^(2)_2( u ), which satisfies the condition [ T_D^(2)_2( u ) , T_D^(2)_2( v ) ] = 0, we also stipulate, in order to properly construct open boundary conditions for the lower rank spin chain, that, K_+,a( λ)=K^-t( - ρ - λ) M,where t denotes the transposition of the matrix, and ρ≡ - log( k ), and M ≡diag( k , 1 , 1 , 1/k). Explicitly, the entries of K_+, from K_- and the parameters ρ and M, is given by, [ Y_1 ( λ) ( - ρ - λ) 0 0 0; 0Y_2 ( λ)( - ρ - λ)Y_6 ( λ)( - ρ - λ) 0; 0Y_5 ( λ)( - ρ - λ) Y_3 ( λ) ( - ρ - λ) 0; 0 0 0Y_4 ( λ)( - ρ - λ) ][ k 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1/k ] §.§ Paper overviewEquipped with the overview in 1.1 and definitions of lower, and higher, rank spin chains in 1.2, in the remaining sections of the paper we apply the open boundary framework to the higher rank spin chain, in an effort to determine how the boundary conditions determine the CFT sector. From information on how open boundary conditions are encoded in the Yang Baxter equation, and transfer matrix, for the lower rank spin chain, we incorporate open boundary conditions in the higher rank spin chain. In the higher rank case, we obtain an expansion for the local Hamiltonian, and provide a formulation of the Bethe equations whose roots are analyzed to study the ground state. 
§ ENCODING OPEN BOUNDARY CONDITIONS IN THE HIGHER RANK SPIN CHAIN §.§ Obtaining the higher rank spin chain Hamiltonian from an expansion of the derivative of the transfer matrix about u ≡ 0In comparison to twisted boundary conditions encoded with the angles ϕ_1 and ϕ_2, open boundary conditions for the higher rank spin chain can be encoded by introducting a K matrix for the 36 × 36 R matrix, from the basis,{|1⟩ , |2⟩ , |3⟩ , |4⟩, |5⟩ , |6⟩}⊗{|1⟩ , |2⟩ , |3⟩ , |4⟩ , |5⟩ , |6⟩} ,in which the trace would then take the form, T^open( u ) ≡tr_0 (K_+,0( u ) T_+,0( u ) K_-,0( u ) T_-,0( u )) ≡tr_0 ( K_+,0( u )∏_1 ≤ j ≤ LR_+,0j( u ) K_-,0( u ) 1 ≤ j^'≤ L∏R_-,j^'0( u ) )≡tr_0 ( K^open_+,0( u )∏_1 ≤ j ≤ LR_+,a0( u ) K^open_-,0( u ) 1 ≤ j^'≤ L∏R_-,j^'0( u ) ) , for the higher rank spin chain transfer matrix,T^open_D^(2)_3( u ) ≡T^open( u ), with open boundary conditions enforced through the K matrix, K^open_-( u ) ≡K_-( u )≡[ k_0 ( u ) 0 0 0 0 0; 0 k_0 ( u ) 0 0 0 0; 0 0 k_1 ( u ) k_2 ( u ) 0 0; 0 0 k_3 ( u ) k_4 ( u ) 0 0; 0 0 0 0 k_5 ( u ) 0; 0 0 0 0 0 k_5 ( u ) ] ,from the fact that the K matrix is a special n ≡ 2 case of the matrix, [6], [ k_0 ( u ) I_n × n; k_1 ( u ) k_2 ( u ); k_3 ( u ) k_4 ( u ); k_5 ( u ) I_n × n ] ,which amounts to the matrix,[ k_0 ( u ) I_2 × 2; k_1 ( u ) k_2 ( u ); k_3 ( u ) k_4 ( u ); k_5 ( u ) I_2 × 2 ] ,for arbitrary boundary parameter ξ_-, and functions, k_0 ( u ) ≡( exp( 2 u ) + exp( 2 n η) ( ξ^2_-exp( u + 2 n η) - exp( - u ) ), k_1 ( u ) ≡1/2( exp( 2 u ) +1 ) (2 ξ_-exp( 2 n η) ( exp( 2 u ) - 1 ) - exp( u ) ( 1 - ξ^2_-exp( 2 n η) ) ( 1 + exp( 2 n η)) ) ,k_2 ( u )≡k_3 ( u ) ≡1/2exp( u ) ( exp( 2 u ) - 1 ) ( 1 + ξ^2_-exp( 2 n η) ) ( 1 - exp( 2 n η) ), k_4 ( u )≡1/2( exp( 2 u ) +1 ) (- 2 ξ_-exp( 2 n η) ( exp( 2 u ) - 1 ) - exp( u ) ( 1 - ξ^2_-exp( 2 n η) ( 1 + exp( 2 n η) ),k_5 ( u ) ≡( exp( 2 u ) + exp( 2 n η) ) ( ξ^2_-exp( u + 2 n η) - exp( 3 u ) ) , for a real parameter η. The trace of the product of matrices enforcing open boundary conditions, and the R matrices, is obtained by setting a ≡ 0 from, tr_a ( K^open_+,a( u )∏_1 ≤ j ≤ LR_+,aj( u ) K^open_-,a( u ) 1 ≤ j^'≤ L∏R_-,j^'a( u ) ) . From the transfer matrix with open boundary conditions, one can introduce an open integrable Hamiltonian, which can be obtained from rearranging the expression above. To obtain the desired expression for the open, integrable Hamiltonian, we analyze the derivative of the transfer matrix above, upon set u ≡ 0, ( T^open( 0 ) )^'≡( tr_0 ( K_+,0( 0 )∏_1 ≤ j ≤ LR_+,j0( 0 ) K_-,0( 0 ) 1 ≤ j^'≤ L∏R_-,j^'0( 0 ) ))^' , from solutions to the Bethe equations, which can be formulated by observing that the transfer matrix, with open boundary conditions for the higher rank spin chain, satisfies, along the lines of arguments presented in [1],T( u ) |Λ⟩ = Λ( u )|Λ⟩ ,h_j |Λ⟩ = h_j |Λ⟩ ,for 1 ≤ j ≤ 2, where |Λ⟩ denotes the normalized eigenstate of T( u ). 
From the two relations provided above, in the presence of twisted boundary conditions parameterized by the angles ϕ_1 and ϕ_2, the eigenvalues take the form, [1],Λ( u ) ≡[ 4 sinh( u - 2 i γ) sinh( u - 4 i γ)]^L exp( i ϕ_1 ) A ( u ) + ⋯ [ ( 4 sinh( u - 4 i γ) sinh( u ) ]^L ( exp( i ϕ_2 ) B_1 ( u ) + B_2 ( u ) + B_3 ( u ) + exp( - i ϕ_2) B_4 ( u )) + ⋯ [4 sinh( u - 2 i γ) sinh( u ) ]^Lexp( - i ϕ_1 ) C ( u ) , for quantities exhibiting the dependencies, A ( u , u^[1]_j , γ) ≡ A ( u ) , B_1 ( u , u^[1]_j , γ)≡ B_1 ( u ), B_2 ( u , u^[2]_j , γ) ≡ B_2 ( u ) ,B_3 (B_2 ( u ) , u , u^[2]_j , γ) ≡ B_3 ( u ) , B_4 ( u , u^[1]_j , u^[2]_j , γ)≡B_4 ( u),C( A ( u ) , u , u^[1]_j , γ) ≡ C ( u ),for the parameter γ∈( 0 , π/4), and Bethe roots of the first, and second types,u^[1]_j, and u^[2]_j, respectively. In the presence of twisted boundary conditions, the Bethe equations are, [ sinh( u^[1]_j - i γ) /sinh( u^[1]_j + i γ) ]^L= m_1k ≠ j∏m_2k=1∏[ [sinh( u^[1]_j - u^[1]_k - 2 i γ) /sinh( u^[1]_j - u^[1]_k + 2 i γ) ] [ sinh( u^[1]_j - u^[2]_k + i γ) /sinh(u^[1]_j - u^[2]_k - i γ) ]] .For the higher rank spin chain of the same length L with open boundary conditions, the normalized eigenstates, |Λ^open⟩ would satisfy, T^open( u ) |Λ^open⟩ = Λ^open( u )|Λ^open⟩ ,h_j |Λ^open⟩ = h_j |Λ^open⟩ . Irrespective of an explicit form of the eigenstates from the first equality above, asymptotically the Hamiltonian takes the form, d/d u( log( T^open( u ) ) )|_u ≡ 0 ,from the logarithmic derivative of the higher rank spin chain transfer matrix with open boundary conditions, [6],ℋ( k , κ , K_- , K_+) ≡ℋ∼1 ≤ k ≤ N-1∑ h_k,k+1 + 1/2κ( K^-_1( 0 ) )^' + 1/tr( K_+( 0 ) )tr_0 K_0,+( 0 ) h_N0 , for the two-site Hamiltonian appearing in the first term,h_k,k+1 = 1/ξ( 0 )𝒫_k,k+1( R_k,k+1( 0 ))^' ,and another Hamiltonian term appearing in the third term,h_N0≡1/ξ( 0 )𝒫_N,0( R_N,0( 0 ))^' , for some parameter κ and a permutation 𝒫, given by,𝒫≡1 ≤ a, β≤ d∑ e_αβ⊗ e_βα ,over the basis for the tensor product of d-dimensional vector spaces, 𝒱⊗𝒱, and the function,ξ( u ) ≡ 4 sinh( u + 2 η) sinh( u + 4 η). §.§ The open, integrable HamiltonianEquipped with the transfer matrix under open boundary conditions, and the corresponding Hamiltonian, we identify eigenvectors of the Hamiltonian obtained in the previous section. To do this, observe, E ∝( Λ^open( 0 ) )^'' , from which we write, [1], E≡ - 1 ≤ k ≤ m_1∑2 sinh^2 ( 2 i γ) /cosh( 2 u^[1]_k ) - cosh( 2 i γ),corresponding to the energy of the eigenvalues, termed the eigenenergies. To establish the connection between the transfer matrix, integrable Hamiltonian, and boundary CFT, the summation for E above can be expressed as, [8],E = f_0 L + f_s - π v_F( c/24 - h ) /L + 𝒪( L^-2),for the length L of the chain, which coincides with the system size, the central charge, conformal weight of the field, the bulk energy density f_0, surface energy f_s, and Fermi velocity,v_F ≡2 πsin( π - 2 γ) /π - 2 γ . Furthermore, observe from the equation, h_j |Λ^open⟩ = h_j |Λ^open⟩ ,that the eigenvalues are given by, h^open_1 ≡ h_1 ≡L - m_1, h^open_2 ≡ h_2≡m_1 - m_2, for parameters m_1 ≥m_2 ≥ 0. 
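The eigenenergy sum and the Fermi velocity above are simple closed-form expressions in the first-type Bethe roots; the snippet below evaluates both for a placeholder set of roots (these roots are illustrative and are not solutions of the Bethe equations).

```python
import numpy as np

def eigenenergy(roots_1: np.ndarray, gamma: float) -> float:
    """E = - sum_k 2 sinh^2(2 i gamma) / (cosh(2 u_k) - cosh(2 i gamma))."""
    num = 2 * np.sinh(2j * gamma) ** 2
    den = np.cosh(2 * roots_1) - np.cosh(2j * gamma)
    return float(np.real(-np.sum(num / den)))

def fermi_velocity(gamma: float) -> float:
    """v_F = 2 pi sin(pi - 2 gamma) / (pi - 2 gamma)."""
    return 2 * np.pi * np.sin(np.pi - 2 * gamma) / (np.pi - 2 * gamma)

gamma = 0.4                                   # anisotropy in (0, pi/4)
sample_roots = np.array([0.1, 0.35, 0.8])     # placeholder first-type Bethe roots
print(eigenenergy(sample_roots, gamma), fermi_velocity(gamma))
```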
The expression for the summation over k given for E above is obtained from the leading term of an expansion of the transfer matrix, T^open( 0 )≈[4 sinh( 2 i γ) sinh( 4 i γ) ]^Lexp(iP),where the term multiplying the L th power is given by, 1 ≤ i ≤ L∏δ^b_a+i_a_i , the translation operator under open boundary conditions, under the equivalence, for some j>0,( a_L+i+j) mod L ≡a_i+j ,( b_L+i+j) mod L ≡ b_i+j , a_L ≡ b_1 .In turn, substituting the leading order term for the natural logarithm of T^open( 0 ) into the expansion for the Hamiltonian, H^open≈ - sinh( 2 i γ) [ d/d u( log( T^open( u )))|_u ≡ 0] + L sinh( 2 i γ) [ coth( 2 i γ) + coth( 4 i γ)] I^⊗ L , yields an expression for a local Hamiltonian,- sinh( 2 i γ) [ d/d u( log[tr_0 ( K^open_+,0( u )∏_1 ≤ j ≤ LR_+,a0( u ) K^open_-,0( u ) 1 ≤ j^'≤ L∏R_-,j^'0( u))] L sinh( 2 i γ) [ coth( 2 i γ) + ⋯ coth( 4 i γ)] I^⊗ L which is equivalent to, after collecting like terms, - sinh( 2 i γ) [d/d u( log[tr_0 ( K^open_+,0( u )∏_1 ≤ j ≤ LR_+,a0( u ) K^open_-,0( u ) 1 ≤ j^'≤ L∏R_-,j^'0( u )) ]+ L[ coth( 2 i γ) + ⋯ coth( 4 i γ)] I^⊗ L] ,in terms of the site translation operator. Computing the derivative of the natural logarithm of the transfer matrix for the lower rank spin chain under open boundary conditions, and evaluating at u ≡ 0, yields approximately to first order,( tr_0 ( K^open_+,0( u )∏_1 ≤ j ≤ LR_+,a0( u ) K^open_-,0( u ) 1 ≤ j^'≤ L∏R_-,j^'0( u ) ))^-1([4 sinh( 2 i γ) sinh( 4 i γ) ]^Lexp(iP) ). implies the approximation,- sinh( 2 i γ) [( tr_0 ( K^open_+,0( u )∏_1 ≤ j ≤ LR_+,a0( u ) K^open_-,0( u ) 1 ≤ j^'≤ L∏R_-,j^'0( u ) ))^-1([4 sinh( 2 i γ) sinh( 4 i γ) ]^Lexp(iP) ) + ⋯L [ coth( 2 i γ) + coth( 4 i γ)] I^⊗ L] . for the open boundary Hamiltonian. §.§ Statement of the Bethe equations for anisotropy parameters approaching 0, and the root density For anisotropy parameters that are very close to 0, the Bethe equations, [ sinh( u^[1]_j - i γ) /sinh( u^[1]_j + i γ) ]^L= m_1k ≠ j∏m_2k=1∏[[ sinh( u^[1]_j - u^[1]_k - 2 i γ) /sinh( u^[1]_j - u^[1]_k + 2 i γ) ] [ sinh( u^[1]_j - u^[2]_k + i γ) /sinh(u^[1]_j - u^[2]_k - i γ) ] ],can be approximated with the relations,[ u^[1]_j - i/u^[1]_j + i]^L =m_1k ≠ j∏m_2k=1∏[ [ u^[1]_j - u^[k]_k - 2 i /u^[1]_j - u^[2]_k + 2 i ] [u^[1]_j - u^[2]_k + i /u^[1]_j - u^[2]_k - i ]] . From the fact that the spin-chain has rank two, there exists a mapping between pairs {λ_j , - λ_j}, and two possible solutions to the Bethe equations,λ_j ⟺ u^[1]_j,- λ_j ⟺- u^[1]_j,λ_k ⟺ u^[1]_k, - λ_k ⟺- u^[1]_k ,take the form,[ λ_j - i/λ_j + i]^L ≈m_1k ≠ j∏m_2k=1∏[ [ λ_j - λ_k - 2 i /λ_j - λ_k + 2 i ] [λ_j - λ_k + i /λ_j - λ_k - i ]], under the assumption that, sin( u^[1]_j - i γ)≈ u^[1]_j - i ,sin( u^[1]_j + i γ)≈ u^[1]_j + i ,for γ≈ 0. 
Under the identification, [1], u^[1]_j ⟶ x_j + δ^[1]_j + i π/2 - i ( γ - ϵ^[1]_j ), u^[2]_j ⟶x_j + δ^[2]_j + i( π/2 +ϵ^[2]_j) ,of the first and second roots of the Bethe equation, for sufficiently small parameters,δ^[1]_j , δ^[2]_j , ϵ^[1]_j , ϵ^[2]_j∈R ,whose complex conjugates satisfy,u̅^̅[̅1̅]̅_̅j̅⟶ x_j + δ^[1]_j - i π/2 + i ( γ - ϵ^[1]_j ),u̅^̅[̅2̅]̅_̅j̅⟶x_j + δ^[2]_j - i( π/2 +ϵ^[2]_j) , one can substitute these expressions for the first and second root types appearing in the Bethe equation, with the following rearrangements.Given the two possible root types for solutions to the Bethe equation, for even L,log[ | sinh(u^[1]_j - i ) /sinh(u^[1]_j + i) |^L]≈log[ |u^[1]_j - i/ u^[1]_j + i |^L] =log[ | x_j + δ^[1]_j + i π/2 - i ( γ - ϵ^[1]_j )/ x_j + δ^[2]_j + i( π/2 +ϵ^[2]_j) |^L]= L [ log[ |x_j + δ^[1]_j + i π/2 - i ( γ - ϵ^[1]_j )|] - ⋯ log[ | x_j + δ^[2]_j + i( π/2 +ϵ^[2]_j) |] ]=L [ log[x_j + δ^[1]_j + i π/2 - i ( γ - ϵ^[1]_j ) ] - ⋯ log[x_j + δ^[2]_j + i( π/2 +ϵ^[2]_j)] ], corresponding to terms on LHS of the Bethe equations, and, log[ m_1k ≠ j∏m_2k=1∏[ [ sinh( u^[1]_j - u^[k]_k - 2 i ) /sinh( u^[1]_j - u^[2]_k + 2 i )] [sinh( u^[1]_j - u^[2]_k + i ) /sinh( u^[1]_j - u^[2]_k - i ) ]] ]= log[ m_1k ≠ j∏m_2k=1∏[sinh( u^[1]_j - u^[k]_k - 2 i ) /sinh( u^[1]_j - u^[2]_k + 2 i )] ] + ⋯ log[ m_1k ≠ j∏m_2k=1∏[sinh( u^[1]_j - u^[2]_k + i ) /sinh( u^[1]_j - u^[2]_k - i ) ] ] ≈log[m_1k ≠ j∏m_2k=1∏[u^[1]_j - u^[k]_k - 2 i /u^[1]_j - u^[2]_k + 2 i ] ] +log[m_1k ≠ j∏m_2k=1∏[u^[1]_j - u^[2]_k + i /u^[1]_j - u^[2]_k - i ] ],corresponding to terms on the RHS of the Bethe equations, which can be expressed as,log[m_1k ≠ j∏m_2k=1∏[x_j - x_k + δ^[1]_j - δ^[2]_k + i ( 1 + π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) / x_j - x_k + δ^[1]_j - δ^[2]_k + i ( -1 + π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) ] ] +⋯ log[m_1k ≠ j∏m_2k=1∏[ x_j - x_k + δ^[1]_j - δ^[2]_k + i ( - 2+ π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) / x_j - x_k + δ^[1]_j - δ^[2]_k + i ( 2+ π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) ] ].Hence,L [ log[x_j + δ^[1]_j + i π/2 - i ( γ - ϵ^[1]_j ) ] - log[x_j + δ^[2]_j + i( π/2 +ϵ^[2]_j)] ]≈log[m_1k ≠ j∏×⋯ m_2k=1∏[x_j - x_k + δ^[1]_j - δ^[2]_k + i ( 1 + π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) / x_j - x_k + δ^[1]_j - δ^[2]_k + i ( -1 + π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) ] ] +⋯ log[m_1k ≠ j∏m_2k=1∏[ x_j - x_k + δ^[1]_j - δ^[2]_k + i ( - 2+ π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) / x_j - x_k + δ^[1]_j - δ^[2]_k + i ( 2+ π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) ] ] .In terms of λ_j and λ_k, the approximate relation for the Bethe equations for anisotropy parameters that are approximately 0 reads,L log[ λ_j - i/λ_j + i] ≈log[ m_1k ≠ j∏m_2k=1∏[ [ λ_j - λ_k - 2 i /λ_j - λ_k + 2 i ] [λ_j - λ_k + i /λ_j - λ_k - i ]] ], under the identification, λ_j ⟶x̂_̂ĵ + δ̂^̂[̂1̂]̂_̂ĵ + i π/2 - i ( γ - ϵ̂^̂[̂1̂]̂_̂ĵ),λ_k ⟶x̂_̂ĵ + δ̂^̂[̂2̂]̂_̂ĵ + i( π/2 +ϵ̂^̂[̂2̂]̂_̂ĵ) ,for sufficiently small parameters, x̂_̂ĵ , δ̂^̂[̂1̂]̂_̂ĵ ,ϵ̂^̂[̂1̂]̂_̂ĵ ,x̂_̂ĵ , δ̂^̂[̂2̂]̂_̂ĵ , ϵ̂^̂[̂2̂]̂_̂ĵ∈R . 
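For completeness, the γ → 0 (rational) log-form of the Bethe equations written above can be checked numerically. The residual function below treats both products as running over the same set of roots, as in the λ-form quoted above; the sample roots are placeholders rather than actual solutions.

```python
import numpy as np

def bethe_residual(lam: np.ndarray, L: int) -> np.ndarray:
    """Residual of the rational (gamma -> 0) log-form Bethe equations quoted above,
    with both products taken over the same set of roots {lambda_k}."""
    res = np.zeros(len(lam), dtype=complex)
    for p, lp in enumerate(lam):
        lhs = L * np.log((lp - 1j) / (lp + 1j))
        rhs = 0j
        for q, lq in enumerate(lam):
            if q == p:
                continue
            rhs += np.log((lp - lq - 2j) / (lp - lq + 2j))
            rhs += np.log((lp - lq + 1j) / (lp - lq - 1j))
        res[p] = lhs - rhs
    return res

# placeholder roots (a symmetric pair), not actual Bethe solutions
print(bethe_residual(np.array([0.4 + 0.5j, -0.4 + 0.5j]), L=8))
```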
Under invariance of solutions to the Bethe equations, in which solutions come in pairs {λ_j , - λ_j } and {λ_k , - λ_k }, the Bethe equations also take the form,L [ log[- ( x_j + δ^[1]_j + i π/2 - i ( γ - ϵ^[1]_j ))] - log[ - (x_j + δ^[2]_j + i( π/2 +ϵ^[2]_j)) ] ]≈log[m_1k ≠ j∏×⋯ m_2k=1∏[- ( x_j - x_k + δ^[1]_j - δ^[2]_k + i ( 1 + π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) )/ - ( x_j - x_k + δ^[1]_j - δ^[2]_k + i ( -1 + π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ))] ] +⋯ log[m_1k ≠ j∏m_2k=1∏[- (x_j - x_k + δ^[1]_j - δ^[2]_k + i ( - 2+ π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) ) / - ( x_j - x_k + δ^[1]_j - δ^[2]_k + i ( 2+ π/2)- i ( γ - ϵ^[1]_j- π/2 - ϵ^[2]_k ) ) ] ] ,from the fact that the identification from [1] also takes the form, -u^[1]_j ⟶- (x_j + δ^[1]_j + i π/2 - i ( γ - ϵ^[1]_j )) , - u^[2]_j ⟶- ( x_j + δ^[2]_j + i( π/2 +ϵ^[2]_j)) , for pairs of solutions, { u^[1]_j ,- u^[1]_j } and { u^[1]_k ,- u^[1]_k }. From the set of relations above for the Bethe equations after taking natural logarithms, one obtains the density for roots of the Bethe equations which can be used to study the ground state, [1], ρ^x ( x ) ≡1/2 ( π - 4 γ) ( cosh( π x/π - 4 γ) )^-1 , from the expression for the centers of the counting function, [1], from the root density approach,z^x ( x ) ≡1/2 π( ψ( x , 2 γ) + 1/L1 ≤ k ≤L/2∑χ( x - x_k , 4 γ)) ,for roots of the Bethe equation. For the counting function above, the two functions are given by, χ( x , y )≡2 arctan( tanh ( x ) cot( y )) ,ψ( x , y ) ≡2 arctan( tanh( x ) tan( y )).Altogether, the density approximation of the roots to the Bethe equation for anisotropy parameters which almost vanishes falls into the following characterization:∙Ground state: h_1 ≡ h_2 ≡ 0 , ∙Type I excitation to the ground state: h_1 > 0, h_2 ≡ 0,∙ Type II excitation to the ground state: h_1 ≡ 0, h_2 > 0,∙ Type III excitation to the ground state: h_1 , h_2 > 0.Under each set of possible choices for h_1 and h_2 provided above, one can characterize solutions to the Bethe equations from the density provided earlier with , similar to the arrangement of roots provided in Figure 1, Figure 2, Figure 3, Figure 4, and Figure 5 of [1]. § REFERENCES[1] Frahm, H., Gehrmann, S., Nepomechie, R.I., Retore, A.L. The D^(2)_3 spin chain and its finite-size spectrum. arXiv: 2307.11511 (2023).[2] Frahm, H., Gehrmann, S. Finite size spectrum of the staggered six-vertex model with U_q ( sl( 2 ) )-invariant boundary conditions. Journal of High Energy Physics 70 (2022). [3] Frahm, H., Gehrmann, S. Integrable boundary conditions for staggered vertex models. Journal of Physics A: Mathematical and Theoretical 56(2): 025001 (2023). [4] Jimbo, M. Quantum R matrix for the generalized Toda system. Communication Mathematical Physics 102: 537-547 (1986).[5] Nepomechie, R.I., Retore, A.L. Factorization identities and algebraic Bethe ansatz for D^(2)_2 models. Journal of High Energy Physics 89 (2021).[6] Nepomechie, R.I., Pimenta, R.A., Retore, A.L. The integrable quantum group invariant A_2n-1^(2) and D_n+1^(2) open spin chains. Nuclear Physics B 924: 86-127 (2017).[7] Nepomechie, R.I., Pimenta, R.A., Retore, A.L. Towards the solution of an integrable D^(2)_2 spin chain. Journal of Physics A: Mathematical and Theoretical 52(43): 434004 (2019).[8] Robertson, N.F., Pawelkiewicz, M., Jacobsen, J.L., Saleur, H. Integrable boundary conditions in the antiferommagnetic Potts model. Journal of High Energy Physics 144 (2020).[9] Robertson, N.F., Jacobsen, J.L., Saleur, H. 
Conformally invariant boundary conditions in the antiferromagnetic Potts model and the SL(2,R)/U(1) sigma model. | http://arxiv.org/abs/2310.18499v1 | {
"authors": [
"Pete Rigas"
],
"categories": [
"cond-mat.stat-mech",
"hep-th",
"quant-ph"
],
"primary_category": "cond-mat.stat-mech",
"published": "20231027212943",
"title": "Open boundary conditions of the $D^{(2)}_3$ spin chain and sectors of conformal field theories"
} |
Generative AI for Software Metadata: Overview of the Information Retrieval in Software Engineering Track at FIRE 2023 [ January 14, 2024 ===================================================================================================================== Active Queue Management (AQM) is a mechanism employed to alleviate transient congestion in network device buffers, such as routers and switches. Traditional AQM algorithms use fixed thresholds, like target delay or queue occupancy, to compute random packet drop probabilities. A very small target delay can increase packet losses and reduce link utilization, while a large target delay may increase queueing delays while lowering drop probability. Due to dynamic network traffic characteristics, where traffic fluctuations can lead to significant queue variations, maintaining a fixed threshold AQM may not suit all applications. Consequently, we explore the question: What is the ideal threshold (target delay) for AQMs? In this work, we introduce DESiRED (Dynamic, Enhanced, and Smart iRED), a P4-based AQM that leverages precise network feedback from In-band Network Telemetry (INT) to feed a Deep Reinforcement Learning (DRL) model. This model dynamically adjusts the target delay based on rewards that maximize application Quality of Service (QoS). We evaluate DESiRED in a realistic P4-based test environment running an MPEG-DASH service. Our findings demonstrate up to a 90x reduction in video stall and a 42x increase in high-resolution video playback quality when the target delay is adjusted dynamically by DESiRED. § INTRODUCTIONIn the modern domain of computer networks, the necessity to meet progressively rigorous service requirements, including ultra-reliable low-latency communications and high bandwidth, has resulted in an unparalleled upsurge in network traffic, amplifying the intricacies associated with traffic management. Subsequently, approaches aimed at assisting congestion control mechanisms, such as Active Queue Management (AQM), are consistently embraced.In scenarios where incoming packet rates exceed a network device's processing capacity, a transient queuing of packets occurs in the appropriate output queue, often causing transmission delays. To mitigate this bottleneck, an effective strategy involves notification congestion status to the packet sender, allowing the congestion control algorithm to reduce transmission rates. The primary methods for conveying congestion conditions to senders include packet marking using Explicit Congestion Notification (ECN) bits and selective packet dropping. These approaches are the predominant means of communicating congestion information in network environments.Traditionally, AQM mechanisms have been primarily focused on draining packets directly from queues, with the overarching objective of mitigating transient congestion occurrences and reducing the queuing delay. Prominent examples of these traditional AQM algorithms include Random Early Detection (RED) <cit.>, Blue <cit.>, CoDel <cit.>, CAKE <cit.>, and PIE <cit.>. More recently, owing to the inherent flexibility of the programmable data plane (PDP), the prevailing state-of-the-art AQM solutions designed to operate within PDP hardware environments and made publicly accessible comprise iRED <cit.>, P4-CoDel <cit.>, and the (dual) PI2 <cit.>. These AQM implementations exemplify the synergy between novel programmable data plane capabilities and the evolving demands of congestion control within modern network infrastructures. 
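Several of these schemes, starting with RED, compute a random drop probability from an averaged queue measure and a pair of thresholds, the same ingredient that iRED reuses when a packet falls between its minimum and maximum thresholds. A minimal RED-style sketch of that computation is shown below; the threshold and maximum-probability values are illustrative only.

```python
def red_drop_probability(avg: float, min_th: float, max_th: float,
                         max_p: float = 0.1) -> float:
    """Classic RED: no drops below min_th, linear ramp up to max_p at max_th,
    and certain drop beyond max_th. `avg` is the averaged queue delay/depth."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

print(red_drop_probability(avg=15.0, min_th=5.0, max_th=20.0))   # ~0.067
```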
An integral aspect of AQM algorithms is the selection of an appropriate threshold value, usually expressed either as a queue delay (the target delay) or as a queue depth. An excessively small threshold increases packet losses, resulting in a higher drop probability while reducing overall link utilization. Conversely, a high threshold leads to longer queuing delays but a lower likelihood of packet drops, characterized by a reduced drop probability. Additionally, the dynamic nature of network traffic argues against static threshold values for specific applications. In this context, we refer to this issue as the fixed target delay problem, as illustrated in Fig. <ref>, which captures the intricate dynamics of threshold determination in AQM algorithms. At the core of this matter lies a fundamental trade-off, giving rise to a pivotal question: What is the ideal target delay for AQM? Estimating this value is a challenging task. However, recent advances in artificial intelligence applied to computer networks <cit.> offer a promising avenue, leveraging Deep Reinforcement Learning (DRL) as a powerful tool to support this decision. Although DRL models are known for their appetite for data, providing real-time data at the required granularity has long been an obstacle in computer networks. Recent advances in the PDP domain, in tandem with the integration of In-band Network Telemetry (INT) <cit.>, have brought about a paradigm shift: they deliver granular, per-packet visibility, largely removing the data-availability barrier for DRL applications in computer networks. The hypothesis of this study is that INT measurements can serve as valuable input features for a DRL model. This DRL model is intended to dynamically adjust the target delay, departing from the fixed target delay of our prior work on iRED <cit.>. The overarching goal is to use this DRL model for real-time optimization of QoS, thereby introducing a novel approach aimed at enhancing network performance and adaptability. iRED is a pioneering P4-based algorithm that introduced the concept of disaggregated AQM in PDP hardware. Disaggregated AQM involves the segmentation of AQM operations into distinct blocks, specifically Ingress and Egress, within the PDP architecture. Additionally, iRED achieves full compliance with the L4S framework (Low Latency, Low Loss, and Scalable Throughput)<cit.>. It accomplishes this by categorizing traffic as either Classic (subject to dropping) or Scalable [TCP Prague in L4S framework.] (marked with the ECN bit), thus ensuring fairness among various flows through a combined packet dropping and marking mechanism. Through the integration of INT, DRL, and the iRED framework, we introduce the paradigm of DESiRED (Dynamic, Enhanced, and Smart iRED). To our knowledge, DESiRED serves as the leading P4-based implementation of a dynamic AQM.
This advancement combines the cutting-edge capabilities of fine-grained network measurements enabled by INT with the cognitive capabilities provided by the Deep Q-Network (DQN), thereby representing an integrated embodiment of state-of-the-art progress in the field of AQM.We undertake a comprehensive evaluation of DESiRED within a realistic testbed environment, focusing on the delivery of an MPEG-DASH (Dynamic Adaptive Streaming over HTTP) service <cit.>. Our experiments involve the provision of diverse video catalogs to video clients traversing a programmable network. Fine-grained INT measurements, collected at line rate in the data plane, are utilized to inform the DRL mechanism in the control plane. The DRL mechanism guides the agent's actions, dynamically adjusting the target delay to optimize the QoS for the DASH service. This forms a Smart Control Closed Loop, as depicted in Fig. <ref>Our empirical findings elucidate that DESiRED wields an impact, with the potential to alleviate video stall occurrences by a factor of up to 90x. Moreover, the enhancement in the QoS within the MPEG-DASH framework is evident, as measured by an augmentation of up to 42x in terms of Frames per Second (FPS), underscoring the considerable efficacy of DESiRED in elevating the video streaming experience. In summary, the main contributions of this work are:* We design and implement DESiRED, which is a smart Closed Control Loop that unifies the state of the art in network telemetry (INT), Deep Reinforcement Learning (DQN), and congestion control in-network (AQM).* We conduct an evaluation of the DESiRED algorithm within the context of a DASH service. This entails the practical implementation of DESiRED within a real-setup DASH environment, followed by a systematic evaluation of its performance and effectiveness.* We have created and made publicly available datasets used throughout our experiments that encompass network and application data, collectively characterizing the complexities of an adaptive video service.The remainder of the paper is organized as follows. In Section 2 we describe INT and DRL fundamental concepts. Additionally, we detail DESiRED, describing the main components implemented in the P4 language (data plane) and the DRL integration (control plane) in Section 3. In Section 4, the experiments and evaluation are presented, including a brief view of the testbed and the workloads used. Results and discussion are detailed in Section 5. Some Lessons learned are given in Section 6. Finally, the conclusions are depicted in Section 7.§ BACKGROUND In this section, we expound upon the foundational principles that underpin the functionality of DESiRED. Sub-section <ref> provides a concise elucidation of the programmable data plane and In-band Network Telemetry. Furthermore, Section <ref> delves into the principal facets of Deep Reinforcement Learning. §.§ In-band Network TelemetryRecent progress in programmable hardware and the utilization of the P4 language <cit.> have enabled network devices to autonomously report the network's state, eliminating the need for direct control plane intervention <cit.>. In this scenario, packets incorporate telemetry instructions within their header fields, facilitating the fine-grained collection and recording of network data. The telemetry instructions are defined in the INT data plane specification <cit.>. Figure <ref> illustrates the operation of INT within an arbitrary network. 
The network comprises four end systems, namely H1, H2, H3, and H4, along with four nodes equipped with P4 and INT support, denoted as S1, S2, S3, and S4. Each network node possesses a set of metadata, represented by orange (S1), magenta (S2), green (S3), and blue (S4) rectangles. This metadata contains information specific to each node, such as Node ID, Ingress Port, Egress Spec, Egress Port, Ingress Global Timestamp, Egress Global Timestamp, Enqueue Timestamp, Enqueue Queue Depth, Dequeue Timedelta, and Dequeue Queue Depth, as specified in the V1Model architecture.In Figure <ref>, there are two distinct flows depicted: one represented by red packets and the other by black packets. The red flow is required to adhere to the prescribed network path f1=H1, S1, S3, S4, H4, while the black flow must traverse the designated path f2=H1, S1, S2, H2.At each network hop along these paths, the data plane of the network devices employs telemetry instructions to facilitate the collection and inclusion of metadata within the packets as they traverse each node. This process is iteratively performed throughout the journey, starting from the first node after the source and concluding at the last node before reaching the destination. Upon reaching the destination node, the metadata is extracted from the packet and subsequently relayed to the monitoring system. The original packet is then directed to its final destination.In addition to the modes delineated in the INT specification, alternative approaches exist for collecting metadata within programmable networks. One such approach involves the utilization of an “exclusive telemetry flow" to monitor the network's state, which, in this work, is referred to as “Out-of-band Network Telemetry" (ONT).In the ONT scenario, dedicated probe packets are employed to gather metadata, eliminating the need for any modifications to the data packets associated with the services operating within the network. The primary advantage of this approach lies in its ability to maintain the integrity of application traffic, as it traverses the programmable network without undergoing alterations, thereby mitigating issues related to packet growth, such as fragmentation.Conversely, the use of an exclusive telemetry flow introduces additional overhead to the overall network traffic. This is due to the necessity of having a dedicated monitoring flow ONT for each service running within the network.One of the primary advantages of employing telemetry lies in the exceptional level of granularity it offers. Every individual packet traversing the network carries pertinent information directly to the monitoring system at the line rate. This level of granularity aligns with the perspective presented in <cit.>, wherein it is recognized that a substantial volume of data can prove immensely valuable for Deep Reinforcement Learning (DRL) algorithms, which have a voracious appetite for information.§.§ Deep Reinforcement Learning Reinforcement Learning (RL) is an Artificial Intelligence (AI) learning paradigm centered on actions and rewards. Unlike the conventional supervised and unsupervised learning approaches, where models learn from predefined dataset features, an RL learner, also known as an agent, interacts with an environment and receives rewards or penalties based on the actions it takes.The model depicted in Figure <ref> illustrates the formalization of a sequential decision-making strategy known as a Markov Decision Process (MDP). 
In this framework, the agent continually interacts with the environment by executing actions (A) at specific time steps (t) and observing new states (S_t+1) resulting from these actions. After each interaction, a reward value (R_t+1) is generated to assess the correctness of the action, with the aim of maximizing cumulative rewards throughout the agent's training process <cit.>.In this context, the agent learns to maximize its cumulative rewards by determining a policy[A policy defines the agent's strategy for associating actions with states. Such strategies can be stochastic, specifying probabilities for each action that could be taken when a particular state is observed <cit.>.] that optimizes an action-value function, denoted as Q. This function estimates the quality of actions taken by the agent in specific states.Formally, an optimal action-value function, denoted as q_*, can be defined using the Bellman optimality equation <cit.>: q_*(s,a) = 𝔼[R_t+1 + γ a'maxq_*(S_t+1,a')|S_t = s, A_t = a] Intuitively, the Bellman optimality equation suggests that the optimal Q-value for any state-action pair (q_*(s,a)) is expected to be the immediate reward obtained after taking an action (a) in a given state (s) at time step (t), augmented by the maximum expected return achievable by adhering to an optimal policy for subsequent state-action pairs, which are discounted[This discounting approach allows the agent to prioritize actions that maximize the cumulative rewards it receives in the future. The discount rate, denoted as γ, determines the present value of future rewards. For instance, a reward received k time steps in the future is worth only γ^k-1 times what it would be worth if it were received immediately <cit.>.] by γ <cit.>.Hence, the resolution of the Bellman optimality equation provides a pathway to ascertain an optimal policy, offering a potential solution to a RL problem. Nevertheless, it is imperative to acknowledge that, in practice, this solution is seldom feasible. It resembles an exhaustive search that requires consideration of all possible scenarios, involving the computation of occurrence probabilities and expected reward returns. Additionally, it relies on three assumptions that are often challenged when implementing solutions for real-world problems:a) The accurate knowledge of environmental dynamics. b) Sufficient computational resources to complete the computational requirements of the solution.c) The adherence to the Markov property.In light of these challenges, the only pragmatic approach to tackle the Bellman optimality equation is to seek a policy approximation derived from actual experiences, where transitions involving state s, action a, and reward r are considered, as opposed to relying solely on the expected outcomes <cit.>.When dealing with scenarios characterized by a well-defined set of finite states, it becomes feasible to model an approximation of the Bellman optimality equation using tabular data structures. Each entry within these structures corresponds to a state-action pair. In this context, the Q-Learning algorithm, introduced by Watkins <cit.>, marked a significant milestone in the early stages of the RL paradigm.Q-Learning is noteworthy for its direct approximation of the Bellman optimality equation, irrespective of the policy in use. It simplified the analysis of agent algorithms and facilitated early convergence proofs <cit.>. 
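Since solving Eq. 1 exactly is rarely feasible, a toy example can still make the recursion concrete. The Python sketch below runs Q-value iteration on a made-up two-state MDP (the transition table and rewards are invented purely for illustration and are not part of DESiRED); it simply restates the Bellman optimality backup.

```python
import numpy as np

# Toy MDP: 2 states x 2 actions; P[s][a] = list of (prob, next_state, reward).
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9
Q = np.zeros((2, 2))

for _ in range(200):          # repeatedly apply the Bellman optimality backup
    for s in P:
        for a in P[s]:
            Q[s, a] = sum(p * (r + gamma * Q[s2].max()) for p, s2, r in P[s][a])

print(Q.round(2))             # converges to the optimal action-value function q*
```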
The Watkins Q-Learning algorithm is formally defined as follows: Q(S_t,A_t) = (1 - α)Q(S_t,A_t)+α[R_t+1 +γ amax Q(S_t+1,a)] where the approximated optimal Q-value is calculated by blending the current Q-value, denoted as Q(S_t, A_t), with the target temporal difference. The target temporal difference represents the reward R_t+1 obtained when transitioning to the subsequent state S_t+1 after taking action a. This value is then weighted by the discount factor γ and modulated by a learning rate α (0 ≤α < 1) <cit.>.However, it is essential to acknowledge that Q-learning operates under the assumption of a tabular representation for state-action pairs and approximates the optimal Q-value in a linear fashion. In practice, real-world applications often exhibit complexity, characterized by non-linear relationships and encompassing high-dimensional state spaces. Such complexities render the storage of comprehensive tables unfeasible <cit.>.Networking management serves as a compelling example of such scenarios, where modern Tofino switches can process INT packets at a nanosecond timescale.To address these limitations, Mnih et al. leveraged the Q-Learning algorithm by integrating it with a Deep Neural Network (DNN) to approximate the optimal Q-value,a methodology known as Deep Q-Network (DQN). In their seminal work <cit.>, the authors showcased the effectiveness of this approach by training and evaluating the DQN on an Atari 2600 emulator. Impressively, the DQN-based agents achieved performance levels surpassing those of human players in 49 distinct games, relying solely on pixel inputs and game scores for guidance.Of note, the authors maintained a consistent algorithm, DNN architecture, and hyperparameters across all games, eliminating the need for game-specific feature engineering. Thus, DQN not only outperformed agents employing linear function approximation but also demonstrated the capacity to attain or exceed human-competitive skills across diverse gaming environments. This pioneering work exemplified the synergy between RL and contemporary Deep Learning (DL) techniques, signifying a significant advancement in the state of artificial intelligence. It underscored the potential of RL when combined with modern DL methods, yielding remarkable outcomes <cit.>. In line with this, we present an RL-based approach designed to dynamically fine-tune the iRED target delay to an optimal value during video streaming, named DESiRED. This process is facilitated by an agent built on the foundation of DQN. In the subsequent Subsection, we will delve into the constituent elements that constitute this innovative approach.§.§.§ Deep Q-Network workflow The DQN architecture, as proposed by Mnih et al. <cit.>, consists of a Deep Convolutional Neural Network (CNN) designed to receive emulated game frames as input and subsequently generate predicted Q-values for each potential action within the given input state. To facilitate such predictions, Mnih et al. introduced two critical modifications to the conventional Q-Learning algorithm. These alterations were essential to mitigate instabilities inherent in using Deep Neural Networks (DNNs) for Q-value approximation <cit.>.The first modification entails the incorporation of a biologically inspired mechanism referred to as 'experience replay.' In this approach, the agent's experiences are stored as tuples containing the current state (S_t), the action taken (A_t), the reward received (R_t+1), and the subsequent state (S_t+1). 
Periodically, after reaching a predefined replay memory limit, a mini-batch of these experiences is uniformly sampled for training the DNN <cit.>.This approach plays a pivotal role in mitigating the emergence of correlations within the observed state space. By decoupling the dependence on successive experiences, it effectively reduces the variance in the parameters of the DNN. Fig. <ref> illustrates the interaction between a DQN agent and an environment, taking into account the experience replay mechanism. Within this context, the agent selects actions following an ϵ-greedy rule.Specifically, when employing this rule, the agent chooses between two strategies: “exploitation" and “exploration". A “greedy action" involves selecting an action from the action space based on the maximum estimated Q-value. Conversely, a “non-greedy action" entails the random selection of an action. Exploitation, represented by the selection of a greedy action, aims to exploit the current knowledge to maximize immediate rewards. In contrast, exploration, represented by non-greedy actions, focuses on traversing the action space to maximize cumulative rewards in the long run <cit.>.In RL, achieving a balanced trade-off between exploration and exploitation is paramount. However, it's important to acknowledge that, at a single time step, it's not possible for an agent to simultaneously exploit and explore actions. To reconcile these opposing strategies, a solution is to allow the agent to primarily act greedily, favoring exploitation, while intermittently choosing an action from the action space at random, independent of the estimated Q-values. This random selection is determined by an exponentially decreasing probability parameter ϵ. Consequently, as the time steps progress, the probability of selecting an optimal action gradually converges to a value greater than 1-ϵ, approaching near certainty in favor of exploitation as the agent refines its strategy over time <cit.>.A second significant contribution introduced by Mnih et al. <cit.>, relative to classical Q-Learning, pertains to the learning stage of the DQN. In this stage, a separate network, referred to as the 'target network,' is employed to estimate target values for the Q-network, often referred to as the 'online network.' This modification enhances the algorithm's stability compared to using a single online network. The rationale behind this improvement lies in the fact that updating the parameters of the online network for the current state-action pair can inadvertently influence the Q-values of the next state, potentially leading to oscillations or even policy divergence.To address this challenge, the online network's parameters are periodically cloned to the target network at intervals of every C time steps. Consequently, the target network's predictions serve as target values for the online network during the subsequent C time steps. This introduces a delay in updating the Q-values between the current and next states, effectively reducing the likelihood of policy oscillations or divergence <cit.>.Figure <ref> illustrates the DQN learning workflow, incorporating the approach described above. A concise introduction to the functionality of DQN is presented in Subsection <ref>. Furthermore, for a detailed exposition on the implementation of DQN within the scope of this research, please refer to Subsection <ref>.§ DESIRED - DYNAMIC, ENHANCED, AND SMART IRED. 
DESiRED, herein referred to as an advanced iteration of iRED, which was initially introduced in the work of <cit.>, constitutes a notable enhancement within the realm of intelligent network control systems. Specifically, it introduces a novel capability wherein the intelligent control plane harnesses the power of DRL to dynamically optimize and fine-tune the target delay parameters. In alignment with its predecessor, iRED, DESiRED remains faithful to the fundamental concept of disaggregated AQM. In this paradigm, AQM operations are compartmentalized into discrete functional blocks within the architecture.The concept of disaggregation emerges from the imperative to expedite packet discarding processes. In the pursuit of resource efficiency, we contend that the optimal location for packet discarding is the Ingress block. However, a noteworthy challenge arises as the vital metadata pertaining to queue delay (or queue depth), which constitutes the primary information utilized as input for the AQM algorithm to determine packet discarding decisions, is captured by the Traffic Manager and traditionally accessible within the Egress block. Within this context, DESiRED leverages a congestion notification mechanism, designed to incur minimal overhead, in order to relay the imperative to execute packet discarding actions to the Ingress block.As illustrated in Figure <ref>, the decision-making process within DESiRED takes place at the Egress block, while the corresponding actions are subsequently executed at the Ingress block. The following Subsections will elucidate the functioning of DESiRED, with a distinct focus on data plane and control plane operations.§.§ Data plane operation (AQM) To provide a more comprehensive understanding, we will commence our description of DESiRED's operation from the data plane perspective, focusing initially on the Egress block. Our exploration will initiate with the drop or marking decision process, a critical component housed within the decision module. At the Egress, iRED computes the Exponentially Weighted Mean Average (EWMA) of the queue delay (or queue depth[The programmer can choose whether to use DESiRED's delay-based or depth-based approach.]) for each individual packet, entirely within the data plane. The inherent absence of division and floating-point operations poses challenges in calculating average values within the data plane. To surmount this limitation, as applied in <cit.>, we employ an approximation method following Eq. <ref>: S_t = α· Y_t + (1 - α)· S_t-1 where S_t is the updated average queue delay, S_t-1 is the previous average queue delay and Y_t is the current queue delay. The constant α∈ [0,1] determines how much the current value influences the average. We use α=0.5, such multiplication can be replaced by bit shifts operations. The output of the EWMA will represent the average queue delay over time. When the observed value, representing the average queue delay, falls within the range (minimum and maximum thresholds) configured by the DRL mechanism, DESiRED proceeds to calculate the drop probability in accordance with the RED approach. Simultaneously, it employs a coupling mechanism to generate various levels of congestion signals, which may entail either packet drops or packet marking (ECN bit).Once the DESiRED decision module (Egress) has detected that a packet must be dropped, DESiRED must notify the action module (Ingress) to perform this action. 
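Before describing how that notification reaches the Ingress, note that the choice α = 0.5 in Eq. 3 is what makes the average feasible in the data plane: the multiplication reduces to bit shifts on integers. The Python restatement below only illustrates the arithmetic that the P4 code performs; the variable names and sample delays are ours.

```python
def ewma_update(prev_avg: int, current_delay: int) -> int:
    """S_t = alpha*Y_t + (1 - alpha)*S_{t-1} with alpha = 0.5, done with shifts only.

    prev_avg and current_delay are integer queue delays (e.g., microseconds);
    x >> 1 is integer division by two, so no floating point or division is needed."""
    return (current_delay >> 1) + (prev_avg >> 1)

avg = 0
for delay in (120, 80, 200, 60):        # sample per-packet queue delays
    avg = ewma_update(avg, delay)
print(avg)
```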
The first challenge in the PDP context is to achieve communication between the Ingress and Egress blocks with minimum overhead. Obviously, DESiRED will not drop the packet that generated the discard decision, but a future packet <cit.>. Discarding future packets is one of the main features differentiating DESiRED from other state-of-the-art AQMs. For the congestion notification to reach the Ingress block, DESiRED creates a congestion notification packet (clone packet) and sends it through an internal recirculation port to reach the Ingress block. The action module, situated in the Ingress block, maintains the congestion state table on a per-port/queue basis and activates the drop flag (ON) for the corresponding port/queue. The current packet is forwarded to the next hop without introducing any additional delay. Subsequently, future packets intended for the same output port/queue, where the drop flag is set to ON, will be dropped, and the drop flag will be reset to OFF. This mechanism, facilitated by DESiRED, ensures that the Ingress pipeline can proactively mitigate imminent queue congestion. §.§ Control plane operation (DRL)As mentioned earlier, DESiRED tackles the issue of fixed target delay through the implementation of an intelligent control plane, denoted by the orange box in Figure <ref>. This intelligent control mechanism is responsible for updating the register that maintains the dynamic target delay threshold, as determined by the DRL decision process. Now, let us provide a comprehensive account of the operational intricacies of the intelligent control plane, elucidating the inputs and outputs in detail.The control plane operates by receiving data from two pivotal sources: the network state and the application state. In this particular implementation, fine-grained INT measurements constitute the input layer for the Deep Q-Network from the network state. The DQN's output layer is responsible for generating the agent's actions. Concurrently, the application state encompasses DASH metrics, including parameters such as FPS and the Local Buffer Occupancy (LBO) of the video player, which play a crucial role in computing the agent's reward. Fig. <ref> illustrates this Control Loop. INT measurements comprise observations that effectively depict the network's state with remarkable granularity, affording an unprecedented perspective on the extent of congestion. These measurements are acquired within the programmable data plane and subsequently routed to the intelligent control plane. Within the control plane, they are aggregated into compact dataframes, which collectively form what we term the “observation space." In the context of this study, the term observation space refers to the temporal window within which the intelligent control plane conducts an integrated analysis of both the network's state and the application's behavior.For each received observation space, the DQN incorporates INT measurements as an input layer. Following neural network processing (refining its internal weights), the DQN generates an action, which is manifested as an activation in one of the neurons within the output layer. In this study, the possible actions include: 1) increasing the target delay; 2) decreasing the target delay; and 3) maintaining the current state (i.e., taking no action).Subsequently, the control plane retains a record of the executed action and enters a state of anticipation for the forthcoming observation space. 
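To make the actuation step concrete, the sketch below maps the three possible actions onto an update of the target-delay value that the control plane writes back into the switches. The step sizes and bounds are placeholders (the text later notes that the increase step is twice the decrease step), and in DESiRED this value is written to a P4 register on every switch.

```python
INCREASE_STEP_MS = 2.0   # placeholder: increase step is larger than the decrease step
DECREASE_STEP_MS = 1.0
MIN_TARGET_MS, MAX_TARGET_MS = 1.0, 100.0   # assumed bounds for illustration

def apply_action(target_delay_ms: float, action: str) -> float:
    """Translate the agent's action into a new target delay for the AQM."""
    if action == "increase":
        target_delay_ms += INCREASE_STEP_MS
    elif action == "decrease":
        target_delay_ms -= DECREASE_STEP_MS
    # action == "keep": leave the target delay untouched
    return min(max(target_delay_ms, MIN_TARGET_MS), MAX_TARGET_MS)

print(apply_action(20.0, "increase"))   # 22.0
```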
Upon the arrival of data from the subsequent observation space, the DRL mechanism evaluates whether the undertaken action has led to the optimization of DASH QoS, particularly with regard to enhancements in FPS and LBO metrics. In the event of a positive outcome, the agent is rewarded, whereas in cases of QoS deterioration, the agent incurs a penalty.Leveraging insights from the dynamic network traffic patterns, DESiRED demonstrates a remarkable capability to adapt with precision to prevailing congestion conditions. This adaptability facilitates a continuous enhancement in the quality of video services offered.It is imperative to elucidate that DESiRED is inherently application-agnostic, signifying its capacity to accommodate diverse reward policies tailored to evaluate a wide array of service metrics. This flexibility extends to metrics such as the response time of a web server or even the frame rate in video playback, underscoring its versatility across various service domains.§ EVALUATION In this section, we provide a comprehensive overview of all the components utilized for the thorough evaluation of our proposal. This encompasses a detailed exposition of the research methodology, an in-depth portrayal of the experimental environment and its configuration, the load pattern employed, the DRL mechanism implemented, the metrics and measurements used for comprehensive analysis. §.§ Research methodology Our methodology is rooted in experimental research aimed at evaluating the effectiveness of the DRL mechanism within DESiRED. Specifically, our objective is to ascertain whether this mechanism can optimize the QoS for MPEG-DASH services by dynamically adapting the target delay under conditions characterized by both stationary and non-stationary loads within a Content Delivery Network (CDN) environment.In this experiment, our aim is to conduct a comprehensive evaluation of DESiRED in comparison to iRED, where iRED employs fixed target delay settings of 5ms, 20ms, 50ms, and 100ms. We evaluate these approaches under both stationary (low and high) and non-stationary (sinusoidal) load conditions. To mitigate potential biases, each round of the investigation, spanning one hour, was repeated ten times for each approach, resulting in a cumulative duration of over fifty hours across independent runs. Furthermore, to gauge DESiRED's robustness, we aggregated the DRL agents derived from all preceding executions by employing an ensemble approach. This involved combining the model parameters through an exponentially decaying running average, as described by Eq. <ref> <cit.>: θ̂^(t) = αθ̂^(t-1) + (1 - α) θ^(t) where θ represents a parameter from the Q-network; t the gradient descent iterations; θ̂^(t) the average from such parameters (1/tΣ_i θ^(i)); and α the exponential decaying factor (defined as 2.0).We evaluate the application's performance from the client-side perspective, focusing on three key metrics: FPS LBO, and Rebuffering Rate (Starvation) as measured within the video player. Higher values for FPS and LBO correspond to improved QoS, while for Rebuffering Rate, a lower value signifies enhanced QoS.In addition to evaluating application quality metrics, we also scrutinize the performance metrics of the DRL agent, encompassing Loss function and Rewards. §.§ Environment description The experiment was constructed within a realistic testbed, adopting an Infrastructure as Code (IaC) approach, and implemented using Vagrant, Virtualbox (version 6.1.28), and Ansible (version 2.10.8). 
In this setup, each infrastructure component is represented by an isolated virtual machine, interlinked through a P4 programmable data plane network, as visually depicted in Figure <ref>. Each switch in the experiment was equipped with both the iRED and DESiRED approaches. On the control plane side, the DRL engine was implemented, comprising approximately 750 lines of code and utilizing Tensorflow as its backend framework. The CDN was deployed to facilitate an MPEG-DASH service, featuring live streaming of a soccer game and a playlist housing the ten most frequently accessed YouTube videos. Load management was executed using WAVE <cit.> [https://github.com/ifpb/wave], a versatile load generator that orchestrates instances of an application over time.This infrastructure was hosted on a bare-metal server, namely the Dell EMC PowerEdge R720, equipped with 2 Intel Xeon processors (E5-2630 v2, 2.60GHz) boasting 6 cores per socket (amounting to 24 virtual CPUs), 48GB of RAM, a 2TB HDD, and running the Ubuntu 20.04.6 LTS operating system. All pertinent artifacts and resources can be accessed within the repository available at our GitHub[https://github.com/dcomp-leris/DESiRED.].The MPEG-DASH Server serves video content using the DASH standard to both the Video Client and the Load Generator. It offers various configurations, as detailed in Table <ref>, with each configuration having a chunk segment size of 4 seconds. The Video Client dynamically selects and transitions between these configurations based on network traffic conditions and the adaptation logic embedded within the video player.The infrastructure is equipped with Apache version 2 as the web server, FFmpeg (version 2.8.17) for video encoding, and MP4box (version 0.5.2) for creating the MPEG-DASH manifest files, ensuring seamless video streaming.The Video Client utilizes DASH.js, a contemporary DASH reference player equipped with an Adaptive Bitrate Streaming (ABR) algorithm. It employs this ABR algorithm to consume the video stream of the soccer game, with the TCP New Reno congestion control algorithm managing network congestion.The Load Generator is responsible for introducing network noise, operating the WAVE framework with a variety of loads, including both stationary and non-stationary scenarios. It dynamically adjusts the number of video player instances over time to simulate changing network conditions. Further elaboration on this aspect can be found in Subsection <ref>.All the switches utilized in this experiment were implemented within the BMv2 software switch environment, incorporating the respective P4 code for both iRED (fixed target delay) and DESiRED (dynamic target delay with DRL) approaches. Across all approaches, telemetry instructions were meticulously programmed to append telemetry metadata to all probe packets. Notably, this experiment follows the out-of-band (ONT) approach, wherein dedicated ONT probes are dispatched from the DASH server to the Video Client. Consequently, no modifications are made to data packets to accommodate telemetry metadata. The specifics of the telemetry metadata, consisting of 32 bytes, gathered at each node within this experiment, are elaborated upon in Table <ref>. §.§ Load PatternThe Load Generator, powered by WAVE, orchestrates the instances of video clients over time based on input parameters described by a mathematical function that defines the load pattern. In its current iteration, WAVE supports constant, sinusoidal, and flashcrowd load patterns. 
It initiates and concludes video player processes, generating network load through genuine video requests (real traffic) that flow from the video player to the MPEG-DASH Server.In this study, our aim is to evaluate DESiRED under various load conditions, aiming to simulate diverse network state scenarios. To achieve this, we employ two distinct categories of load patterns: stationary and non-stationary. For stationary loads, which remain constant throughout the experiment, we classify them into two types: low and high. In this context, a low load is characterized by the presence of ten video client instances operating concurrently throughout the duration of the experiment, as depicted in Figure <ref>. Conversely, a high load is characterized by the simultaneous operation of forty video player instances, representing a high-intensity load, as illustrated in Figure <ref>. Under low load conditions, it is anticipated that the target delay will be attained relatively infrequently, given the shorter queuing delays that typically prevail. In this scenario, both AQM strategies, whether employing a fixed or dynamic target delay, are likely to yield comparable results in terms of QoS.However, when the network experiences predominantly high load, the surge in traffic volume can lead to an increase in queue delay, thereby prompting AQM strategies to respond in accordance with the specified target delay, whether fixed or dynamic. In such instances, the dynamic adaptability of DESiRED's target delay is expected to confer advantages in terms of QoS compared to the rigid, fixed target delay approach employed by iRED. This dynamicity enables DESiRED to better accommodate and optimize QoS in the face of fluctuating and demanding network conditions.It is indeed unrealistic to assume that network loads will always remain stationary or static. Consequently, in the second phase of our evaluation, we undertook a more comprehensive evaluation under a realistic load scenario, one that mirrors the dynamic nature of real-world network environments. Our objective was to evaluate non-stationary load patterns, encompassing both peak (high load) and off-peak (low load) periods within a single experiment. To achieve this, we employed a sinusoidal periodic load pattern characterized by the sinusoidal function detailed in Equation <ref>, where A represents the amplitude, F denotes the frequency, and λ signifies the phase in radians. The specific input parameters utilized for this evaluation were: A = 15, F = 1, and λ = 25, culminating in the load pattern illustrated in Figure <ref>. This approach captures the fluctuations in network load more realistically, offering a dynamic and challenging environment for our evaluation.f(y) = Asin(F + λ) §.§ Deep Reinforcement Learning mechanism To accomplish the objectives outlined in this paper, we tailored the DQN architecture and agent-environment workflow to align with the distinctive characteristics of the DESiRED environment, as elucidated in Subsection <ref>. In doing so, we designed the DQN using a Multi-layer Perceptron (MLP) architecture, which is well-suited for handling the tabular nature of network telemetry metadata. The MLP network adopted in our approach consists of an input layer featuring units corresponding to each INT feature, two hidden layers each comprising 24 neurons, and an output layer containing units for each possible action that the agent can undertake, as depicted in Figure <ref>. 
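A Keras sketch of the MLP just described, with one input unit per INT feature, two hidden layers of 24 neurons, and one output unit per action, is shown below; the activation functions, optimizer, and learning rate are assumptions made for illustration, since the full hyperparameter set is listed in the table referenced above.

```python
import tensorflow as tf

NUM_INT_FEATURES = 10   # one input unit per INT feature (placeholder count)
NUM_ACTIONS = 3         # increase, decrease, or keep the target delay

def build_q_network() -> tf.keras.Model:
    """MLP with two hidden layers of 24 neurons, as described in the text."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(NUM_INT_FEATURES,)),
        tf.keras.layers.Dense(24, activation="relu"),
        tf.keras.layers.Dense(24, activation="relu"),
        tf.keras.layers.Dense(NUM_ACTIONS, activation="linear"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    return model

online_net = build_q_network()
target_net = build_q_network()
target_net.set_weights(online_net.get_weights())   # clone online -> target every C steps
```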
Importantly, both the online and target networks share this identical architecture. Table <ref> provides a detailed breakdown of the hyperparameters utilized for training DESiRED.To facilitate the desired agent-environment interaction, we formulated the agent's behavior as an MDP with the video chunk size serving as the discrete time steps. In this framework, DESiRED operates within the environment, dynamically adjusting the target delay in all switches at 4-second intervals, synchronized with the video chunk size. A comprehensive discussion regarding the strategy of simultaneous actuation in all switches versus individual actuation in each switch is presented in Section <ref>. The agent's action space is delineated in Table <ref>, where it is evident that the action to increase the target delay brings about a modification that is proportionally twice as substantial as the decrease action. This choice was made to prompt DESiRED to respond promptly to transient congestion while retaining the flexibility to decrease the target delay when necessary, mirroring the rationale discussed in <cit.>.It's important to highlight that the calculation of rewards does not occur immediately after an action is taken in the current state. This delay in reward calculation is attributed to the fact that the effects of the agent's action do not manifest instantly, primarily due to the inherent control mechanisms incorporated within TCP and ABR systems, as detailed in <cit.>. Consequently, the computation of rewards is deferred until the subsequent state's observation. In this context, the agent relies on network status data derived from INT measurements to form its states, selects actions, and is subsequently rewarded based on its ability to optimize the video's QoS, which is characterized by metrics such as FPS and LBO.Indeed, the intrinsic correlation between metrics such as LBO and FPS presents a challenge when devising a reward policy. As the LBO increases, there is a tendency for the FPS to also increase. However, this relationship is not always straightforward due to the complex dynamics of network congestion and video streaming.To calculate a reward (R_t+1) for a specific action (A_t), we adopt a strategy that first evaluates whether the LBO in the next state (LBO_t+1) improves compared to the LBO observed when the action was executed (LBO_t). Subsequently, a reward score is assigned based on the effects of this action on both the next state's LBO and FPS (FPS_t+1). Consequently, the agent receives maximum reward whenever the action taken leads to the maximization of LBO_t+1, and is penalized in an inversely proportional manner if the video experiences stalls. The algorithmic logic for calculating rewards is detailed in Algorithm <ref>. This approach ensures that the agent's reward is contingent on its capacity to optimize both LBO and FPS, balancing the trade-offs inherent to video streaming in dynamic network conditions.These actions were executed according to the ϵ-greedy strategy as elucidated in Subsection <ref>. To implement this strategy, we established initial and final probabilities for taking random actions, specified the number of decaying steps, and defined an exponential decay factor (as outlined in Table <ref>). In this scheme, ϵ commences its linear decrease over a span of 250 time steps to facilitate exploration. Subsequently, the probability of selecting random actions is exponentially reduced, gradually transitioning to a minimal value to emphasize exploitation over exploration. 
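As an illustration of the schedule just described, the sketch below combines a linear decrease over the first 250 decision steps with a subsequent exponential decay toward a small floor. Only the 250 decaying steps come from the text; the initial, intermediate, and minimum probabilities and the decay factor are placeholder assumptions.

def epsilon_at(step, eps_start=1.0, eps_linear_end=0.5, eps_min=0.01,
               linear_steps=250, decay=0.99):
    # eps_start, eps_linear_end, eps_min and decay are illustrative assumptions.
    if step < linear_steps:
        # Linear decrease during the exploration phase (first 250 actions).
        frac = step / linear_steps
        return eps_start - (eps_start - eps_linear_end) * frac
    # Exponential reduction afterwards, favouring exploitation.
    return max(eps_min, eps_linear_end * decay ** (step - linear_steps))

At every observation, a random action is then taken with probability epsilon_at(step), and the greedy (highest Q-value) action otherwise.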
This strategy allows the agent to strike a balance between exploring new actions and exploiting its existing knowledge as it interacts with the environment.Taking into consideration the agent's action frequency of once every 4 seconds and the requirement for 250 iterations to initiate the exponential decay of ϵ, the exploration phase is expected to persist for approximately 17 minutes (equivalent to 1000 seconds). In tandem, the experience replay memory buffer necessitates a minimum of 100 samples to facilitate the online network parameter updates (as indicated in Table <ref>. Since experiences resulting from the agent-environment interaction are stored every 8 seconds, it would take approximately 13 minutes (or 800 seconds) for this condition to be met. Consequently, the online network undergoes an update each time a new experience is stored, as illustrated in Figures <ref> and <ref>.In the case of the non-stationary load, it follows a trajectory of 15 minutes to reach its peak, maintains a plateau for an additional 15 minutes, and subsequently begins to decline. During this period, the agent explores the action space during the ascending phase of the sinusoidal curve and exploits these actions during the plateau and descending phases. Consequently, when the exploitation stage commences, the agent should have already gleaned insights from past experiences, encompassing both low and high load scenarios. This enables the agent to adapt and respond effectively to the fluctuating network conditions. §.§ Metrics and Measurements On the video client side, we evaluate the QoS by monitoring key metrics, including: * FPS (Frames Per Second): This metric quantifies the number of frames displayed per second on the screen, reflecting the smoothness of the video playback.* LBO (Local Buffer Occupancy): LBO measures the remaining time, in seconds, for frames stored in the player's local buffer. It provides insights into the buffer's capacity to absorb network fluctuations and maintain continuous playback. From these primary metrics, we derive additional insights, including: * Resolution Distribution: We analyze the percentage of video content played at different resolutions (Maximum, Medium, and Minimum) to assess the adaptive streaming capabilities.* Rebuffering Rate: This metric represents the percentage of time during which the video experiences stalls or freezes on the screen, indicating interruptions in playback. To facilitate these measurements, we configure the DASH.js player to log these metrics on a per-second basis. Within the DRL mechanism, we focus on evaluating the performance metrics of the DQN:* Loss: This metric is calculated as the Mean Squared Error (MSE) between the predicted q-values for the current and next states. It reflects the convergence and accuracy of the DQN's predictions.* Reward: Reward represents the cumulative rewards and penalties acquired throughout the experiment. It offers insights into the agent's performance in maximizing QoS. Additionally, we capture the action history for each experiment, documenting the agent's selected actions at each observation space (every 4 seconds). These metrics provide a comprehensive view of the agent's learning and adaptation throughout the experiment.§ RESULTS In this section, we will present the outcomes of our experiments, where we evaluate how DESiRED enhances the QoS of MPEG-DASH. 
We offer an in-depth analysis from the client-side perspective, showcasing the results and delving into instances where video QoS has benefited from the dynamic adjustments facilitated by DESiRED. Furthermore, we scrutinize the performance of the DRL model, presenting evidence that the agent has successfully learned the designated policy and has been able to identify an optimal target delay value that maximizes QoS across the range of experiments conducted. §.§ Stationary Loads The motivation behind evaluating performance under stationary loads stemmed from the necessity to ascertain whether the DRL agent would exhibit distinct learning behaviors during moments of low load (ample resources) and high load (congested resources) across separate executions.When the network load predominantly remains low, as illustrated in Figure <ref>, network resources are readily available. In such scenarios, there is minimal contention for the use of the queue, resulting in limited or no intervention from auxiliary congestion control mechanisms like AQM. This phenomenon can be observed from the perspective of the video client, as depicted in Figure <ref>.Figures <ref> and <ref> illustrate the Cumulative Distribution Function (CDF) of FPS and LBO under low load conditions. In Figure <ref>, we observe some variation in FPS for iRED with a fixed target delay of 5ms and 20ms/50ms. Conversely, in the cases of iRED with a fixed target delay of 100ms and DESiRED, the video client consistently played the video at 30 FPS throughout all experiments.Concerning LBO, as depicted in Figure <ref>, the results exhibit similar behavior across approaches, with the local buffer maintaining a near-full state for most of the evaluations, approximately 60 seconds. The only exception is the iRED with a 5ms fixed target delay. In this specific scenario, the use of such a small threshold value appears to have triggered a higher frequency of AQM actions. This, in turn, might have led to more frequent drops within a time interval of less than one Round-Trip Time (RTT), as discussed in <cit.>. Paradoxically, this increased AQM activity, rather than alleviating congestion, may have exacerbated the situation, demonstrating the potential for unintended side effects when setting overly aggressive congestion control thresholds.Conversely, when the network experiences predominantly high load conditions, as illustrated in Figure <ref>, the dynamics shift significantly. In such scenarios, all approaches employing fixed target delay mechanisms encounter challenges in maintaining acceptable MPEG-DASH QoS. DESiRED, on the other hand, manages to distinguish itself from the fixed target delay approaches, as evident in Figure <ref>.To gain a deeper understanding of these results, it's important to clarify some aspects of the Adaptive Bitrate (ABR) adaptation logic employed by the DASH.js player, as described in <cit.>. The adaptation logic used in DASH.js, known as DYNAMIC, employs two different algorithms at different stages of video playback. During instances when buffer levels (LBO) are low, such as startup and seek events, a straightforward THROUGHPUT algorithm (based on throughput) is utilized. Conversely, when buffer levels are high, the player switches to the BOLA algorithm <cit.>. This dynamic adaptation approach aims to optimize video streaming under varying network conditions, aligning the bitrate selection algorithm with the network's congestion state.DYNAMIC starts with THROUGHPUT until the buffer level reaches 10s or more. 
From this point on, DYNAMIC switches to BOLA which chooses a bitrate at least as high as the bitrate chosen by THROUGHPUT. DYNAMIC switches back to THROUGHPUT when the buffer level falls below 10s and BOLA chooses a bitrate lower than THROUGHPUT <cit.>.Indeed, from the perspective of the video player's adaptation logic, the LBO metric proves to be far more sensitive to variations in network buffer levels compared to FPS. It's important to note that changes in bitrate and FPS should only occur when the LBO drops below 10 seconds. Consequently, it is logical to aim for maintaining an LBO greater than 10 seconds for the majority of the time, as this instructs the ABR algorithm to select the highest-quality video levels.Figure <ref>, which pertains to LBO, contributes significantly to understanding why DESiRED achieves superior FPS levels, as indicated in Figure <ref>. In this context, it is plausible to surmise that fine-tuning the target delay has provided an advantage in terms of preserving a sufficient LBO during periods of severe network congestion. This, in turn, aids the ABR algorithm in making optimal bitrate and quality level selections, ultimately leading to improved video QoS.§.§ Non-stationary LoadRecognizing the dynamic nature of network traffic, we embarked on an evaluation under non-stationary load conditions. To achieve this, we leveraged the WAVE framework, which effectively managed the execution of video client instances over time, adhering to a mathematical model of sinusoidal periodic load as detailed in Subsection <ref>.The choice of a sinusoidal periodic load model holds significance because it encapsulates moments of congestion and resource relief in the network, particularly within router buffers, within a single execution. This approach allows us to evaluate our agent's performance in situations of both high congestion, where rapid adaptation is crucial, and congestion-free states where shared resources are not overwhelmed. In essence, our expectation is that the agent will learn distinct patterns that differentiate between these varying states.This evaluation under non-stationary load conditions provides valuable insights into how the agent responds to fluctuations in network congestion, thereby contributing to a more comprehensive understanding of its adaptability and effectiveness.The initial result we would like to present pertains to the actions taken by the agent (DESiRED) within the network environment. Figure <ref> provides an overview of the agent's actions throughout the experiment. Notably, there is an initial phase of random exploration (indicated by the vertical dashed red line) extending up to the first 250 observations. During this exploratory phase, the agent gathers data about the network state, which is used to populate the experience replay buffer (as outlined in Subsection <ref>).Subsequent to this initial exploration phase, the agent commences taking actions based on its learned knowledge, drawing from the experiences stored in the experience replay buffer. It's important to highlight that this buffer is continually updated, enabling the agent to learn from new states. 
Consequently, the agent can adapt to previously unseen states, a capability that proves particularly valuable in scenarios with non-stationary loads.This analysis of the agent's actions provides insights into its learning process and the transition from exploration to exploitation as it becomes more knowledgeable about the environment.Analyzing the agent's actions, it becomes apparent that during the initial phase of the experiment, characterized by an increase in network load, the agent frequently opted to increase the value of the target delay. Subsequently, as the load stabilized, the agent chose to take no action, potentially reducing the overhead of control plane operations in the data plane. Towards the end of the experiment, as the network load decreased, the agent shifted its strategy towards reducing the target delay.Having observed how these actions mirror the agent's interactions with the environment, we can now delve deeper into the model's performance. Figure <ref> provides an overview of the model's behavior, illustrated by the curves representing key performance metrics such as Loss and Reward.Figure <ref> illustrates the trajectory of Loss throughout the experiment. A decline in Loss signifies a lower MSE in predicting q-values. In essence, a low Loss value suggests that the model is effectively learning the policy by selecting actions that maximize rewards (QoS). During the initial phase of filling the experience replay buffer, Loss tends to be higher as actions are taken without the benefit of learning, effectively representing random actions. However, as the experience replay buffer becomes populated and the Q-network is updated based on these experiences, the agent begins to make more informed and assertive decisions. This shift towards lower Loss values reflects the agent's ability to learn and improve its policy.Turning our attention to Rewards (Figure <ref>), we observe that the model incurs some penalties during the initial phase of the experiment. This corresponds to the period when the agent transitions from an initial stationary state with no charges to reaching the peak of the sinusoidal load curve, marked by the presence of 40 instances of the video player simultaneously. Subsequently, as the agent refines its decision-making, it starts receiving rewards consistently. These rewards indicate that the agent effectively maximizes the QoS of MPEG-DASH, further underscoring the model's learning and adaptive capabilities.The insights gleaned from the agent's performance analysis are supported by the LBO and FPS metrics observed by the video client in response to DESiRED's actions, as outlined in Table <ref> and depicted in Figure <ref>. At this conjuncture, we aim to provide an interpretation of the results from the video client's perspective, highlighting how DESiRED outperformed other approaches considered in this study.An essential piece of data when evaluating the QoS of a video service is the resolution displayed on the screen by the video player. In this context, video consumers were offered three distinct quality levels:* Minimum Resolution: 426x240 pixels at 18 FPS.* Medium Resolution: 854x480 pixels at 24 FPS.* Maximum Resolution: 1280x720 pixels at 30 FPS.Even under challenging conditions, Table <ref> clearly demonstrates that DESiRED exhibits the highest percentage of video playback at the maximum resolution (58.07%) and the lowest rate of playback at the minimum resolution (31.71%). 
This finding aligns with the data presented in Figure <ref>.The discussion initiated in Subsection <ref> remains pertinent in this context as well. To reiterate, during periods of intense competition for shared resources, probabilistic drops facilitated by a target delay that adjusts in response to network load fluctuations have proven instrumental in maximizing the QoS of the video service. Once again, DESiRED effectively maintains a higher level of LBO filling, as depicted in Figure <ref>, ultimately contributing to superior FPS performance, as evidenced in Figure <ref>. Figure <ref> presents a boxplot representing the percentage of video stalls, which signifies moments when the video remains frozen without any frames being displayed. A cursory glance at this figure might lead to the incorrect assumption that a longer delay at a fixed target would yield better results. However, it's important to note that DESiRED imposes an upper limit of 70ms, which is lower than the value employed by iRED100ms, thereby dispelling this theory. In this context, we believe that DESiRED's fine-tuned approach enables it to determine the optimal target delay value for each network state during the sinusoidal load.§ LESSONS LEARNEDIn this section, we will provide insights and lessons learned from our research on applying RL to computer network problems. These insights may be valuable to the scientific community interested in using RL for similar applications.1) The network has an intrinsic dynamism in its behavior: In the realm of RL, the challenges posed by computer networks present an intriguing and multifaceted problem. In essence, an RL problem can be likened to a strategic game where an agent interacts with an environment, making decisions and receiving rewards, all within the framework of a MDP. In each of these interactions, often referred to as episodes, the agent engages in a continuous process of trial and error, striving to acquire a policy that maximizes its cumulative rewards.However, the application of RL models to computer network-related predicaments introduces a unique set of challenges. Contemporary networks, characterized by their dynamic nature and intricate traffic dynamics, necessitate a novel approach to the integration of RL. One of the central predicaments lies in adapting an RL agent to an environment that is in perpetual flux, a paradigm well-embodied by the ever-changing states of queues within network routers.Of notable significance is the realization that RL agents draw their learning from the experiences accumulated through their interactions with the environment. This very dependence on real-time experiences, further compounded by the interdependence between video player metrics and network conditions—themselves subject to the agent's actions—renders the use of static datasets for agent training impractical. In situations where a physical network infrastructure is not readily available, a promising alternative entails the utilization of a model capable of simulating authentic network behaviors, such as a Generative Adversarial Network (GAN) as proposed by Navidan et al. <cit.>.An additional layer of complexity is introduced through the modulation of network load patterns, a deliberate endeavor aimed at inducing the RL agent to adapt dynamically to both peak (high load) and trough (low load) network scenarios. In this pursuit, an array of network load settings was meticulously explored, encompassing flashcrowd and sinusoid patterns. 
Notably, the most compelling outcomes were achieved when employing sinusoidal patterns, characterized by single instances of peak and trough conditions within the duration of video streaming.Furthermore, the intricate calibration of parameters pertaining to the reward policy emerged as an arena of paramount importance within the implementation of the DESiRED system. It was during this phase that some of the most noteworthy findings and developments transpired. Remarkably, the strategic revision of the reward policy wielded a disproportionate influence over the observed outcomes, eclipsing the impact of various other elements intrinsic to the proposed approach. As such, it underscores the pivotal significance of meticulous and judicious reward policy design tailored to the specific problem domain.In summation, the application of RL methodologies to the domain of computer networks is an enthralling endeavor replete with challenges and opportunities. It necessitates an astute orchestration of dynamic simulations, judicious load modulation, and the nuanced refinement of reward policies—a multifaceted tapestry of considerations aimed at navigating the intricate terrain of modern network optimization.2) The core of solution design lies in the rewards policy: As previously mentioned, a significant portion of our modeling effort was dedicated to defining a rewards policy that aligns with our goal of maximizing the QoS in MPEG-DASH. Initially, we considered focusing solely on the FPS values during video playback. However, this approach proved insufficient due to the dynamic nature of the video player's adaptation logic, which considers factors like throughput and buffer level. As the agent's actions influence the target delay on network devices, we anticipated that FPS values would only exhibit noticeable changes following alterations in the LBO, as LBO is more responsive to network variations. Consequently, we opted to construct our rewards policy, with a primary emphasis on evaluating LBO levels, and secondary consideration given to FPS.3) Why actuate in all devices at the same time: As actions taken by our agent are intrinsically intertwined with the rewards policy, this study delves into several approaches, including the independent execution of actions on individual switches (each switch having its specific action) or the simultaneous execution of identical actions across all switches within the network. Initially, we contemplated that employing independent actions for each switch could be an appealing strategy. However, this approach did not align seamlessly with the scope of our problem.Firstly, modifying the target delay for a single switch might not suffice to effectively assist the TCP congestion control algorithm, potentially yielding inconspicuous improvements in application-level QoS. Secondly, the adoption of such an approach would entail a proliferation of actions, scaling exponentially with the number of switches in the network (i.e., 2^n actions, with n representing the count of switches). This increase in action space complexity could substantially augment the neural network architecture's intricacy. 4) When we need to think in Transfer Learning (TL): Given the vast diversity of services, applications, topological configurations, and network loads encountered, it is imperative to acknowledge that an agent trained within a specific network environment cannot be expected to replicate its performance in other heterogeneous settings. 
In response to this challenge, TL has emerged as a promising approach, aiming to address several intricacies not typically encountered in the realm of RL. However, the application of TL within an RL framework is a non-trivial undertaking, necessitating numerous adaptations to enable the agent to effectively leverage knowledge acquired in a source domain for application in a target domain.Amidst the inherent complexities of this context, numerous questions naturally arise, including but not limited to: a) What types of knowledge are amenable to successful transfer? b) Which RL structures are best suited for integration into a TL framework? c) What truly distinguishes a source domain from a target domain? These inquiries, among many others, prompt a comprehensive in exploration. While extant literature, such as previous work by <cit.>, has endeavored to shed light on these considerations, we posit that a dedicated examination of these issues within the specific context of Transfer Learning in RL, particularly within computer network problem domains, is required.§ CONCLUSIONS AND FUTURE DIRECTIONS In summary, this study introduces DESiRED (Dynamic, Enhanced, and Smart iRED) as an innovative solution to tackle the long-standing issue of fixed target delay in AQM systems. By harnessing advanced network telemetry within programmable data planes and leveraging the capabilities of deep reinforcement learning, DESiRED emerges as a formidable tool to augment TCP congestion control mechanisms. In this novel framework, DESiRED utilizes high-resolution router buffer measurements, collected at line rate within the data plane, as inputs to deep reinforcement learning models residing on the control plane. Empowered by these synergistic components, the agent undertakes dynamic adjustments to the AQM's target delay in real-time, with the overarching goal of optimizing QoS for networked applications.The comprehensive evaluation conducted within a realistic testbed, featuring the contemporary adaptive bitrate schemes for HTTP-based streaming (MPEG-DASH), reaffirms the viability of DESiRED. Throughout a diverse range of scenarios, encompassing various real-world traffic loads, our results consistently indicate the efficacy of dynamic target delay adjustments in enhancing the QoS of DASH video services for end users.Considering the inherent dynamism of computer network environments, the prospect of transitioning toward TL has surfaced as a compelling avenue for future exploration. Nevertheless, the intricate challenges associated with this paradigm necessitate dedicated research endeavors to delve into these complexities in greater depth. As such, we recommend that this critical topic be addressed in forthcoming investigations. ieeetr | http://arxiv.org/abs/2310.18159v1 | {
"authors": [
"Leandro C. de Almeida",
"Washington Rodrigo Dias da Silva",
"Thiago C. Tavares",
"Rafael Pasquini",
"Chrysa Papagianni",
"Fábio L. Verdi"
],
"categories": [
"cs.NI"
],
"primary_category": "cs.NI",
"published": "20231027140657",
"title": "DESiRED -- Dynamic, Enhanced, and Smart iRED: A P4-AQM with Deep Reinforcement Learning and In-band Network Telemetry"
} |
Positional Encoding-based Resident Identification in Multi-resident Smart Homes
Athman Bouguettaya

§ INTRODUCTION Over the last decade, the study of the clustering of galaxies through Large Scale Structure (LSS) surveys has emerged as a crucial probe within precision Cosmology. Spectroscopic surveys, such as the Sloan Digital Sky Survey[https://www.sdss.org/www.sdss.org] (SDSS) and the Dark Energy Spectroscopic Instrument[https://www.desi.lbl.gov/www.desi.lbl.gov] (DESI), provide three-dimensional maps of the Universe, where angular positions and redshifts of millions of galaxies are measured with high accuracy. These maps constitute appropriate data sets for quantifying the clustering characteristics of galaxies, including correlation functions and other summary statistics, which can then be compared to models' theoretical predictions. The study of clustering statistics has primarily relied on two significant sources of cosmological information. First, Baryon Acoustic Oscillations (BAO) in the early Universe freeze out at the drag epoch, and their signature can be observed at later times in the correlation function as a well-distinguished peak around a scale of 150 Mpc. Second, peculiar velocities of galaxies contribute to the redshift that we measure, adding to the Hubble flow component. Since this contribution occurs only along the line-of-sight direction, we observe an apparent anisotropic distortion in the matter distribution, and hence galaxy statistics which would otherwise be isotropic become dependent on the angle of observation. This effect is known as Redshift Space Distortions (RSD). The RSD and BAO effects contain most of the relevant information in the correlation function of galaxies. Consequently, the SDSS-III BOSS <cit.> and SDSS-IV eBOSS <cit.> collaborations have chosen compressed methodologies for their standard analysis. In such approaches, the cosmological parameters of the matter power spectrum are fixed to fiducial values, and a set of parameters characterizing the BAO and RSD effects is explored. Given that RSD depends on the average velocity of galaxies, it is sensitive to the growth rate of structure, so one of the chosen parameters is fσ_8. Two additional degrees of freedom should be included to account for the distortions in the position of the BAO along and across the line-of-sight, which arise from potential mismatches between the fiducial and true cosmologies, that is, the Alcock-Paczyński effect <cit.>. On the other hand, the Effective Field Theory of LSS (hereafter simply EFT) <cit.>, built on top of Perturbation Theory <cit.>, has been developed during the past years and is by now routinely used in analyses that confront theoretical models of the galaxy distribution directly with the data gathered by our telescopes. These methods are commonly known as full-modeling or full-shape analyses, and operate in a similar fashion to what has been done for the CMB over the years. Nowadays, these full-shape templates are used routinely to constrain cosmological parameters <cit.>, including higher-order statistics <cit.> and even beyond-ΛCDM models <cit.>. One of the primary advantages of the compressed methodology is its agnostic nature: the parameters it explores are relatively model-independent when compared with those obtained from a direct, full-shape analysis.
However, generating full-shape theoretical templates of the power spectrum comes with significant computational costs, which, until recently, hindered our ability to perform cosmological parameter estimation. This is one of the reasons why the compressed methodology has been favored by part of the community. But even if the full-shape analysis is expensive and model-dependent, it is capable of extracting more cosmological information from the power spectrum. Therefore, both the compressed and full-shape methodologies have their own merits, and there should be an incentive to pursue both approaches in parallel. As we transition into a new era of cosmological surveys, the costs of full-shape models are set to increase even further. This is due to the unprecedented precision achieved in measurements of the correlation function on small scales (smaller than 50 h^-1 Mpc), where the nonlinearities of perturbations exert a non-negligible influence on halo distributions. Furthermore, at these small scales, the relationship between the clustering of observable astrophysical tracers and the underlying dark matter halos becomes complex. As a consequence, building accurate templates at these small scales might require the evaluation of even more complicated models, thereby introducing more intricate calculations and increasing the computational cost of an individual template. Finally, the analytical modeling of higher-order statistics will pose new computational-time challenges. In recent years, a number of avenues have been developed to tackle these difficulties, going beyond the standard three-parameter compressed methodology and into more intricate models at these smaller scales. One possible road is to expand the compressed approach by introducing a small subset of new free parameters that encompass most of the remaining relevant information in the power spectrum <cit.>. Another possibility is to perform full-shape analyses encompassing various cosmological parameters. To achieve this, several optimizations in the computational methods used to construct theoretical templates have been developed, significantly reducing the computational cost of full-shape analyses. As a result, many groups have reanalyzed the BOSS and eBOSS data using full-shape modeling. The optimized methodologies employed for these analyses can be categorized into two groups: efficient theoretical templates of the power spectrum <cit.> or emulator techniques that learn how to reproduce expensive models <cit.>. The consensus from all of these reanalysis methods is that the constraints on cosmological parameters like the Hubble constant H_0 are significantly tighter when using these improved approaches <cit.>. In configuration space, the number of analyses in the literature is smaller, mainly because the consensus is that direct fits are more constraining in Fourier space. Nevertheless, working directly in configuration space has its benefits, particularly when dealing with the well-localized BAO peak, which in Fourier space becomes distributed across a wide range of wave numbers. Adding to that, some of the observational systematics have different effects in Fourier and configuration space. Consequently, there is an incentive to study both spaces simultaneously. In ref. <cit.> the main analysis is performed in Fourier space, but the authors have also worked out the correlation function as a consistency check. On the other hand, the work of <cit.> is devoted to fitting only the correlation function, using the PyBird code.
The main difference with our approach is that we work from the beginning in a Lagrangian framework and obtain the correlation function directly, without the need to first compute the power spectrum (plus infrared resummations) as an intermediate step and Fourier transform the result at the end. The number of free parameters to explore in a full-shape analysis is generally large when compared to the standard approach. With this in mind, considerable efforts have been made in building efficient sampling methods <cit.> which reduce the number of model evaluations required to run Markov Chain Monte Carlo (MCMC) explorations. However, the models will most likely continue to increase in complexity in the next years, since higher-loop contributions or higher-order statistics can be considered in the analyses. Therefore, there is an incentive to reduce the computational cost of generating these theoretical templates. Machine learning algorithms like neural networks have been successfully used to drastically reduce the evaluation times of complex models <cit.>. These techniques use datasets of pre-computed templates at different points within the parameter space and learn how to reproduce them. When trained correctly, neural networks can reproduce fairly complex models with an error smaller than the precision needed by LSS surveys. Also, given that neural networks are not local interpolators, in principle the errors in their predictions are not as strongly dependent on the distance to the nearest point within the training set as they would be for methodologies like Gaussian process emulators. In this work, we model the redshift space correlation function up to one-loop perturbation theory using a Gaussian Streaming model <cit.> in combination with Effective Field Theory (EFT). Throughout this work, we refer to our modeling as EFT-GSM. To implement our model, we release a code[https://github.com/alejandroaviles/gsmhttps://github.com/alejandroaviles/gsm] that uses a brute force approach, but still can compute the correlation function in 𝒪(1sec) time, as described in section <ref>. However, the number of evaluations required to build a convergent MCMC chain for our baseline analysis is large, and this process can take a considerable amount of time. Therefore, we also built a neural network emulator to accelerate the computation of individual templates, reducing the running time of an MCMC chain from a few tens of hours to around 60 minutes (using the same computing settings), and below 20 minutes when we run the code in parallel, depending on the cluster settings. We utilize our methodology to reanalyze the BOSS DR12 LRG data, obtaining the tightest constraints on the ΛCDM parameters using the 2-point correlation function alone. Throughout this study, we pursue two primary objectives. The first is to bring forward the potential of our full-shape modelling approach in configuration space for extracting cosmological information when compared to its counterpart in Fourier space.
The second objective is to demonstrate that neural network surrogate models can be used safely to optimize cosmological analyses, leading to significant savings in both time and computational resources without sacrificing accuracy when analyzing real data. We have also tested regions of parameter space extended with respect to the baseline analysis, to include scenarios with prior configurations beyond Planck <cit.> and Big Bang Nucleosynthesis (BBN) <cit.> on the parameters Ω_b and n_s, which will serve us to explore the potential of LSS observables on their own. This serves as a test of the full methodology in considerably more degenerate scenarios than the baseline, with the aim of proving the viability of the neural network in this larger and more complex parameter space. This paper is organized as follows. We begin in section <ref> by introducing the data from the BOSS collaboration that we analyze here, as well as the set of different mock simulations that we use for testing our methodology and building the covariance matrices required for our likelihood estimations. Then in section <ref> we introduce the EFT-GSM model that we utilize to construct our theoretical templates. Then, in section <ref> we introduce our neural network methodology, which is used as a surrogate model instead of the EFT-GSM model; we also quantify how much efficiency is gained with this. In section <ref> we describe the fitting methodology, including a brief description of full-shape fits. Here we also emphasise our parameter space and the priors that we impose on each parameter. Section <ref> presents the validation of the methodology with high precision mocks, and section <ref> presents the results of our baseline analysis and how it compares with published alternative analyses of BOSS. We also include a subsection with results expanding the priors for cosmological parameters usually constrained independently by other observables, and a subsection exploring the information content in the multipoles. § DATA AND SIMULATIONS §.§ Data We analyse the publicly available data from the Baryon Oscillation Spectroscopic Survey (BOSS) <cit.>, which was a part of the Sloan Digital Sky Survey III <cit.>. Specifically, we utilize the Data Release 12 galaxy catalogues <cit.>, gathered using the 2.5-meter telescope situated at the Apache Point Observatory in New Mexico, USA <cit.>; all the spectra were measured using a set of multi-object spectrographs <cit.>. The details about the data reduction methodology can be found at <cit.>. The BOSS target selection was designed to collect data for two different samples: the low-redshift sample (LOWZ), which targeted luminous red galaxies at redshifts z<0.4, and the Constant Stellar Mass sample (CMASS), which targeted massive galaxies in the 0.4<z<0.7 redshift range. As explained in <cit.>, the LOWZ and CMASS samples were later combined into three partially overlapping bins; this was done to optimise obtaining the strongest constraints on the dark energy parameters. Throughout this work we will refer to these bins as z_1, z_2 and z_3, respectively. The catalogue construction is described in <cit.>, where the masks, completeness, and weights of the sample are also discussed. The main properties of these samples are summarized in Table <ref>.
Our final analysis is performed using the low and high redshift bins (z_1 and z_3, respectively), which do not overlap in redshift and have a similar effective volume, V_eff.[The effective volume is defined by V_eff=∑_i ( n̅(z_i) P_0/(1+n̅(z_i)P_0) )^2 Δ V(z_i), where Δ V(z_i) is the volume of the shell at z_i, with P_0=10,000 h^-3Mpc^3. The value of P_0 is chosen as the amplitude of the power spectrum where the BAO signal is largest <cit.>.] §.§ Simulations In this work, we employ two distinct sets of simulations, which we require for constructing the necessary covariance matrices for our likelihood estimations (see section <ref>) and for validating our methodology using high-precision mocks. We now present a brief overview of these simulations. * The NSERIES <cit.> mocks are a suite of high-resolution N-body simulations that were used in both the BOSS DR12 and eBOSS DR16 analyses. Their main purpose was to test the various fitting methodologies used for theoretical systematics. NSERIES consists of 84 mock catalogues. These mocks are generated from seven independent simulations conducted in a volume of (2.6h^-1 Gpc)^3 and created using the N-body code <cit.>. Furthermore, each simulation is projected into seven different orientations and cuts, resulting in a total of 84 distinct mock datasets. These mocks are populated with galaxies using an HOD scheme designed so that the galaxy catalogue matches the CMASS sample. The cosmological parameters adopted for N-Series are: Ω_m=0.286, h=0.7, Ω_b=0.047, σ_8=0.820, A_s × 10^9=2.146 and n_s=0.96. Here, we use the cutsky NSERIES mocks, whose footprint and number density correspond to those of the CMASS north galactic cap at a redshift of z=0.55. * The MultiDark Patchy BOSS DR12 mocks (hereafter MD-Patchy mocks) <cit.> are a suite of 1000 simulations used to estimate the covariance matrix required for analyzing BOSS data. MD-Patchy mocks are based on second-order Lagrangian perturbation theory and use a stochastic halo biasing scheme calibrated on high-resolution N-body simulations. Each mock is built from a box of (2.5h^-1 Gpc)^3 and is populated with halos following an HOD scheme calibrated to match the BOSS samples. The MD-Patchy cosmology is: Ω_m=0.307115, Ω_Λ=0.692885, Ω_b=0.048, σ_8=0.8288 and h=0.6777. MD-Patchy mocks were designed to match the number density and footprint of both the CMASS and LOWZ samples from Data Release 12 and were also split into the 3 redshift bins defined above. § MODELLING THE REDSHIFT SPACE CORRELATION FUNCTION In this work we adopt a Lagrangian approach, in which we follow the trajectories of cold dark matter particles with initial position q through the map x(q,t) = q + Ψ(q,t), where Ψ is the Lagrangian displacement field. The observed positions of objects are distorted by the Doppler effect induced by their peculiar velocities relative to the Hubble flow, v⃗ = a ẋ = a Ψ̇. That is, for a tracer located at a comoving real space position x, its apparent redshift-space position becomes s⃗ = x + u⃗, with the along the line-of-sight “velocity” u⃗ ≡ (v⃗·n̂) n̂/(a H) = (Ψ̇·n̂) n̂/H, where we are using the distant observer approximation, in which the observed angular directions of individual galaxies n̂_i are replaced by a single line-of-sight direction n̂, which is representative of the sample of observed objects. The map between Lagrangian coordinates and redshift-space Eulerian positions becomes s⃗ = q + Ψ + (Ψ̇·n̂) n̂/H. The correlation function of tracer counts in redshift space is given by the standard definition 1+ ξ_s(s⃗) = ⟨(1+δ_s(0)) (1+δ_s(s⃗)) ⟩, where the tracer fluctuation in redshift space is δ_s(s⃗).
Now, the conservation of number of tracers X, that reads [1+δ_s(s⃗)]d^3s = [1+δ_X()]d^3x, yields the redshift-space correlation function <cit.>1 + ξ_s(s⃗) = ∫k d^3re^i·(s⃗ - )[ 1+ℳ(,) ],where = _2 - _1 and = _2 - _1. The density weighted pairwise velocity generating function is1+ℳ(J⃗,) =⟨(1+δ_X(_1) )(1+δ_X(_2))e^-i J⃗·Δu⃗⟩,where Δu⃗ = u⃗(_2)-u⃗(_1). The generating function is now expanded in cumulants 𝒞. That is, 𝒵(J⃗,)≡log[ 1+ℳ(J⃗,) ] = ∑_n=0^∞(-i)^n/n!J_i_1⋯ J_i_n𝒞^(n)_i_1⋯ i_n()where in the second equality we use the Taylor series of 𝒵 about J⃗=0. The cumulants are then obtained by𝒞^(n)_i_1⋯ i_n() = i^n∂^n 𝒵(J⃗, )/∂ J_i_1⋯∂ J_i_n|_J⃗=0.Then, 1 + ξ_s() = ∫k d^3xe^i·( - )exp[ ∑_n=0^∞(-i)^n/n! k_i_1⋯ k_i_n𝒞^(n)_i_1⋯ i_n() ], On the other hand, the generating function can be alternatively expanded in moments Ξ^(n) as followsΞ^(n)_i_1 ⋯ i_n() = i^n ∂^n/∂ J_i_1⋯ J_i_n(1+ℳ(J⃗,) )|_J⃗=0= ⟨(1+δ_X(_1)) (1+δ_X(_2)) Δ u_i_1⋯Δ u_i_n⟩, which lead us to relations between cumulants and moments𝒞^(0)()=log[1 + ξ(r)], 𝒞^(1)_i()= Ξ^(1)_i()/1+ξ(r)≡ v^n̂_12,i,𝒞^(2)_ij()=Ξ^(2)_ij ()/1+ξ(r) - 𝒞^(1)_i() 𝒞^(1)_j() ≡σ̂^2n̂_12,ij - v^n̂_12,i v^n̂_12,j = σ^2n̂_12,ij,where we introduced the pairwise velocity along the line of sight v^n̂_12,i and the pairwise velocity dispersion along the line of sight moment and cumulant, σ̂^2n̂_12,ij and σ^2n̂_12,ij, respectively. These relations will serve us below, since moments are more directly computed from the theory than cumulants. Using eq. (<ref>), the correlation function in redshift space is1 + ξ_s() = ∫ d^3 r [1 + ξ(r)] ∫kexp[i· ( -- v⃗_12^) -1/2^T σ^2_12+ ⋯]. If we stop at the second order cumulant σ^2_12, the k-integral can be formally performed analytically, giving1 + ξ_s( s) = ∫d^3 r/(2π)^3/2 |σ^2_12|^1/2[1+ξ(r)] exp[ -1/2 ( s - -v⃗_12^ )^𝐓 [σ^2_12]^-1( s - -v⃗_12^ )], which is the Gaussian Streaming Model correlation function. Now, depending on how one computes the ingredients ξ(r), v_12,i() and σ_12,ij^2(), different methods can be adopted from here. For example: 1) Reference <cit.> computed ξ(r) within the Zeldovich approximation, but the pairwise velocity (v_12) and pairwise velocity dispersion (σ_12^2) using Eulerian linear theory. 2) In <cit.> Convolution Lagrangian Perturbation Theory (CLPT) is used for the three ingredients, but instead of computing σ_12^2, the authors computed σ̂_12^2.This latter reference also released a widely used code by the community.[Available at https://github.com/wll745881210/CLPT_GSRSDgithub.com/wll745881210/CLPT_GSRSD.] Here, we will use the method of <cit.>, where all moments are computed using CLPT. Further, in our modeling we consider a Lagrangian biasing function F that relates the galaxies and matter fluctuations through<cit.>1+δ_X() = F(δ, ∇^2 δ) = ∫d^2Λ/(2π)^2F̃(Λ) e^i 𝐃·Λ.In the second equality F̃(Λ) is the Fourier transform of F( D), with arguments 𝐃=(δ, ∇^2 δ) and spectral parameters Λ=(λ,η), dual to 𝐃.A key assumption that we follow here, is the number conservation of tracers,[Notice the number conservation assumption of tracers is not even true for halos. However, the biasing expansionobtained in this way is automatically renormalized and coincides with other more popular methods that introduce the biasing through the symmetries of the theory; see <cit.> for a review.] from which one obtains1+δ_X() = ∫k∫ d^3q e^i · ( -)∫F̃(Λ) e^i 𝐃·Λ - i ·Ψ,and evolves initially biased tracer densities using the map between Lagrangian and Eulerian coordinates given by eq. (<ref>). 
Renormalized bias parameters are obtained as <cit.>b_nm = ∫dΛ/(2π)^2F̃(Λ) e^-1/2Λ^TΣΛ (i λ)^n (i η)^m,with covariance matrix components Σ_11 = ⟨δ_cb^2 ⟩, Σ_12 =Σ_21 = ⟨δ∇^2 δ⟩ and Σ_22 = ⟨ (∇^2δ)^2 ⟩. We notice b_n =b_n0 are local Lagrangian bias parameters, and b_∇^2δ = b_01 is the curvature bias.In this work we consider only b_1 and b_2. However, tidal Lagrangian bias, b_s^2, can be easily introduced following <cit.>.[Indeed, our codeconsider tidal bias, but we use the formulae presented in <cit.>, which differ slightly from that in <cit.>.]To obtain the cumulants we need to compute the moments.The procedure is exactly the same as with the correlation function (the zero order moment), but now we have to keep track of the velocity fields.That is, using1+δ_X(_1) = ∫d^3q_1 k_1 dλ_1/2π F̃(λ_1) e^iλδ_1 + i _1 ·(_1-q_1-Ψ_1),we obtainΞ^(n)_i_1 ⋯ i_n()= n̂_i_i⋯n̂_i_n∫ d^3q ∫k e^i ·(-)∫dλ_1/2πdλ_2/2πF̃(λ_1)F̃(λ_2) ×n̂_j_i⋯n̂_j_n⟨Δ̇_j_1/H⋯Δ̇_j_n/H e^i [ λ_1 δ_1+ λ_2 δ_2 + ·Δ ]⟩,where we defined Δ_i ≡Ψ_i(_2) - Ψ_i(_1) and used Δ u_i =H^-1 (n̂_j Δ̇_j ) n̂_i.The real space correlation function ξ_X(r), which corresponds to the zeroth-order moment for tracer X, is obtained within CLPT, <cit.>,1+ξ_X(r) = ∫d^3 q/(2 π)^3/2 |𝐀_L|^1/2 e^- 1/2( r-)^𝐓𝐀_L^-1( r-) { 1 - 1/2 A_ij^loopG_ij +1/6Γ_ijkW_ijk + b_1 (-2 U_i g_i - A^10_ijG_ij) + b_1^2 (ξ_L - U_iU_jG_ij- U_i^11g_i) + b_2(1/2ξ_L^2 -U_i^20g_i - U_iU_jG_ij)- 2 b_1 b_2 ξ_L U_i g_i+ 2(1+b_1) b_∇^2 δ∇^2 ξ_L + b^2_∇^2 δ∇^4 ξ_L },where the matrixA_ij() = ⟨Δ_i^(1)Δ_j^(1)⟩_c, with Δ_i = Ψ_i(_2) - Ψ_i(_1), is the correlation of the difference of linear displacement fields for initial positionsseparated by a distance =_2-_1, which is further split in linear and loop pieces:A_ij() = 2 ∫p( 1 - e^i·)p_i p_j/p^4 P_L(p) =A^L_ij() +A^loop_ij().where P_L is the linear matter power spectrum. We further use the linear (standard perturbation theory) correlation function ξ_L(q) = ∫p e^i · P_L(p), and the functions W_ijk =⟨Δ_iΔ_i Δ_k⟩_c,A_ij^mn = ⟨δ^m()δ^n(0)Δ_i Δ_i ⟩_c,U^mn_i = ⟨δ^m()δ^n(0)Δ_i ⟩_c.The involved r and q dependent tensors are g_i = (𝐀_L^-1)_ij(r_j - q_j), G_ij = (𝐀_L^-1)_ij - g_ig_j, and Γ_ijk = (𝐀_L^-1)_{ij g_k} - g_ig_jg_k. The first and second moments of the generating function yield the pairwise velocityv_12,i( r) = f/1+ξ_X(r)∫d^3 qe^- 1/2( r-)^𝐓𝐀_L^-1( r-) /(2 π)^3/2 |𝐀_L|^1/2{ -g_r Ȧ_ri - 1/2 G_rsẆ_rsi + b_1 ( 2 U̇_i - 2 g_r Ȧ^10_ri - 2 G_rs U_r Ȧ_si) + b_1^2 ( U̇^11_i- 2 g_r U_r U̇_i- g_r Ȧ_riξ_L) + b_2 ( U̇^20_i - 2 g_r U_r U̇_i) + 2 b_1 b_2 ξ_L U̇_i+ 2 b_∇^2δ∇_i ξ_L}, and thepairwise velocity dispersionσ^2_12,ij( r)= f^2/1+ξ_X(r)∫d^3 q e^- 1/2( r-)^𝐓𝐀_L^-1( r-) /(2 π)^3/2 |𝐀_L|^1/2{Ä_ij - g_r Ẅ_rij - G_rsȦ_riȦ_sj + 2 b_1 ( Ä^10_ij -g_r Ȧ_r{iU̇_j} - g_r U_r Ä_ij)+ b_1^2 ( ξ_L Ä_ij + 2 U̇_i U̇_j )+ 2 b_2 U̇_i U̇_j },with q-coordinate dependent correlators Ȧ_ij^mn() = 1/f H⟨δ^m_1 δ^n_2 Δ_i Δ̇_j ⟩, Ä_ij^mn() = 1/f^2 H^2⟨δ^m_1 δ^n_2 Δ̇_i Δ̇_j ⟩, Ẇ_ijk = 1/f H⟨Δ_iΔ_j Δ̇_k ⟩,Ẅ_ijk = 1/f^2 H^2⟨Δ_iΔ̇_j Δ̇_k ⟩,U̇^mn() = 1/f H⟨δ^m_1δ^n_2 Δ̇_i ⟩.As in the case of the undotted A_ij function, we have omitted to write the superscripts m,n when these are zero; e.g, Ȧ_ij≡Ȧ^00_ij.Now, consider the terms inside the curly brackets in eq. (<ref>). Taking its large scales limit (→∞), we obtain{⋯}|_q→∞ = δ_mn∫_0^∞dk/3π^2[P_L(k) + loop terms],which is a non-zero zero-lag correlator. 
However, since perturbation theory cannot model accurately null separations, one needs to add an EFT counterterm that has the same structure.This new contribution shifts the pairwise velocity dispersion as <cit.>σ̂^2_12,mn→σ̂^2_12,mn + σ^2_EFTδ_mn1+ξ^ZA(r)/1+ξ^CLPT_X(r).As the separation distance r increases, the ratio (1+ξ^ZA(r))/(1+ξ_X(r)) approaches unity, then the EFT counterterm adds as a constant shift to the pairwise velocity dispersion at large scales. That is, we can identify it with the phenomenological parameter σ^2_FoG widely used in early literature to model Fingers of God (FoG). Comparisons for the modeling of the second moment when using the EFT parameter σ^2_EFT and the constant shift σ^2_FoG can be found in <cit.>(see for example Fig. 2 of that reference where a particular example exhibits a clear improvement of the EFT over the phenomenological constant shift).Finally, we notice our counterterm is related to that in <cit.> by σ^2_EFT = α_σ f^2. There are others EFT countertermsentering the CLPT correlation function and the pairwise velocity and velocity dispersion, but they are either degenerated with curvature bias (as is the case of c_1^EFT) or subdominant with respect to the contribution of eq. (<ref>) (see the discussion in <cit.>). So, in this work we keep only σ^2_EFT. Since this EFT parameter modifies the second cumulant of the pairwise velocity generation function,its effect on the redshift space monopole correlation function is small, while the quadrupole is quite sensitive to it, particularly at intermediate scales r<40 h^-1Mpc.Now, let us comeback to eq. (<ref>), that we formally integrated to obtain eq. (<ref>). However, notice the matrix σ^2_12 is not invertible since σ^2_12=σ^2_12 ⊗ and hence det(σ^2_12)=0. Hence in the following we will approach this integration differently that will also serves us to rewrite the resulting equation in a more common form and also more directly related with the computational algorithms in a code. We decompose the vectors ,andin components parallel and perpendicular to the line of sight : = k_∥ + _⊥,= r_∥+ _⊥,= s_∥ +_⊥,with _⊥· = 0, and so on.We will use the following definitions μ = · = r_∥/r,v_12(r)=v_12,ir̂_i,σ^2_12(r,μ)= μ^2σ^2_12,∥(r) + (1-μ^2) σ^2_12,⊥(r) = μ^2 (σ̂^2_12,∥ - v_12v_12)+ (1-μ^2) σ̂^2_12,⊥(r), with σ̂^2_12,∥(r) = r̂_ir̂_j σ̂^2_12,ij and σ̂^2_12,⊥(r) = 1/2(δ_ij - r̂_ir̂_j)σ̂^2_12,ij, and v⃗^_12 = μ v_12(r) ,andσ^2_12 =σ^2_12(r,μ) ⊗,Then, we can split theintegral eq. (<ref>) in parallel and perpendicular to the line-of-sight integrations,∫k e^i· ( -- v⃗^_12) -1/2^T σ^2_12 = ∫dk_∥/2πe^ik_∥ (s_∥ - r_∥ - μ v_12) -1/2 k_∥^2 σ^2_12∫d^2k_⊥/(2π)^2 e^i _⊥· (_⊥ - _⊥)=e^-1/2 σ^2_12(s_∥ - r_∥ - μ v_12)^2/[2πσ^2_12]^1/2(_⊥ - _⊥),obtaining a Dirac delta function from the integral of the perpendicular component _⊥ and a Gaussian kernel from the parallel k_∥ one.Hence, the correlation function within the GSM becomes 1 + ξ_s(s_∥,s_⊥) = ∫_-∞^∞dr_∥/[2πσ^2_12(r,μ)]^1/2[1+ξ(r)] exp[-(s_∥ - r_∥ - μ v_12(r) )^2 /2 σ^2_12(r,μ)],with r^2 = r_∥^2 + s_⊥^2. This is a wide popular expression, but remind that hereσ^2_12 is the second cumulant of the density weighted velocity generating function, instead of its second moment. Also, it suffers correction from EFT counterterms. 
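For readers who wish to evaluate this expression numerically, the following minimal Python sketch performs the line-of-sight integration by brute force. The functions xi_r, v12 and sigma2_12 stand for the ingredients ξ(r), v_12(r) and σ^2_12(r,μ) and are assumed to be provided externally (for instance, as interpolators of the CLPT expressions above); the integration range and grid size are illustrative choices.

import numpy as np

def xi_s(s_par, s_perp, xi_r, v12, sigma2_12, r_par_max=200.0, n_points=2000):
    # Integrate over the real-space line-of-sight separation r_par at fixed s_perp.
    r_par = np.linspace(-r_par_max, r_par_max, n_points)
    r = np.maximum(np.sqrt(r_par**2 + s_perp**2), 1.0e-8)
    mu = r_par / r
    mean = r_par + mu * v12(r)      # center of the Gaussian kernel
    var = sigma2_12(r, mu)          # line-of-sight dispersion (second cumulant)
    kernel = np.exp(-0.5 * (s_par - mean)**2 / var) / np.sqrt(2.0 * np.pi * var)
    integrand = (1.0 + xi_r(r)) * kernel
    return np.trapz(integrand, r_par) - 1.0

The multipoles ξ_ℓ(s) then follow from integrating ξ_s over the angle between the pair separation and the line of sight, weighted by the corresponding Legendre polynomials.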
The streaming models <cit.> describe how the fractional excess of pairs in redshift space 1 + ξ_s(s⃗) is modified with respect to their real-space counterpart 1 + ξ(r): 1+ξ_s(s_∥,s_⊥) = ∫_-∞^∞ dr_∥ [ 1 + ξ(r) ] 𝒫(s_∥-r_∥|r⃗). Here r^2 = r^2_∥ + r^2_⊥ and s_⊥ = r_⊥. The above expression is exact; see eqs. (1)-(12) of <cit.>. This means that a knowledge of the form of the pairwise velocity distribution function 𝒫(v_∥|r⃗) = 𝒫(s_∥-r_∥|r⃗) at any separation r⃗ yields a full mapping of real- to redshift-space correlations. In the GSM approximation, the distribution function becomes Gaussian, centered at μ v_12 and with width equal to σ_12. The main drawback of this approach is that 𝒫(v_∥|r⃗) is, of course, not a Gaussian <cit.>. In <cit.>, the authors extract the moments of the pairwise velocity distribution directly from simulations and use them to obtain the correlation function multipoles using the GSM and the Edgeworth streaming model of <cit.>, finding good agreement with the redshift space correlation function extracted from the same simulations, but only above scales of around s=20 h^-1Mpc. Our findings also indicate that our modeling and pipeline fit the simulations well for separations above this same scale. §.§ The gsm code Together with this work, we release the C-language code [Available at https://github.com/alejandroaviles/gsmgithub.com/alejandroaviles/gsm], which computes the multipoles of the one-loop GSM two-point correlation function in about half a second. The code receives as input the linear power spectrum, as obtained from CAMB, as well as the set of nuisance parameters: these include the biases b_1, b_2, b_s^2 and b_∇^2δ, the EFT parameters σ_EFT^2 (a.k.a. σ_FoG^2), c_1^EFT and c_2^EFT, and the cosmological parameter Ω_m, which is necessary to calculate the growth rate f at the output redshift. Notice that the CLPT integrals involve a q-integration with a Gaussian kernel centered at q = r. This can be challenging for large r because a naive calculation with the origin centered at q=0 will require a very fine grid for the angular integration, which should get finer as r gets larger. Hence we adapt the integrals so that they are always centered at q = r. This change of variable allows us to perform the angular integration with high accuracy using a Gauss-Legendre method with only 16 weights. Finally, when exploring the parameter space in an MCMC algorithm, the cumulant σ_12^2 can become negative. To avoid this unphysical behavior we do the following.[In <cit.> it is warned that this can happen, and the advice given there is to keep only the linear part of σ_12^2 in the exponential and expand the rest. This approach is well physically motivated, since only the loop terms are expanded, which is also in the spirit of CLPT. However, we indeed tried this method and did not find very satisfactory results. A second approach we followed, yielding even worse results, is to simply impose a sharp minimum cut on σ^2_12 at a very small, but still positive, number.] We split the cumulant of eq. (<ref>) as σ^2_12=σ^2_12,L+σ^2_12,loop, that is, in linear and loop pieces, the latter containing the EFT counterterm and velocity moments.
When σ^2_12,loop < -c_tol σ^2_12,L, with c_tol close to but below unity, we transform the variable σ^2_12 = σ^2_12,L + σ^2_12,loop ⟶ σ^2_12 = σ^2_12,L + f(σ^2_12,L, σ^2_12,loop), with f(σ^2_12,L, σ^2_12,loop) = A σ^2_12,loop/(σ^2_12,loop + B) - A - σ^2_12,L, where the constants A and B are given by A = (-1 + c_tol)(B - c_tol σ^2_12,L) σ^2_12,L / B and B = (-1 + 2 c_tol) σ^2_12,L. With this transformation the range (-∞, -c_tol σ^2_12,L) is shortened to (-σ^2_12,L, -c_tol σ^2_12,L), while the one-loop cumulant σ^2_12 stays smooth and is strictly positive. After preliminary tests, we chose the value c_tol = 0.999. § ACCELERATING MODELING WITH NEURAL NETWORKS Our full-shape analysis requires an exploration of a relatively large parameter space. Each model evaluation takes approximately 1.5 seconds on our computer. Given the large number of evaluations required to explore the parameter space (of the order of 10^5), and the large number of MCMC chains we are interested in running, there is an incentive to optimize the evaluation process of our model. There are various methodologies available for accelerating the estimation of these statistics. The choice between them depends on several factors, one of them being the number of models that can be constructed to use as a training set. In our case, the Gaussian streaming model presented in Section <ref> is relatively cost-efficient. To run our model, we first construct a template of the power spectrum using the publicly available Code for Anisotropies in the Microwave Background (CAMB) <cit.>, which completes in approximately one second. We then utilize this template as input for our code, which requires an additional half-second to compute the correlation function multipoles. Considering this, we can efficiently generate training data sets of several tens of thousands of points within a reasonable computational time. Recently, neural networks have proved to be a suitable framework to accelerate the estimation of clustering statistics for training sets of this size <cit.>. Moreover, neural networks are particularly efficient in data generalization, affording an almost constant reliability over the full parameter space (i.e. the model error does not strongly depend on the distance to the nearest point used in the training set). In what follows, we present our emulating methodology. Our approach is derived from the methodology proposed in <cit.>, but we have made modifications to adapt it for configuration space. The following subsection provides a detailed explanation of our methodology and highlights the specific changes we have implemented to transition into configuration space. Once our neural networks are trained, we reduce the evaluation time needed for a single point in parameter space to around 0.015 seconds, which improves the likelihood evaluation time by two orders of magnitude. We note that a distinct neural network emulator is necessary for each multipole of the correlation function at a specific redshift.[One could decide to train a global neural network including the two multipoles. However, this corresponds to expanding the output layer by a factor of three without any gain of information between them.] Throughout this study, we utilize the first two non-zero multipoles of the correlation function. Consequently, each analysis presented in this work entails constructing two neural network emulators. The construction process for each emulator takes approximately 30 minutes when performed on our personal laptops.
It is worth mentioning that it might be feasible to reduce the building time of a single neural network by adjusting certain parameters, as discussed below. Nevertheless, since the number of neural networks we use is small, we find the current building time to be manageable.Other methodologies that operate in Fourier space <cit.>, aim to predict values for all wave numbers of interest, which typically comprise hundreds of points. If a brute force approach were employed, where one asks the neural network to directly predict the power spectrum, it would need to make hundreds of predictions, which would increase the time required to build the network. Therefore, methodologies utilizing Fourier space often employ techniques such as principal component analysis to address this issue.[In principal component analysis, the input power spectra matrix is divided into eigenvectors, which are dependent on the wave number, and their corresponding eigenvalues, which only rely on the cosmology. This enables an approximation of the power spectra by considering a linear combination of the most significant eigenvalues and discarding the rest, reducing the number of predictions necessary.]Here, we model the correlation function from 20 h^-1 Mpc to 130 h^-1 Mpc in 22 bins with a 5 h^-1 Mpc width between each bin in redshift space distance r, therefore, each emulator needs to predict only 22 numbers. We have found no necessity to incorporate principal component analysis as part of our methodology as Fourier space works utilize a number of principal components similar to the number of bins we employ.To train each neural network, we generate 60,000 models distributed across the parameter space. Out of these, 50,000 models are utilized for training, while 5,000 models constitute the validation set. The validation set is employed during the training process to test the data and determine when to decrease the learning rate of the network, as explained below. The remaining 5,000 models form our test set and are reserved for evaluating the accuracy of the methodology on unseen data so that we perform a fair assessment of the trained models.We use Korobov sequences <cit.> to select the points in the parameter space used for building and testing our neural networks. Korobov sequences are a robust approach for generating extensive and uniform samples in large dimensional spaces. We run three distinct sequences to create the training, test, and validation sets at each distinct redshift that we model. We also make sure that the three sequences are independent and that there are no overlapping points between them. Finally, we employ our EFT-GSM model pressented in section <ref> to calculate the multipoles of the power spectra for all 60,000 data points. These EFT-GSM multipoles are used to train our neural networks, that aim to accurately replicate the GSM predictions for a new point in parameter space.In order to make the training of the neural network more efficient, it is convenient to keep the values of the output layer neurons in a similar range of values. Here, we use a hyperbolic sinus transformation on each of the training set multipoles for this purpose. We run our neural networks by adapting the public code from <cit.>,[Which is available at https://github.com/sfschen/EmulateLSS/tree/mainhttps://github.com/sfschen/EmulateLSS/tree/main] to reflect the changes expressed above. We use the Multi-Layer Perceptron architecture suggested by them, with four hidden layers of 128 neurons each. 
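To fix ideas, a schematic (and hypothetical) PyTorch version of one such emulator is sketched below. The class names, the choice of seven input parameters (matching the varied parameters of our analysis) and the 22 output bins are our own illustrative assumptions, not an excerpt of the released code; the learnable gated activation corresponds to the one introduced in the next paragraphs.

```python
import torch
import torch.nn as nn

class GatedActivation(nn.Module):
    """Learnable activation a(X) = [gamma + (1 - gamma) * sigmoid(beta * X)] * X,
    with per-feature parameters gamma and beta fitted during training."""
    def __init__(self, n_features):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(n_features))
        self.beta = nn.Parameter(torch.ones(n_features))

    def forward(self, x):
        return (self.gamma + (1.0 - self.gamma) * torch.sigmoid(self.beta * x)) * x

class MultipoleEmulator(nn.Module):
    """MLP with four hidden layers of 128 neurons mapping the varied parameters
    to the 22 (arcsinh-transformed) bins of a single correlation function multipole."""
    def __init__(self, n_params=7, n_bins=22, width=128, n_hidden=4):
        super().__init__()
        layers, n_in = [], n_params
        for _ in range(n_hidden):
            layers += [nn.Linear(n_in, width), GatedActivation(width)]
            n_in = width
        layers.append(nn.Linear(n_in, n_bins))
        self.net = nn.Sequential(*layers)

    def forward(self, params):
        return self.net(params)

# Targets are trained in arcsinh space; predictions are mapped back with sinh:
# xi_pred = torch.sinh(emulator(params))
```

Training in arcsinh space and mapping predictions back with a sinh transform mirrors the target transformation described above.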
As we discuss below, the accuracy that we obtain in our predictions is well within the precision we need, and this is achieved in a manageable time. Therefore, we decided that no further optimization of the architecture to fit our particular data was necessary. When training our networks, we reduce the learning rate of the algorithm from 10^-2 to 10^-6 in steps of one order of magnitude and double the training batch size at every step. As suggested by several works <cit.>, we use the following activation function, a(X)=[γ+(1+e^-β⊙ X )^-1(1-γ)]⊙ X, which we found outperforms other more common activation functions like rectified linear units. Here, γ and β are new free parameters of a given hidden layer within the neural network that are fitted during the training process of the network. The algorithm decreases the learning rate when a predetermined number of training epochs has passed without any significant improvement in the accuracy of the model; this number is commonly referred to as the patience of the algorithm.[We track the mean square error (MSE) of the validation set, and the algorithm records the best value found so far. When a number of epochs equal to our patience value has elapsed without a better MSE being found, the algorithm switches the learning rate. Note that a larger patience allows the model more time to exit local minima and to address slow convergence issues.] The patience we use determines the time required to train our neural networks; a longer patience usually leads to more accurate models (provided the model is not overfitted). The results presented in this work correspond to waiting 1000 epochs before reducing the learning rate, which, as stated above, corresponds to approximately 30 minutes of training time. We have also monitored our validation set to ensure that there are no signs of overfitting at this point.[An overfitted model would start to worsen the MSE of the validation set after a given training epoch.] If our goal were to reduce the training time of the algorithm, we could decrease the patience value. However, this would reduce the accuracy of our models; we note that a patience of around 100 epochs reduces the training time to around 5 minutes on our personal laptops while still maintaining sub-percent accuracy in most multipole models. § METHODOLOGY In this section, we define the methodology employed to extract cosmological information from galaxy clustering. First, we describe the clustering measurements we utilize. Next, we provide an overview of our full-shape methodology. We also discuss the likelihood, priors, covariance, and MCMC samplers used throughout our analysis. §.§ Clustering measurements and fiducial cosmology Throughout this work, we focus on the anisotropic 2-point correlation function ξ(μ,s), which we project onto the Legendre polynomial basis L_ℓ(μ) following equation <ref>: ξ_ℓ(s) ≡ (2ℓ+1)/2 ∫_-1^1 L_ℓ(μ) ξ(μ,s) dμ. Here, ℓ is the order of the polynomial and μ is the cosine of the angle between the separation vector and the line-of-sight direction. We use the legacy multipoles from BOSS <cit.> computed with the fiducial cosmology Ω_m=0.31, Ω_Λ=0.69, Ω_k=0, Ω_b h^2=0.022, Ω_ν h^2=0.00064, w_0=-1, w_a=0, h=0.676, n_s=0.97, and σ_8=0.8. In this work, we utilize the first two non-zero multipoles of the correlation function, corresponding to ℓ=0,2.
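As an illustration of this projection, a short numerical sketch of the Legendre decomposition above, using Gauss-Legendre quadrature in μ, is given below; the function is ours and assumes the anisotropic correlation function is available as a callable.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def correlation_multipoles(xi_s_mu, s_grid, ells=(0, 2), n_mu=40):
    """Project xi(mu, s) onto Legendre multipoles,
    xi_ell(s) = (2 ell + 1)/2 * int_{-1}^{1} L_ell(mu) xi(mu, s) dmu,
    using Gauss-Legendre quadrature in mu.

    `xi_s_mu(s, mu)` is a callable returning the anisotropic correlation function.
    """
    nodes, weights = leggauss(n_mu)                    # mu nodes/weights in [-1, 1]
    multipoles = {}
    for ell in ells:
        legendre = eval_legendre(ell, nodes)
        xi_ell = np.zeros(len(s_grid))
        for i, s in enumerate(s_grid):
            vals = np.array([xi_s_mu(s, mu) for mu in nodes])
            xi_ell[i] = 0.5 * (2 * ell + 1) * np.sum(weights * legendre * vals)
        multipoles[ell] = xi_ell
    return multipoles
```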
§.§ Full Shape Methodology The full-shape methodology followed to constrain cosmological parameters consists of varying a theoretical model ξ_ℓ^Model(s) (or the equivalent statistics in Fourier space) at different points in parameter space and comparing the resulting models directly with the measured clustering ξ_ℓ^Data(s), without any compression of the information. The way in which we select the points to be explored in parameter space is described in section <ref> below. Throughout this work, we use the GSM-EFT model to build templates of the multipoles of the galaxy correlation function in redshift space, using eq. (<ref>). The methodology implemented here can be used with any other perturbation theory correlation function code; e.g. velocileptors[https://github.com/sfschen/velocileptors] <cit.>. In order to compute a given correlation function template with GSM-EFT, it is necessary to provide a fixed value of the free parameters of the model. These free parameters can be divided into three distinct subsets. The first subset corresponds to the cosmological parameters required to construct the linear power spectrum from CAMB; these parameters are h, ω_b, ω_cdm, A_s, n_s, N_eff, Ω_ncdm. Our second set of parameters comprises the nuisance parameters used to model the relationship between galaxies and matter, together with the EFT counterterms. These parameters are b_1, b_2, b_∇^2δ and b_s^2, and σ_EFT^2 and c_1,EFT. As explained in section 4, we use surrogate models built with neural networks to optimize the speed at which we can generate theoretical templates. Clustering measurements employ a reference cosmology for transforming redshifts to distance measurements. This reference cosmology introduces Alcock-Paczyński distortions that must be considered when comparing our data and model multipoles. To address this issue, we employ a pair of late-time re-scaling parameters, denoted as q_|| and q_⊥, which introduce the necessary corrections to the galaxy clustering in two directions: along and perpendicular to the line of sight. This approach enables us to account for the impact of an inaccurate fiducial cosmology when calculating the clustering. The components of the separation in the true cosmology (s_||',s_⊥') are expressed in terms of the components of the separation in the fiducial cosmology (s_||,s_⊥) as follows: s_||'=s_|| q_||, s_⊥'=s_⊥ q_⊥. The geometric distortion parameters, perpendicular and parallel to the line of sight, are defined as q_⊥(z_eff) = D_A(z_eff)/D_A^ref(z_eff), q_||(z_eff) = H^ref(z_eff)/H(z_eff), where D_A is the angular diameter distance, H is the Hubble parameter, and the ref superscript indicates that the estimate is done in the reference or fiducial cosmology of the data multipoles. We use an alternative parametrization of the distortion parameters defined as: q_α = q_||^1/3 q_⊥^2/3, q_ϵ = (q_||/q_⊥)^1/3. We implement the distortions directly in the clustering by replacing s → s'(s_ref,μ_ref) and μ → μ'(μ_ref); these can be computed using the re-scaling parameters {q_α, q_ϵ} as follows: s'(s_ref,μ_ref) = s_ref q_α √((1+q_ϵ)^4 μ_ref^2 + (1-μ_ref^2)(1+q_ϵ)^-2), μ'^2(μ_ref) = [1 + (1/μ_ref^2 - 1)(1+q_ϵ)^-6]^-1. The multipoles ξ_ℓ(s_ref) are estimated in the reference cosmology with s'(s_ref,μ_ref) and μ'(μ_ref).
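For concreteness, the coordinate re-scaling defined by the two expressions above can be sketched as follows; the function names are ours and the snippet is not part of the released code. The expression for μ' is an algebraically equivalent rearrangement of the formula above, written so as to avoid dividing by μ_ref.

```python
import numpy as np

def ap_parameters(q_par, q_perp):
    """Convert (q_par, q_perp) into the isotropic/anisotropic dilations (q_alpha, q_eps)."""
    q_alpha = q_par**(1.0 / 3.0) * q_perp**(2.0 / 3.0)
    q_eps = (q_par / q_perp)**(1.0 / 3.0)
    return q_alpha, q_eps

def distorted_coordinates(s_ref, mu_ref, q_alpha, q_eps):
    """Map (s_ref, mu_ref) in the fiducial cosmology to (s', mu') in the true one."""
    s_prime = s_ref * q_alpha * np.sqrt(
        (1.0 + q_eps)**4 * mu_ref**2 + (1.0 - mu_ref**2) * (1.0 + q_eps)**-2
    )
    # Equivalent to mu'^2 = [1 + (1/mu^2 - 1)(1 + q_eps)^-6]^-1, rearranged
    mu_prime = mu_ref * (1.0 + q_eps)**3 / np.sqrt(
        mu_ref**2 * (1.0 + q_eps)**6 + (1.0 - mu_ref**2)
    )
    return s_prime, mu_prime
```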
In order to apply the dilation parameters in our implementation, we interpolate each multipole ξ_ℓ^model at s'(s_ref,μ_ref), and we also compute the observed Legendre polynomials ℒ^obs(μ') using μ'(μ_ref). Finally, we construct ξ^obs(s'(s_ref,μ_ref), μ'(μ_ref)) as the sum of the multipoles times their respective Legendre polynomials, and the observed multipoles in the reference cosmology become ξ_ℓ(s') = ∑_ℓ' a_ℓℓ' ξ_ℓ'(s). As expected, when using the distortion parameters the different multipoles get mixed, and so the matrix a_ℓℓ' is not diagonal. In principle, since we are working up to one loop, the sum is truncated at ℓ=8. However, notice first that for a fixed ℓ, the dominant coefficient is a_ℓℓ. Secondly, the loop contributions of multipoles ℓ=6 and 8 are highly suppressed in comparison to the one-loop contributions of multipoles ℓ=0, 2 and 4, because the correlation function is a very smooth function of μ at large scales. Because of this, it is an excellent approximation to truncate the sum at ℓ'=4. In addition, to simplify our data analysis, we have chosen not to incorporate the dilation parameters nor their effects into our neural network training. This choice has two advantages: first, the neural network is trained without specifying a reference cosmology, leaving the possibility of changing it in order to compare different reference cosmologies; second, training the network to reproduce the multipole vectors is more convenient than training it to reproduce the 2D correlation function. As we have stated, our primary objective is to determine the posterior distributions of the cosmological parameters given our data multipoles. These posterior distributions are found by doing a thorough exploration of the parameter space using MCMC chains. In the following section, we present the methodology we employ for this exploration. §.§ Likelihood and Priors Since we are not interested in model comparison, we can express the posterior distribution of a point in parameter space as: 𝒫(θ|D) ∝ ℒ(D|θ) × π(θ). Here, ℒ(D|θ) and π(θ) are the likelihood and the prior distributions, respectively. We assume Gaussian errors on the 2-point correlation function data, and therefore the likelihood can be written as 𝒫(D|θ) ∝ (χ^2)^(ν-2)/2 exp(-χ^2/2), where ν is the number of degrees of freedom, and χ^2 is defined as: χ^2 = (m⃗-d⃗)^T C^-1 (m⃗-d⃗), where m⃗ and d⃗ are the model and data vectors, respectively, and C is the covariance matrix of the data. Our sample covariance matrix, between bins i and j, is computed from the 1000 MD-Patchy mock realizations, as presented in section <ref>, using the following expression: C_s^(ij) = 1/(N_mocks-1) ∑_m=1^N_mocks (ξ_i^m-ξ̅_i)(ξ_j^m-ξ̅_j), where N_mocks represents the number of mock samples, and ξ̅_i denotes the average of the i^th bin in the analysis. We also include the Hartlap correction <cit.>, which involves rescaling the inverse sample covariance matrix as C^-1 = C_s^-1 (N_mocks-N_bins-2)/(N_mocks-1). Our parameter space exploration is done using <cit.>, an open MCMC code that implements the affine-invariant ensemble sampler proposed in <cit.>. The boundaries of the regions explored are delineated by a set of predefined priors, which are presented in Table <ref>. As shown in the table, we explore seven parameters and hold the remaining parameters constant. Almost all of our parameters are assigned flat priors that correspond to the boundaries of the hyperspace within which our neural network is trained.
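As an implementation aside, the covariance estimate, Hartlap correction and likelihood just described can be summarized in a few lines; this sketch is illustrative only and assumes the mock multipoles are stored as an (N_mocks, N_bins) array.

```python
import numpy as np

def sample_covariance(xi_mocks):
    """Sample covariance from an (N_mocks, N_bins) array of mock data vectors."""
    return np.cov(xi_mocks, rowvar=False, ddof=1)

def hartlap_inverse(cov, n_mocks):
    """Inverse covariance rescaled by the Hartlap factor."""
    n_bins = cov.shape[0]
    factor = (n_mocks - n_bins - 2.0) / (n_mocks - 1.0)
    return factor * np.linalg.inv(cov)

def chi2(model, data, inv_cov):
    diff = model - data
    return diff @ inv_cov @ diff

def log_likelihood(model, data, inv_cov, nu):
    """log of P(D|theta), proportional to (chi^2)^((nu-2)/2) exp(-chi^2/2)."""
    x2 = chi2(model, data, inv_cov)
    return 0.5 * (nu - 2.0) * np.log(x2) - 0.5 * x2
```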
We have checked that these boundaries are sufficiently large so the priors can be considered uniform. The only exception is ω_b, for which we have employed a Gaussian prior.For this work we decided to use a local Lagrangian bias prescription, which means to fix b_s^2 and b_∇^2δ to zero. Further, since c_1,EFT is highly degenerate with higher-derivative bias, we also keep it fixed to zero. Hence, the only nuisance parameters considered in this work, are b_1, b_2 and σ^2_EFT. Further, as we show in the upcoming sections, using this simplification we can recover the cosmological parameters of simulated data with high accuracy, and our posteriors when fitting the BOSS DR12 correlation function are competitive with other analyses of the full-shape power spectrum in the literature.[In a work currently in preparation (Sadi Ramirez et al, In prepararion), which compares compressed and full-shape methodologies, we will relax these assumptions] We observe that our approach is not a complete one-loop theory because of the constraints on the free parameters. Consequently, we expect that the posterior distributions we have obtained would be more extensive if all parameters were unrestricted. However, it is worth noting that the existing full-shape studies in the literature rely on Gaussian priors for certain biasing, EFT, or shot noise parameters. Therefore, they also do not constitute full one-loop analyses. In table <ref> we show the varied parameters and their priors for our baseline full-shape analyses. We keep fixed the slope n_s=0.97, the effective number of relativistic degrees of freedom N_eff=3.046, and the massive neutrino abundance ω_ncdm = 0.00064. Finally, to ensure convergence, we utilized the integrated autocorrelation time, checking it at intervals of 100 steps. Convergence criteria were considered reached if two conditions were met simultaneously: the chain's length exceeded 100 times the estimated autocorrelation time, and the change in this estimation remained below the 1 per cent.§ VALIDATING OUR METHODOLOGY WITH HIGH PRECISION MOCKSWe have introduced our methodology for generating full-shape EFT-GSM models of the correlation function multipoles. Our ultimate objective is to re-analyze the BOSS data sets presented in section <ref>. We will present all the results on real data in section <ref> below. In this section, we establish a series of tests to evaluate the performance of our methodology. These tests involve applying our methodology to the NSERIES simulations presented in section <ref>. We begin in section <ref> by assessing the accuracy and precision with which our methodology can recover the parameters of the simulations. The results presented in this section are built utilising the surrogate models built with the neural network presented in section <ref>, here we assume that these surrogates are a fair representation of the EFT-GSM models. Then, in section <ref>, we test this assumption by comparing the results of our neural network surrogate models with those from the EFT-GSM model. §.§ Testing EFT-GSM ModelAs stated above, we assess the effectiveness of our methodology by recovering the known free parameters of the N-series simulations. In this section, we show the accuracy and precision of these results. We fit the mean multipole of the 84 mocks instead of fitting one individual mock. 
This approach effectively mitigates shot noise errors in the multipole models caused by inaccuracies in the shape of a single multipole. Our error estimates are computed using one thousand MD-Patchy z_3 simulations, which are introduced in section <ref>. For estimating the sample covariance we used the multipoles from the combined sample that includes NGC and SGC, which corresponds to V_eff = 4.1 h^-3 Gpc^3. We rescale the covariance matrix by a factor of 1/10 (10 × 4.1 h^-3 Gpc^3); this is done to test the methodology in a volume of the order of the DESI volume, so that we can assess whether the accuracy of our methodology will suffice for the upcoming next-generation surveys. Given the complexity of modelling clustering statistics at weakly non-linear scales, we should assess the scales in redshift-space separation at which our model still generates accurate fits. With this in mind, we simultaneously fit both the monopole and quadrupole of the correlation function using three different ranges with varying lower limits: s_min = 20, 30 and 40 h^-1 Mpc. We use these fits to determine the range at which our model estimate of the parameters gets closest to the true values. Throughout this work, we maintain a fixed upper limit for our fits at s_max = 130 h^-1 Mpc, and we fix the width of our distance bins to 5 h^-1 Mpc. The resulting parameter estimations, along with the error estimates obtained through Markov Chain Monte Carlo (MCMC), are presented in Table <ref>. The table illustrates that, in general, the errors become narrower as the minimum scale decreases. Specifically, when employing a minimum scale of 20 h^-1 Mpc, we recover the most stringent constraints on all parameters, while remaining consistent with the simulation cosmology. We are also interested in assessing the accuracy of our models, which involves comparing the mean values obtained from our MCMC chains with the actual cosmological values from the NSERIES simulations for our four non-fixed cosmological parameters. Figure <ref> illustrates a triangular plot of our MCMC results. In this plot, the gray lines represent the NSERIES cosmological values, while the colored histograms depict the 1D distribution of each parameter. We observe that for all four parameters, the predictions with a minimum range of 40 h^-1 Mpc perform worse than in the other two cases. This discrepancy is particularly noticeable when comparing the histograms, which appear more centered around the gray lines in the other two scenarios. This trend can be attributed to the smaller-scale bins having smaller error bars, and therefore, when we exclude them, the overall constraining power of the model decreases. The colored contours in the figure represent the 1σ and 2σ confidence surfaces. It is worth noting that the actual NSERIES cosmology falls within 1σ of the mean value for the 20 h^-1 Mpc case, as indicated by the intersection of all gray lines within the solid green contours. Figure <ref> summarizes this information in a more easily interpretable format. The plot demonstrates that, in general, all three models can reproduce ω_cdm, ω_b, and h within 1σ. However, the model with a minimum scale of 40 h^-1 Mpc deviates further from the true values for both ω_cdm and h. Additionally, only the 20 h^-1 Mpc model agrees with the true value of A_s within 1σ.
It is also worth noting that the constraints on both ω_cdm and A_s are tighter in the model with a minimum scale of 20 h^-1 Mpc compared to the one with a minimum scale of 30 h^-1 Mpc. Given that the 20 h^-1 Mpc constraints prove to be more accurate and precise than those of the other two cases, in the following we fix s_min to this value. §.§ Testing Neural Networks In what follows we present a set of tests of the accuracy of the neural network methodology presented in section <ref>. We begin by testing the ability of our models to predict the multipoles of the GSM templates of section <ref>. This is done by asking our trained networks to predict the multipoles of our 5000 test set points at a redshift of z=0.55. For the j^th test point, we define the percent error of the network prediction as P_j^err(s) = |[ξ^GSM_j(s) - ξ^NN_j(s)]/ξ^GSM_j(s)|, where ξ^GSM_j(s) is the value of the multipole predicted by the GSM and ξ^NN_j(s) is the value predicted by the neural network. This error quantifies the size of the emulator errors when compared with the size of our original statistics. Figure <ref> illustrates the percentile plots for the 50%, 68%, 90%, and 95% percentiles of the errors. These plots show the threshold below which the specified percentage of our 5000 errors lies for a given s. We note that all lines are situated below the one percent accuracy line (black line), except for the 90% percentile of the quadrupole at small scales, which is only slightly above. This indicates that our neural network models are capable of reproducing the multipoles of the GSM model with an accuracy below 1%. Additionally, it is worth noting that the 68% percentile line is positioned around the 0.1% error threshold, implying that the majority of our multipoles are predicted with a precision of one-tenth of a percent, while models with an accuracy around one percent are rare. As a second test of our methodology, we conducted two different sets of MCMC fits to the mean mock of the NSERIES from section <ref>. The first set utilizes the GSM model outlined in section <ref>, while the second set employed a neural network surrogate model trained to replicate the behavior of the GSM model at the redshift of the NSERIES mock. We run both sets using two different configurations: the first is our standard range configuration of 20 h^-1 Mpc to 130 h^-1 Mpc, and the second changes the minimum range to 30 h^-1 Mpc. Figure <ref> shows the triangular plots comparing the likelihood contours of both models. The 1-D histograms exhibit remarkable similarity in both plots. This results in parameter predictions that are virtually indistinguishable from each other when using the EFT-GSM model or the surrogate model. It is worth mentioning that when using our standard 20 h^-1 Mpc configuration there are negligible differences in the 2D contours that do not affect the best-fit values and errors. Given that our neural network surrogate models can accurately reproduce the data with a significantly lower convergence time for MCMC chains, all fits presented throughout the rest of this work are built using surrogate models. § RESULTS WITH SDSS-III BOSS CATALOGUES In the previous section we have shown the capability of the EFT-GSM model for recovering the cosmological parameters of the NSERIES simulations; we also tested that our surrogate models accurately reproduce the results of our EFT-GSM code.
In what follows, we apply our methodology to our real galaxy data and compute our constraints on the cosmological parameters from the BOSS DR12 LRG correlation function. §.§ Baseline Analysis We begin this section by introducing our constraints on the cosmological parameters obtained by applying our baseline methodology. We computed three distinct fits, each using a different combination of the BOSS samples introduced in Section <ref>. The first two fits utilize the monopole and quadrupole moments of the datasets z_1 and z_3, respectively. The third fit is a combined analysis where both the z_1 and z_3 multipoles were fitted simultaneously; we labeled the resulting model as z_1+z_3. As mentioned in section <ref>, we select a scale range from 20 h^-1 Mpc to 130 h^-1 Mpc as our standard configuration. As with our NSERIES tests, our covariance matrix is computed using the MD-Patchy mocks introduced in section <ref>. Table <ref> shows the constraints on our four varied cosmological parameters. We note that the error bars for z_1 and z_3 are similar, whereas the constraints for z_1+z_3 are slightly tighter, with h and A_s having error estimates that are ∼25% smaller than the z_1 and z_3 predictions, and the errors on ω_cdm being around ∼33% smaller than the one from z_3. Figure <ref> shows the triangular plot of the MCMC fits to our three BOSS datasets. We observe that all parameters agree with each other within 1σ, except for A_s, which only agrees at the 2σ level between z_1 and z_3. Mismatches between the estimated parameters for the redshift bins z_1 and z_3 are well known and have been reported in other works <cit.>, particularly in the full-shape correlation function analysis of <cit.>. We notice that z_1 predicts lower values for ω_cdm and h, while z_3 predicts a lower value for A_s. As expected, the predictions for each parameter in z_1+z_3 fall between the predictions from the individual samples. Notably, the predictions for ω_b are indistinguishable across all three samples, as the constraints on ω_b are dominated by the prior. This is explored further in section <ref>, where we widen the prior to explore the capability of LSS alone to constrain cosmological parameters and to test the methodology in a more extended parameter space. §.§ Comparison to other Full Shape Analyses As stated at the beginning of this work, several groups have reanalyzed BOSS data using a full-shape methodology. We also mentioned that most of these analyses have been conducted in Fourier space. In contrast, our work is carried out in configuration space; therefore, we are interested in assessing the agreement between these two different methodologies. In this section, we compare the parameter estimations obtained from our configuration space model with a set of Fourier space results (D'Amico <cit.>, Ivanov <cit.>, Philcox <cit.>, Troster <cit.>, and Chen <cit.>); we also compare with the configuration space results from Zhang <cit.>. To ensure a fair comparison, we exclusively consider Zhang constraints derived using BOSS data, without incorporating information from other observations. As stated above, an alternative to full-shape analysis is to expand the parameter space of the compression methodology by introducing a small subset of new free parameters that account for the slope of the power spectrum. The ShapeFit methodology, as presented in Brieden <cit.>, employs this approach to reanalyze the BOSS data; their methodology is also developed in Fourier space.
In this section, we also compare the parameter estimations obtained using our model to those obtained using the ShapeFit method. All of the analyses mentioned so far were carried out on the BOSS DR12 data, with most of them analyzing the data by dividing it into the z_1 and z_3 samples we have utilized. The only exception is D'Amico <cit.>, who employ the LOWZ and CMASS samples instead. Since all these studies investigate the same dataset, and the majority of them use the same samples from this dataset, we expect the parameter estimations to be consistent with each other within the uncertainty inherent to each methodology. The parameter estimations from these methodologies are depicted as square markers in Figure <ref>; the first column of the figure presents our parameter estimations for comparison, indicated by starred markers. We present results for three key parameters: H_0, A_s and the total matter density Ω_m, which includes the mass-energy density from all matter sources, including dark matter and baryons. These specific parameters were selected to facilitate the comparison with the other works. We highlight that only two of the works we are comparing with include A_s in their reports. Our results using these derived parameters are displayed in Table <ref>. Figure <ref> shows that our predictions for both A_s and H_0 are consistent within 1σ with the results of other studies. We also note that our predictions of Ω_m agree within 1σ with all results, except for three: D'Amico <cit.>, Zhang <cit.> and Tröster <cit.>, with whom we agree within 2σ. We point out that D'Amico utilises the LOWZ and CMASS samples instead of the z_1 and z_3 samples that we use; these samples are at slightly different redshifts and use different subsets of the BOSS galaxy sample, which should contribute to the disagreement between our measurements. Tröster employs a wide prior on the parameter n_s, which remains constant throughout our standard methodology. Varying this parameter has an impact on the fitting results for Ω_m and H_0, which should contribute to our slight disagreement. For Zhang, the difference observed could be explained by the two extra parameters they varied, n_s and Σ m_ν, which can explain the difference in error bars and the position of the mean. In Section <ref> below, we explore the effects of varying n_s on our methodology. We show that when this parameter is left unfixed, it influences the position of the mean fit value of ω_b and ω_cdm, consequently leading to a deviation in Ω_m. Our model exhibits a level of precision similar to most works, with the exception of Philcox <cit.> and Chen <cit.>, who report narrower constraints than ours. This is attributed to the fact that both studies incorporate geometrical information from post-reconstruction BAO in Fourier and configuration space, which helps tighten their constraints. Additionally, Tröster <cit.> presents slightly broader constraints compared to our results, which we attribute to their use of broader ω_b priors. We conclude that our results with EFT-GSM are in agreement with other full-shape analyses; we found differences within 1-2σ (1.7σ for D'Amico, 1.6σ for Tröster and Zhang), and this level of agreement can be attributed to the differences in the samples, the number of free cosmological parameters, and the priors.
Therefore, we consider that our EFT-GSM model is a competitive and robust configuration-space analysis that can serve as a complement to other Fourier-space methodologies. §.§ Extensions to Baseline analysis We have introduced our EFT-GSM methodology and demonstrated its capability to accurately recover the cosmology of the NSERIES simulations when assuming an error magnitude similar to that expected from future surveys like DESI. Additionally, we applied our methodology to the BOSS data and found that the results we obtained were consistent with those reported by other groups doing full-shape analyses with BOSS data. We are now interested in running our methodology using different configurations of our model. This can teach us how various aspects of our methodology impact our final constraints on the parameters. Our first test involves exploring the capability of our model to constrain cosmological parameters when we modify the priors of two key cosmological parameters, n_s and ω_b. It is common practice, when conducting clustering analyses of large-scale structure (LSS), to constrain the values of certain cosmological parameters that are poorly constrained by LSS using external observables. With this in mind, in our baseline analysis we held n_s constant at the value specified in Table <ref>, derived from CMB experiments. We also imposed restrictive priors on ω_b. These priors were estimated by measuring the deuterium-to-hydrogen abundance ratio in a near-pristine absorption system toward a quasar. By assuming a reaction cross-section between deuterium and helium-3, one can determine strong constraints on ω_b values. We refer to these priors as Big Bang Nucleosynthesis (BBN) priors throughout this work. Here, we explore the constraints we obtain on the cosmological parameters when extending the analysis in these two parameters, by relaxing the priors on ω_b and letting n_s be free: ω_b: 𝒩[0.02237, 0.00037], n_s: 𝒰[0.5, 1.5]. The results of these analyses are shown in Table <ref> and Figure <ref>. We note, when comparing the yellow (BBN prior) posteriors/contours with the green ones (10 × BBN priors), that in general widening the priors on ω_b reduces the precision of all other cosmological parameters, in particular h and ω_cdm, whose errors are 2 and 1.6 times larger, respectively, although there are no significant shifts of the central values of the posteriors. This is consistent with the results reported in Tröster <cit.>, who use priors of around 10 times the BBN results and find wider posteriors than other reanalyses of the BOSS DR12 data. This is shown in figure <ref>. Ivanov <cit.> also investigated the effect of varying the priors on ω_b, finding significantly weaker constraints on h and milder effects on Ω_m; this is consistent with our results, as less constraining power in ω_cdm translates to Ω_m. Brieden <cit.> also explored extending the priors in full-shape (and ShapeFit) analyses. They find that in their full-shape fits the constraints on ω_b derived from the amplitude of the BAO depend on the ratio ω_b/ω_cdm. Therefore, in the prior-dominated regime the tight constraints on ω_b help to fix the shape and narrow the posterior of ω_cdm. When using wider priors, the ability of the model to fit the amplitude of the BAO drives the accuracy of the fitting results. We would like to highlight that, in the case of the configuration-space multipoles, the effect of varying ω_b is not isolated in the shape or position of the BAO peak, as shown in Figure <ref>.
We also analyze the effect of varying the parameter n_s, which, as stated above, is fixed to the Planck value in our baseline analysis. By comparing the yellow (n_s fixed) and magenta (n_s with a flat prior) contours in Figure <ref>, we note that fixing n_s has a strong effect on the precision of ω_cdm but a smaller effect on h (as has been observed in previous analyses in Fourier space <cit.>). The rationale is that the n_s and ω_cdm information comes from the slope; thus, again, fixing the shape contributes to finding tighter constraints on ω_cdm. The results are shown in Table <ref>: the case with varying n_s while keeping the BBN prior on ω_b shows errors 2 times larger in ω_cdm and 1.25 times larger in h, and it also affects the constraints on A_s by a factor of 1.4 in the errors. We observe as well that with free n_s, the posteriors of ω_cdm and h are shifted towards higher values but are still consistent with each other within 1σ; this behavior is also consistent with previous analyses in Fourier space, where shifts of 1σ and 0.5σ in Ω_m and H_0, respectively, were found <cit.>. §.§ Exploring the Information Content of Multipoles In this last section we explore the information content and constraining power of the multipoles. Our last test consists of running a new MCMC fit on the z_1+z_3 dataset using our standard configuration. However, this time we only fit the monopole of the correlation function. Figure <ref> displays the results of this monopole-only fit (red dashed lines) compared to our baseline analysis, which utilizes both the monopole and quadrupole (blue lines and filled contours). The results for both cases are summarized in Table <ref>. Interestingly, we observe that the monopole-only approach is capable of recovering our core cosmological parameters, namely ω_cdm and h, with nearly the same level of accuracy (Δω_cdm=0.0006, and Δh=0.001) and precision (Δσ_ω_cdm=0.0005, and Δσ_h<0.001) as when including the quadrupole. As expected, most of the valuable cosmological information resides within the monopole of the correlation function. However, A_s becomes poorly constrained. This is also expected, because RSD mainly affects the amplitude of the quadrupole-to-monopole ratio at large scales, which breaks the degeneracy in the parameter β ≡ f/b_1. Since A_s is highly degenerate with the large-scale bias, the inclusion of the quadrupole induces tighter estimates of A_s. The results obtained in this section are expected on theoretical grounds. However, the quadrupole also contains information on the BAO scale, and one would expect this to translate into a better estimation of ω_cdm and h, perhaps only a small improvement. Nevertheless, according to our results, the latter is not happening at all, which we find slightly surprising. § CONCLUSIONS There are two distinct philosophies for extracting cosmological information from the shape of the two-point statistics (2PS) of LSS. In the first approach, denoted as the compressed methodology, the cosmological template is fixed and fits are done over a small set of compressed variables related to the BAO and RSD observables. By construction, the compressed methodology is designed to be more agnostic about the model but offers less modeling freedom. In the second approach, denoted as full modeling or full-shape modeling, the fits are done with a varying template where all the parameters of an a priori chosen model are simultaneously fitted, including the cosmological parameters. Full modeling has shown more constraining power compared with the classical compressed approaches.
However, it is naturally more costly in computational time, even if in recent years several methods that make full-shape analysis efficient have been developed. Extensions of the compressed methodology have been proposed as well, achieving levels of accuracy similar to the full-shape methodology. Since these methodologies complement each other and have different strengths and weaknesses, stage-IV experiments are currently working to determine the optimal methodology for extracting cosmological information. In this work we focused on investigating the full-shape methodology in configuration space. Until now, most of the analyses of last-generation surveys with a full-shape methodology have been developed in Fourier space. Therefore, there is an incentive to explore full-shape analysis in configuration space. We present a full-shape analysis of the two-point correlation function of the BOSS DR12 galaxy sample. Our goal was two-fold: 1) to explore the potential of configuration-space analysis and contrast it with its Fourier-space counterpart, and 2) to show the efficiency and robustness of using neural network acceleration for analysing real data. In order to analyze the anisotropic clustering signal in configuration space, we use an EFT-GSM model to build second-order perturbation theory templates of the correlation function. While the running time of our model implementation is relatively short (on the order of two seconds), executing a complete MCMC chain using our current EFT-GSM model implementation would require approximately 48 hours with 128 CPUs, due to the substantial number of evaluations required. This represents a significant computational expense. To alleviate the computational cost of our methodology, we employ neural network emulators to construct surrogate models of our EFT-GSM templates. These neural networks are significantly faster to execute and can converge in as little as 15 minutes when using the same 128 CPUs. We performed a systematic validation of our methodology in two categories: * Model Accuracy. We tested the ability of our methodology to reproduce the cosmological values of the high-resolution NSERIES simulations, corresponding to V_eff = 40 h^-3 Gpc^3. We tested three minimum scales: s_min = 20, 30 and 40 h^-1 Mpc. Our conclusion is that by utilizing a minimum scale of s_min = 20 h^-1 Mpc, we maximize the accuracy and precision of our methodology. Additionally, the predicted value of A_s only agrees with the true value to within 1σ at this scale. The cosmological parameter estimation is least accurate when s_min = 40 h^-1 Mpc, which can be attributed to missing the data bins with the smallest error bars. * Emulator Accuracy. The models presented in this work do not directly use the EFT-GSM model. Instead, we employ neural networks to construct surrogate models of the multipoles. We assessed the ability of these surrogate models to reproduce the true predictions made by the full EFT-GSM model. We tested this by constructing a test set of points in parameter space. We calculated the multipoles using both the EFT-GSM code and the surrogate model independently. We observed that the percentage difference between these models is usually less than 1%, and for most models it is closer to 0.1%. Furthermore, we noticed that the MCMC fits generated using our surrogate models provide parameter estimations that are virtually indistinguishable from those produced by the full model. After validating the methodology, we conducted fits to the BOSS data using our baseline analysis.
We used the combined sample from the BOSS DR12 final analysis, and we fitted the redshift bins z_1 and z_3 separately, while the final fit was built on both bins fitted simultaneously. The fit including both bins resulted in slightly tighter constraints on the cosmological parameters, with the combined sample having constraints on h and A_s that are ∼25% smaller than those of the individual samples, and an ω_cdm constraint ∼33% smaller than that of the z_3 bin. The measured values of the cosmological parameters are in agreement with each other across all three samples within 1σ, with the only exception being the predicted value of A_s between z_1 and z_3, which agrees at the 2σ level. We compared our results with previous full-shape analyses performed on BOSS data. We include in the comparison six full-shape methodologies: five of them in Fourier space <cit.>,<cit.>,<cit.>,<cit.>,<cit.> and one in configuration space <cit.>. We also compare our results with those obtained using the ShapeFit methodology <cit.>. We find that our predictions for both H_0 and A_s agree within 1σ with the results from all seven works we compare with. Our predictions for Ω_m agree within 1σ with four out of the seven works, but we only agree within 2σ with the remaining three. We propose that these tensions can be explained by two of these three works using broader priors on n_s, and by the other work using a slightly different dataset. We also notice that our constraints have a level of precision comparable to that of five out of the seven works. The remaining two works included post-reconstruction information of the power spectrum and are therefore able to achieve better precision than us. We performed complementary tests to gain a better understanding of the impact of priors on our constraints. We explored extending the baseline analysis by relaxing the priors on ω_b to 10 times the current range 𝒩[0.02237, 0.00037] and by letting n_s be a free parameter with a flat prior of [0.5, 1.5]. When we relax the priors on ω_b, we find significantly weaker constraints on h and a milder effect on ω_cdm. When we vary n_s, we note a strong effect on the precision of ω_cdm but a smaller effect on h; this is because both n_s and ω_cdm have a strong effect on the slope of the multipoles. All of these observations are consistent with what other works have found. Finally, we explored the information content of the multipoles. We conducted our standard fit using only the monopole of the correlation function and compared it with our baseline analysis that includes both the monopole and quadrupole. We discovered that the monopole-only fit already provides constraints on ω_cdm and h with similar accuracy and precision as when using both multipoles. This suggests that the majority of the relevant cosmological information is contained in the monopole of the correlation function, which we find slightly surprising, given that the quadrupole also contains some BAO information. We also noted that the constraints on A_s do worsen significantly, as expected. This work was supported by the high-performance computing clusters Seondeok at the Korea Astronomy and Space Science Institute. MV, SR and SF acknowledge PAPIIT IN108321, PAPIITA103421, PAPIIT116024 and PAPIIT-IN115424. MV acknowledges CONACyT grant A1-S-1351. This research was partially supported through computational and human resources provided by the LAMOD UNAM project through the clusters Atocatl and Tochtli.
LAMOD is a collaborative effort between the IA, ICN and IQ institutes at UNAM.AA is supported by Ciencia de Frontera grant No. 319359, and also acknowledges partial support to grants Ciencia de Frontera 102958 and CONACyT 283151.JHEP | http://arxiv.org/abs/2310.17834v1 | {
"authors": [
"Sadi Ramirez",
"Miguel Icaza-Lizaola",
"Sebastien Fromenteau",
"Mariana Vargas-Magaña",
"Alejandro Aviles"
],
"categories": [
"astro-ph.CO"
],
"primary_category": "astro-ph.CO",
"published": "20231027011010",
"title": "Full Shape Cosmology Analysis from BOSS in configuration space using Neural Network Acceleration"
} |
In this work, we present , a multimodal 2D/3D dataset with rendered views of more than stylized 3D shapes carefully annotated at the part-instance level, alongside matching RGB point clouds, 3D textured meshes, depth maps, and segmentation masks.covers shape categories, fine-grained part categories, and fine-grained material classes that can be compositionally applied to parts of 3D objects.We render a subset of one million stylized shapes from four equally spaced views as well as four randomized views, leading to a total of renderings. Parts are segmented at the instance level, with coarse-grained and fine-grained semantic levels. We introduce a new task, called Grounded CoMPaT Recognition (GCR), to collectively recognize and ground compositions of materials on parts of 3D objects. Additionally, we report the outcomes of a data challenge organized at CVPR2023, showcasing the winning method's utilization of a modified PointNet++ model trained on 6D inputs, and exploring alternative techniques for GCR enhancement. We hope our work will help ease future research on compositional 3D Vision.The dataset and code have been made publicly available at <https://3dcompat-dataset.org/v2/>.3D vision, dataset, 3D modeling, multimodal learning, compositional learning.Habib Slim, Xiang Li, Yuchen Li, Mahmoud Ahmed, Mohamed Ayman, Ujjwal Upadhyay,Ahmed Abdelreheem, Arpit Prajapati, Suhail Pothigara, Peter Wonka, Senior Member, IEEE, and Mohamed Elhoseiny, Senior Member, IEEECorresponding authors: H. Slim and M Elhoseiny with the Department of Computer Science, KAUST, Thuwal, Saudi Arabia.E-mail: [email protected]; [email protected] A. Prajapati, S. Pothigara are with Polynine, San Francisco, California. X. Li, Y. Li, M. Ahmed, M. Ayman, U. Upadhyay, A. Abdelreheem, P. Wonka are with the Department of Computer Science, KAUST, Thuwal, Saudi Arabia.January 14, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTIONMultiple datasets have been proposed to facilitate 3D visual understanding including ShapeNet <cit.>, ModelNet <cit.>, and PartNet <cit.>. High-quality datasets like OmniObject3D <cit.> and ABO <cit.> were introduced in an attempt to provide 3D assets with high-resolution, realistic textures. 3D-Future <cit.> was also proposed and contains 10K industrial 3D CAD shapes of furniture with textures developed by professional designers. More recently, Objaverse <cit.> and its larger counterpart Objaverse-XL <cit.> were introduced, which contain more than 10 million artist-designed 3D objects with high-quality textures. Despite these notable efforts to advance 3D understanding, recent object-centric 3D datasets (e.g. <cit.>) and 3D scene datasets (e.g. <cit.>) lack part-level annotations. 
ShapeNet-Part <cit.> was proposed as an extension to ShapeNet <cit.> with part-level annotations, but only contains coarse-grained part segmentations extracted using a deep active learning framework. In contrast, PartNet <cit.> builds on ShapeNet <cit.> and provides fine-grained part segmentation labels, but similarly does not contain material information.Material information offers several distinct advantages. First, it provides extra semantic information about an object, which enables a variety of important 3D object understanding tasks. Second, it helps create more realistic renderings, making the models better suited for transferring from synthetic to real scenarios. Finally, applying different materials to the same geometric 3D shape can be treated as a special form of training data augmentation. Current datasets lack part-level material information which underscores the need for a new resource. Our dataset fills this gap and invites researchers to explore new challenges and opportunities in 3D visual understanding. is an extension of the 3DCoMPaT <cit.> dataset, which was previously published at a conference. We introduce a new richly annotated multimodal 2D/3D large-scale dataset: , standing for Compositions of Materials on Parts of 3D Things. Our dataset comprises stylized 3D shapes rendered from views, across shape categories, unique fine-grained part names and coarse-grained part names, and unique materials from material classes. We sample object-compatible combinations of part-material pairs to create styles per shape. Each object with the applied part-material composition is rendered from equally spaced views and random views. We render images for a total compositions, leading to [Figure detail: (#) × (#) × (#)× 2 (#) =views] total rendered views. Examples of some rendered compositions and views are provided in Figures <ref> and <ref> respectively. Dataset. To create our dataset, we start with unique geometries which we segment at a fine-grained part level into a total of segmented parts (leading to average part instances per shape). For each part of each shape, human experts determine a list of compatible/applicable materials. Then, we generate a stylized model by sampling over the compatible materials of each part with a limit of styles per shape, leading to stylized shapes. Previous work. Our proposed dataset differs from previous work in numerous ways. Our dataset contains a diverse set of high-quality materials: for each part found in every 3D shape, we annotate possible compatible materials that may be applied to each part, allowing us to generate multiple material combinations for a single shape (we refer to a combination of materials (composition) applied to a model as a style). We also enrich our dataset with 2D renders, depth maps, part masks, and material masks for each rendered view, and hierarchical part and material annotations in both the 2D and 3D modalities.In summary, our dataset can be distinguished from existing datasets by the following four key aspects:a Human-generated vs. 3D scanned geometry. ScanNet <cit.> and Matterport3D <cit.> datasets are scanned 3D geometry datasets. Conversely, ShapeNet <cit.> and our dataset are human-created, mostly by professional 3D modelers. Human-created geometry is generally of higher quality and has fewer artifacts, but is however more expensive and time-consuming to collect. For the Objaverse <cit.> dataset, the authors thus propose to scrape 3D models from well-known web repositories which are mostly created by artists. 
However, the quality of these models is not guaranteed, and models are not annotated with part-level information. The realism of the collected objects is also not a given, as models in these repositories are not designed to be realistic, but rather to be visually appealing as they are typically targeted at the video game industry. Our dataset is human-generated and is designed to be realistic, and comprises high-quality textures and geometry.b Part segmentation information. For some datasets, none or only a subset of the shapes have segmented part information, which is an important feature of datasets like PartNet <cit.> and is also a core characteristic of our dataset. We provide part segmentation information following two semantic levels, in both 2D and 3D modalities.c Texture coordinates, textures, and materials. A key focus of our work lies in the stylization of 3D shapes with appropriate texture coordinates, textures, and materials. To achieve a superior level of quality when rendering numerous material compositions on each shape, our models are equipped with human-verified texture coordinates and part-wise material compatibility information. While previous attempts have been made to enhance a subset of ShapeNet with part-wise material information <cit.>, it falls short in comparison to our work in terms of the number of shapes (3080 vs. ), shape classes (3 vs ), and materials (6 vs. , and fine-grained annotated classes).d Automatically generated vs. human-generated annotations. shapes are annotated manually by a team of trained humans. Part names are consistent across and within categories, and are defined in shape category-specific guidelines. Each guideline is defined by a team of researchers and professional modelers, and contains rigorous definitions and examples for each part that may occur within a given shape category. All models are manually segmented at a part level rather than with deep learning models like OpenRooms <cit.> or ShapeNet-Part <cit.>.Grounded CoMPaT Recognition. We introduce a novel task called CoMPaT recognition, which focuses on collectively recognizing and grounding shape categories along with the associated part-material pairs composing them. In Figure <ref>, we illustrate the task with an example. Given an input shape, the task aims to recognize both the shape category and all part-material pairs composing it. In the example shown, an agent first needs to identify the shape as a chair, and then all part-material pairs, such as a "seat" made of "leather" and a "backrest" made of "fabric". This novel task, compatible with both 2D and 3D modalities, goes beyond recognition with a grounded variant requiring the precise segmentation of parts alongside the recognition of associated materials. Contributions. Our work introduces a new dataset, and introduces the GCR recognition task. 
The contributions of this work can be summarized in the following points:* We propose a new dataset comprised of stylized models to study the composition of materials on parts of 3D objects.Our dataset contains (a) a diverse set ofmaterials for 3D shapes, where (b) material assignment is done at a coarse and fine-grained part-level; (c) segmentation masks in 2D and 3D, alongside (d) human-verified texture coordinates.* We validate our dataset with experiments covering 2D and 3D vision tasks, including object classification, part recognition and segmentation, material tagging and shape generation.* We also propose Grounded CoMPaT Recognition (GCR), a novel task aiming at collectively recognizing and grounding compositions of materials on parts of 3D objects. We introduce two variants of this task, and leverage 2D/3D state-of-the-art methods as baselines for this problem.§ RELATED WORKEarly efforts. Several datasets have been initially proposed to facilitate 3D visual understanding, such as ShapeNet <cit.>, ModelNet <cit.>, and PartNet <cit.>. ModelNet <cit.> is one of the first datasets of 3D objects, and includes 40 shape categories and 12K unique 3D shapes. ShapeNet <cit.> is a large-scale dataset of 3D textured objects, with 55 shape categories and 51K unique 3D shapes. ShapeNet is annotated at the shape-level, and categories are extracted from WordNet <cit.>. It has emerged as an important benchmark for deep learning-based modeling, representation, and generation of 3D shapes. ObjectNet3D <cit.> is an object-centric dataset of 3D CAD models with 100 shape categories and 90K unique 3D shapes, and approximate 2D-3D image alignments. ModelNet, ShapeNet and ObjectNet3D are object-centric datasets, and do not contain part-level annotations. Part-understanding. In an attempt to bridge this gap, ShapeNet-Part <cit.> was first proposed as an extension to ShapeNet <cit.> with part-level annotations. It contains 16 shape categories and 31K 3D shapes, but part annotations are only provided at a coarse-grained semantic level, and are extracted using a deep active learning framework instead of human annotation. PhotoShape <cit.> is one of the earliest efforts in gathering 3D shapes with high-quality textures. It contains 5.8K 3D shapes from 29 shape categories, and proposes to transfer materials properties regressed from real images to untextured 3D shapes. PartNet <cit.> was built as a large-scale dataset of 3D shapes annotated with fine-grained, instance-level, and hierarchical part segmentations. PartNet is also created on top of ShapeNet <cit.> and contains 26K 3D shapes from 24 shape categories. PartNet is a valuable resource for advancing research in 3D shape analysis and understanding. Our work stands apart from PartNet in three main significant ways: * a We provide both coarse-grained and fine-grained material information for each part of each shape.* b We enrich 3D shapes with 2D renders, part masks, material masks, and depth maps.* c We use a human verification process to ensure the compatibility of sampled materials with each shape/part. High-resolution datasets. In an effort to provide high-quality and realistic shapes and textures, OmniObject3D <cit.> and ABO <cit.> datasets introduced 3D assets with rich, high-quality textures. Google Scanned Objects <cit.> is a scanned dataset of reconstructed 3D objects with high-quality textures and geometries, and contains 1K 3D shapes from 17 diverse categories of small objects. 
OmniObject3D <cit.> is a scanned dataset of 3D objects with high-quality textures, and contains 6K 3D shapes from 190 shape categories based on ImageNet <cit.> and LVIS <cit.>. ABO <cit.> is a dataset of 3D objects with high-quality textures and geometries, and contains 8K 3D shapes from 63 shape categories based on product catalogs extracted from Amazon.3D-Future <cit.> presented a dataset comprising 10K industrial 3D CAD shapes of furniture developed by professional designers. More recently, Objaverse <cit.> and Objaverse-XL <cit.> expanded the horizon of 3D object datasets by releasing over 10 million artist-designed 3D objects with high-quality textures.Despite these significant strides in advancing 3D understanding, these modern object-centric 3D datasets (e.g. <cit.>) and scanned datasets (e.g. <cit.>, <cit.>, <cit.>) lack part-level annotations. PartNet <cit.>, building on ShapeNet <cit.>, offers fine-grained part segmentations of 3D meshes but does not include material information. The absence of such part-level annotations and material data points to the significance of a dataset like , which bridges these gaps and serves as a comprehensive resource for furthering research in 3D visual understanding.Comparison with existing work. In Table <ref>, we compare with existing prevalent 3D datasets. We distinguish between datasets originating from 3D artists (first group), scanned objects datasets (second group), datasets with aligned 2D images (third group), and datasets with part-level annotations (fourth group). We scrutinize fundamental aspects, including the number of shapes provided, whether or not stylized shapes are included, the number of classes represented and whether shapes come from scans or are designed from CAD tools. Additionally, we assess the availability of material annotations. We differentiate cases where textured shapes are provided but without material annotations () like in GSO <cit.> and Objaverse-XL <cit.>, from cases where they are provided at a coarse-grained level only (e.g. 3D-Future <cit.> in which only coarse material annotations are available), or are provided at both coarse and fine-grained levels. Material annotations and part-wise material annotations are important as they provide essential contextual information about the surface properties and appearance of objects, facilitating compositional understanding and analysis of 3D shapes.We also consider the inclusion of aligned 2D images, and differentiate between cases where images are pseudo-aligned or exactly aligned with matching 3D shapes. Pseudo-alignment includes using a manual 3D alignment pipeline with close candidate CAD models <cit.>, or using an automatic 3D alignment strategy with exactly matching shapes (e.g. based on differentiable rendering <cit.>). Exact alignments are achieved by producing synthetic 2D images from 3D models using a rendering engine, and then projecting the 3D models into the 2D images using the camera pose ground truth (e.g. 3D-Future <cit.>, this work). In contrast to other works, emerges distinctively by offering a large collection of stylized shapes, each accompanied by complete multi-level part-material information. With PartNet, is the only dataset with instance-level part annotations, which are essential in tasks involving denumerating parts composing a shape. 
Notably, also offers a large collection of aligned 2D/3D data with images andshapes, enabling its use in diverse multi-modal learning applications benefiting from scale like object classification, part recognition or novel view synthesis. Mesh resolutions. In Figure <ref>, we compare model resolution statistics for 3D CAD models from and ShapeNet <cit.>. This comparison is important because ShapeNet <cit.> serves as the CAD model data source for PartNet <cit.> and ShapeNet-Part <cit.>, which are two of the most prominent datasets for 3D part understanding. We provide density plots over vertex counts, edges counts, and faces counts for 3D CAD models from both datasets. We also provide the median values of these metrics for both datasets. We show that exhibits shapes with significantly higher average numbers of vertices, edges, and faces when compared to ShapeNet and PartNet. While polygon count is not a perfect proxy for shape visual quality and realism, it is a useful metric for comparing the relative complexity of meshes in each dataset. This quantitative assessment underscores the richness of geometries within , making it a valuable resource for advancing research in 3D shape analysis and understanding.§ The dataset is based on a collection of artist-designed 3D CAD models collected and annotated in collaboration with an industry partner. It contains geometries annotated and segmented at a fine-grained part-instance level, with material compatibility information for each annotated part. For each shape, rendered views are provided from canonical and random viewpoints (see Figure <ref>). For each rendered view, depth maps, part maps, and material maps are rendered (see Figures <ref> and <ref>).All annotations are provided by trained annotators following a rigorous multi-stage review process. is a richly annotated, multimodal 2D/3D dataset: In Figure <ref>, we illustrate all data provided for a single stylized shape from our dataset. §.§ Dataset 3D Data. Alongside each stylized shape, we provide a part-segmented textured 3D mesh, an RGB pointcloud, and point-wise part and material annotations. All part segmentation information is provided in coarse-grained and fine-grained semantic levels. RGB pointclouds can be resampled at any resolution starting from the available textured 3D meshes. In Figure <ref>, we illustrate the 3D data provided for a single stylized shape. 2D Data. Each stylized shape is rendered from viewpoints: canonical viewpoints and random viewpoints. Canonical viewpoints are equally spaced around the shape. Random viewpoints are sampled uniformly on the upper hemisphere centered on the center of the shape's bounding box. In Figure <ref>, we showcase the 2D data provided for the first canonical viewpoint across four different 3D shapes. Each 2D image is accompanied by part segmentation masks, material masks, and depth maps. For each image, camera parameters are also provided. Part segmentation masks and material masks are available in two semantic levels: coarse-grained and fine-grained.§.§ Data collection pipeline The complete data collection pipeline is depicted in Figure <ref>, and includes the following steps:* Collection and Editing. 3D shapes are collected and edited by our industry partner.* Part annotations. Annotators follow each category-level guideline when adding instance-level part annotations and segmentations to each shape.* Material assignments. 
Annotators select compatible materials for each part of each shape, from among possible coarse classes.* Stylized shapes. We sample a set of fine-grained materials for each part of each shape, which we refer to as a style.* Rendering. We render each shape from multiple viewpoints with matching masks, depth maps and pointcloud data, as detailed in Section <ref>. Collection and Editing. All 3D shapes are collected by our industry partner. Editing steps include model scaling, the correction of UV maps, the removal of undesirable/invalid meshes in the shape (e.g., additional objects like a vase on top of a table), etc. Furthermore, as visible in Figure <ref>, all shapes are consistently aligned across classes and orientations are consistent for all 3D models.To align shapes, we use part annotations as a prior to automatically rotate a majority of misaligned shapes (for example, using the fact that the "" part should appear at the back of a shape). We then manually adjust the remaining misaligned shapes by using a web visualization tool (see Figure <ref>). 3D shapes are also scaled to fit within a unit cube centered at the world origin. Part annotations. By combining expert knowledge with the analysis of unannotated shapes, we define fine-grained part-level guidelines. A guideline is defined for each shape category and provides a non-ambiguous definition of each possible fine-grained part that can occur in shapes belonging to the category. Annotators follow each category-defined guideline when adding instance-level part annotations and segmentations to each shape (see Figure <ref> for an example of a shape guideline for theshape categories). Part segments and names are iteratively refined using a web-based shape visualizer (see Figure <ref>). This browser allows reviewers to visualize segments for a specific part class in a shape category, allowing to efficiently verify part semantics consistency across shapes and quickly identify annotation errors. Corner cases, when frequent enough, are identified and further refined into new meaningful part denominations for the category. Material assignments. In Figure <ref>, we illustrate material categories in with samples from our collection. We collect Physics-Based Rendering (PBR <cit.>) materials from various free-to-use repositories, including the NVIDIA vMaterials[vMaterials library: <https://developer.nvidia.com/vmaterials>] library and the ambientCG[ambientCG public domain repository: <https://ambientcg.com/>] public domain material library. We filter collected PBR materials to ensure 1) overall visual quality, 2) compatibility with our rendering pipeline, 3) visual affinity with our collected shapes. We collect a total of coarse material categories, for total PBR materials. With each segmented part, a set of compatible material categories is provided by the annotators (e.g. "metal, wood" for a leg in a chair.). The list of compatible materials for each part of each shape is first broadly defined at theshape category level and refined on a case-by-case basis for specific shapes. Stylized shapes. Using the collected material compatibility information associated with each part, we randomly sample a material for each part of a shape to create a style. A composition is a combination of materials that could be applied to any shape, and a style is an instance of a composition applied to a specific shape.We detail the process of shape stylization in Figure <ref>. An average of styles are sampled per shape. 
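As a concrete illustration of this stylization step, the following minimal Python sketch samples one style from a shape's part-material compatibility lists and counts the number of possible styles (the quantity 𝒩(S) defined right below); the dictionary layout and all names are hypothetical and only mirror the procedure described above.

import random
from math import prod

def sample_style(compatible_materials, rng=random):
    """One style: a compatible material chosen for every part of the shape.

    `compatible_materials` maps each part name to the list of material classes
    that annotators marked as applicable to that part (layout is hypothetical).
    """
    return {part: rng.choice(mats) for part, mats in compatible_materials.items()}

def n_styles(compatible_materials):
    """Number of distinct styles: product over parts of the number of compatible materials."""
    return prod(len(mats) for mats in compatible_materials.values())

# Toy compatibility lists for a chair-like shape.
chair = {"seat": ["leather", "fabric", "wood"],
         "backrest": ["leather", "fabric"],
         "leg": ["metal", "wood"]}
print(sample_style(chair))   # e.g. {'seat': 'fabric', 'backrest': 'leather', 'leg': 'wood'}
print(n_styles(chair))       # 3 * 2 * 2 = 12 possible styles for this shape

Rejecting duplicate draws until the per-shape style budget is reached reproduces the sampling described above.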
The number of possible styles per shape S can be defined as: 𝒩(S) = ∏_p ∈𝒫(S) |ℳ(S, p)|, where 𝒫(S) denotes the set of parts belonging to shape S, and ℳ(S, p) the set of materials compatible with part p in shape S. For 14.6% of shapes, 𝒩(S) < , due to either a small number of parts or of compatible materials per part. To compensate for this effect, we oversample from shapes where 𝒩(S) ≫ to reach the desired average of styles per model. §.§ Coarse/Fine-grained semantics provides part and material annotations in two hierarchical semantic levels: coarse and fine. Part hierarchies. Fine-grained part classes are defined from a hand-defined shape category-specific nomenclature. Coarse-grained part semantics are defined as shape category-specific groupings of fine-grained part categories. For example, in the "" shape category, the , and fine-grained parts are all merged into the coarse part. Shape categories in our dataset can share part names by default. Parts that are category-specific and relevant to the category are prefixed by the name of the category (for example: ). We visualize these two semantic levels in Figure <ref> for a single 3D shape, and highlight the resulting grouped parts. The coarse-grained semantic level considerably simplifies the compositional structure of shapes, while the fine-grained semantic level provides a more detailed description of the composition of shapes. In the coarse-grained setting, the number of shape category-specific parts is also significantly reduced, while the number of parts shared across shape categories is increased. In Figure <ref>, we provide additional examples of coarse-grained and fine-grained part semantics groupings for three distinct shape categories. Our coarse-level part semantics can be used for tasks that require a high-level understanding of shapes, while fine-grained part semantics can be used for tasks that require more detailed, shape category-specific understanding. We plot in Figure <ref> the sorted number of occurrences of each part (top) for fine and coarse semantic levels. We also compare the average number of unique parts per object (bottom). In the coarse-grained semantic level, part occurrences are concentrated on a smaller number of parts, while the distribution for the fine-grained level is clearly long-tailed. The average number of parts per object is also equalized across shape categories in the coarse-grained level, while some shape categories, like cars and bicycles, present a significantly higher number of parts per object in the fine-grained level. Overall, the coarse-grained semantic level provides a more balanced distribution of parts across shape categories, while the fine-grained semantic level provides a more detailed description of the composition of shapes. Material hierarchies. Coarse-grained materials correspond to a high-level set of material categories (e.g. , , , etc.). Each high-level material category is composed of fine-grained specific materials belonging to that category (e.g. "" in ""). In Table <ref>, we detail the number of fine-grained materials within each coarse category in . §.§ Rendering Scene. We render each shape in the same scene with a single directional light and three area lights positioned around the shape. In Figure <ref>, we detail our rendering scene setup with an example shape. The stylized shape is placed inside an ovoid surface with a white color, to ensure that the shape is always rendered on a uniformly white background.
Projected shadows only appear on the z=0 plane on which the shape is placed. When rendering depth maps and masks, the background surface is removed from the scene. All images are rendered in a 256×256 resolution, and 2D images are encoded in the PNG format. Depth maps are stored in the OpenEXR format to accommodate absolute distances to the image plane, which are represented using floating-point values. Viewpoints. Each stylized shape is rendered from multiple perspectives: canonical viewpoints and random viewpoints. We first translate each shape above the z = 0 plane. Camera viewpoints are defined in spherical coordinates (ϕ, θ) where the origin is set to the center of the shape's bounding box, which we denote 𝐨_c. The camera is rotated around 𝐨_c by ϕ and θ. Canonical viewpoints are distributed evenly around the shape with a fixed elevation θ. We set the base spherical angle ϕ to 40 degrees and then increment it by 90 degrees for each of the four viewpoints, while keeping θ fixed at 0 degrees. Random viewpoints are sampled uniformly from an upper hemisphere above the plane. We randomly sample ϕ from the range [0, 2π] and θ from the range [-1/3π, 1/3π]. Using the obtained ϕ and θ angles, we define the position and orientation of the camera. The camera's initial position, denoted 𝐜_0, is rotated around 𝐨_𝐜. The orientation of the camera is then adjusted to ensure that the image plane is centered on 𝐨_𝐜. Extrinsic and intrinsic camera parameters are recorded for each view and are provided alongside the rendered images. The sampling procedure of camera parameters is detailed in Algorithm <ref>. §.§ Toolbox To support the use of , we provide a toolbox for easily loading and visualizing the data. Mainly, we provide the following elements:* Python API for easily loading the data, based on PyTorch <cit.> and WebDataset <cit.>.* Web-based browser for easily exploring 3D shapes and part annotations in both coarse and fine-grained semantic levels (see Figure <ref>).* Documentation and notebooks to facilitate the use of the dataset. All of these elements are available on the website[website: <https://3dcompat-dataset.org/doc>]. § EXPERIMENTS §.§ Classification and Segmentation Shape classification. As illustrated in Figure <ref>, the shape class distribution of our dataset is significantly long-tailed. We conduct shape classification experiments on 2D renders and 3D XYZ pointclouds to assess the difficulty of this task on our dataset. All pointclouds are sampled with a resolution of 2048 points, and all methods are trained from scratch for 200 epochs. For 2D classification, we fine-tune ResNet models <cit.> pretrained on ImageNet <cit.> for 30 epochs. We report 2D and 3D shape classification results in Table <ref>. We reach a maximum top-1 accuracy of 90.20% on 2D renders with ResNet-50, and 85.14% on 3D pointclouds with CurveNet <cit.>. Part segmentation. We conduct 3D part segmentation experiments on pointclouds and 2D renders to assess the difficulty of this task on our dataset. We provide results for both fine-grained and coarse-grained 3D part segmentation in Table <ref>. We report point-wise accuracy (shape-agnostic) and mIoU for each model. For mIoU, we consider the shape-informed version of the metric, where we restrict the set of predicted parts to the parts that are present in the ground-truth shape category, and the shape-agnostic version, where all possible parts are considered.
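One reasonable reading of the shape-informed protocol is that the per-point argmax is restricted to the part labels allowed for the ground-truth shape category; the short sketch below (toy data, hypothetical names) contrasts it with the shape-agnostic variant.

def predict_parts(logits, allowed=None):
    """Point-wise argmax over part classes; `allowed` restricts the label set
    (shape-informed evaluation), None means all parts (shape-agnostic)."""
    out = []
    for scores in logits:
        classes = list(allowed) if allowed is not None else list(range(len(scores)))
        out.append(max(classes, key=lambda c: scores[c]))
    return out

def mean_iou(pred, gt, num_parts):
    """Mean IoU over the part classes that occur in the prediction or the ground truth."""
    ious = []
    for c in range(num_parts):
        p = {i for i, l in enumerate(pred) if l == c}
        g = {i for i, l in enumerate(gt) if l == c}
        if p or g:
            ious.append(len(p & g) / len(p | g))
    return sum(ious) / len(ious)

logits = [[0.1, 0.8, 0.3], [0.2, 0.6, 0.5], [0.9, 0.1, 0.7]]    # 3 points, 3 part classes
gt = [2, 2, 0]
category_parts = {0, 2}                        # parts present in the GT shape category
print(mean_iou(predict_parts(logits), gt, 3))                  # shape-agnostic mIoU
print(mean_iou(predict_parts(logits, category_parts), gt, 3))  # shape-informed mIoU

On this toy example the shape-informed score is higher, mirroring the gap between the two protocols discussed next.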
We also report results with and without using a shape prior during training and inference for PCT <cit.>, PointNet++ <cit.> and CurveNet <cit.>. We note that getting accurate part segmentations without RGB information is challenging but remains possible. Without using a shape prior, CurveNet <cit.> reaches a shape-agnostic mIOU of 53.09% on fine-grained part segmentation. In this setting, the model has to perform the challenging task of point-wise part classification from a set of possible parts. Overall, a large gap exists (around 30 accuracy points across models) between shape-informed and shape-agnostic mIOU, highlighting the difficulty of the task over the full space of possible parts. The task of coarse-grained part segmentation is easier, as the model only has to perform part classification from a set of possible parts. In this setting, CurveNet <cit.> reaches a shape-agnostic mIOU of 76.32%.For 2D fine and coarse part segmentation, we report results for SegFormer <cit.> in Table <ref>, alongside material segmentation results. We obtain a mIOU of 52.24% for fine-grained part segmentation, 73.35% for coarse-grained part segmentation, and 82.45% for material segmentation.§.§ Grounded Compositional Recognition (GCR) Task. One key property of our dataset is that it enables understanding of the complete part-material compositions of a given 3D shape. This involves predicting the category of the object, all part categories, and the associated materials for each of those parts within the 3D model. In Figure <ref>, we detail all information that has to be predicted for a given shape in this proposed GCR task. Grounded Compositional Recognition can be related to Zero-Shot Recognition, which aims at predicting the category of an object from a set of unseen categories, where the unseen categories are defined by unseen compositions of visual attributes <cit.>. The GCR task can also be related to situation recognition <cit.> which can be defined as the identification of role - entity pairs in a given scene <cit.>. Metrics. Drawing inspiration from the metrics introduced in <cit.> initially designed for the compositional recognition of activities in images, we define the GCR compositional metrics in 2D/3D as follows: * Shape accuracy. Proportion of correctly predicted shape categories.* Value. Proportion of correctly predicted part-material pairs.* Value-all. Accuracy of predicting all part-material pairs of a shape correctly.We extend these metrics to the segmentation of parts and materials in 2D/3D by defining grounded variants of value and value-all metrics: * Grounded-value. Proportion of correctly predicted part-material pairs, where the part is correctly segmented.* Grounded-value-all. Accuracy of predicting all part-material pairs of a shape correctly, where all parts are correctly segmented. We consider a part to be correctly segmented if the predicted part segmentation mask has an intersection over union (IoU) of at least 0.5 with the ground-truth part segmentation mask. In 2D, we use the pixel-wise definition of IoU. For the 3D modality, we use the point-wise definition.Note that Value and Grounded-value are both evaluated at the shape level: we divide the number of correctly identified (resp. grounded) part-material pairs by the total number of parts appearing in each shape, and then average across all samples. Value is thus upper bounded by Value-all, and Grounded-value by Grounded-value-all. Baselines. 
We experiment with two fusion-based baselines to assess the performance of the GCR task on .* “PointNet+SegFormer”.This baseline employs separate 2D/3D models and fuses predictions at evaluation time. We use PointNeXT <cit.> for 3D shape classification and SegFormer <cit.> for 2D material segmentation and 2D part segmentation. 2D dense predictions are projected to the 3D space using the depth maps and camera parameters. We use this baseline to assess the feasibility of the GCR task on when all part-pair predictions are performed on the 2D space.* BPNet. We adapt the BPNet 2D/3D multimodal method to the GCR task. BPNet leverages complementary information from 2D and 3D modalities by fusing features from both modalities using a bidirectional projection module for feature fusion. We detail the BPNet architecture we employ in Figure <ref> in the appendix. Challenge. We organized a compositional 3D visual understanding challenge on the GCR task of , with the goal of benchmarking the performance of various methods, in the context of the C3DV[C3DV@CVPR: <https://3dcompat-dataset.org/workshop/C3DV23>] workshop at CVPR 2023. The best-performing method (PointNet++RGB in Table <ref>) on the GCR task consisted of an unimodal 3D model based on a modified PointNet++ <cit.> trained on 6D inputs (XYZ coordinates and RGB color). One important design choice is the point grouping method employed which relies on spatial proximity only. The winning method achieved a Grounded-value-all accuracy of 72.14% in the coarse-grained setting and 17.55% in the fine-grained setting.Other solutions included a late fusion of 2D and 3D features by averaging logits of part and material segmentation and training a PointNet++ model with additional 2D segmentation features. More information about the challenge submissions can be found on the workshop website[C3DV Challenge: <https://3dcompat-dataset.org/workshop/C3DV23/#main-section>]. Results. Table <ref> summarizes GCR results of baseline methods and challenge winners. We report the GCR metrics under both fine-grained and coarse-grained settings, using 10 compositions for each shape. The PointNet+SegFormer 2D-based baseline is competitive with the BPNet multimodal baseline, even without explicit 3D-aware training for segmentation/modality fusion during training. The winning method PointNet++RGB, which takes only 3D point clouds as inputs and leverages a powerful point grouping module, beats both baselines by a large margin.More importantly, we notice the fine-grained GCR performance is still far from satisfying, for which we reach at most 17.55% on the Grounded-value-all metric. This suggests that creating a single model able to achieve strong performance across GCR metrics poses great challenges, especially in the fine-grained setting.In this sense, Grounded Compositional Recognition on is a challenging task that can be used to benchmark the compositional understanding of future multimodal 2D/3D models. Number of compositions. We conduct further ablation analysis to investigate the impact of varying the number of compositions during the training of the BPNet <cit.> model. We focus our analysis on the compositional metrics outlined in Figure <ref>, related to the GCR task (2D/3D material mIoU, 2D shape accuracy, and 3D part mIoU). 
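For reference, the grounded GCR metrics defined in the previous subsection can be scored per shape as in the sketch below (data layout and names are hypothetical): a part-material pair counts towards grounded-value only if the material is correct and the predicted part mask overlaps the ground-truth mask with IoU ≥ 0.5, and the per-shape scores are then averaged over the test set.

def iou(pred_mask, gt_mask):
    """IoU of two masks given as collections of pixel (2D) or point (3D) indices."""
    pred, gt = set(pred_mask), set(gt_mask)
    return len(pred & gt) / len(pred | gt) if (pred | gt) else 0.0

def gcr_scores(gt_pairs, pred_pairs, gt_masks, pred_masks, iou_thr=0.5):
    """Per-shape Value / Value-all / Grounded-value / Grounded-value-all.

    gt_pairs, pred_pairs: {part: material}; gt_masks, pred_masks: {part: mask}.
    """
    correct = [p for p, m in gt_pairs.items() if pred_pairs.get(p) == m]
    grounded = [p for p in correct
                if iou(pred_masks.get(p, []), gt_masks[p]) >= iou_thr]
    n = len(gt_pairs)
    return {"value": len(correct) / n,
            "value_all": float(len(correct) == n),
            "grounded_value": len(grounded) / n,
            "grounded_value_all": float(len(grounded) == n)}

gt = {"seat": "leather", "leg": "metal"}
pred = {"seat": "leather", "leg": "wood"}
masks_gt = {"seat": {0, 1, 2, 3}, "leg": {4, 5}}
masks_pred = {"seat": {1, 2, 3}, "leg": {4, 5}}
print(gcr_scores(gt, pred, masks_gt, masks_pred))
# {'value': 0.5, 'value_all': 0.0, 'grounded_value': 0.5, 'grounded_value_all': 0.0}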
We train the BPNet model for 30 epochs with 1/5/10/50 compositions from each shape and report the performance obtained for each epoch.Our findings reveal a clear and consistent improvement in the performance of all metrics as the number of compositions utilized in training is increased, specifically when going from N_c=1 to N_c=5 and N_c=10. However, the observed trend becomes less discernible when transitioning from N_c=10 to N_c=50 compositions. This highlights the need for further investigation into efficient ways of leveraging a large number of compositions during training.§ CONCLUSIONWe introduce , a large-scale dataset of Compositions of Materials on Parts of 3D Things, which contains styled models stemming from 3D shapes from object categories. contains 3D shapes, part segmentation information in fine-grained and coarse-grained semantic levels and material compatibility information, so that multiple high-quality PBR materials can be assigned to the same shape part. We also propose a new task, dubbed as Grounded CoMPaT Recognition (GCR), that our dataset enables and introduces baseline methods to solve them.§ ACKNOWLEDGEMENTSFor computing support, this research used the resources of the Supercomputing Laboratory at King Abdullah University of Science & Technology (KAUST). We extend our sincere gratitude to the KAUST HPC Team for their invaluable assistance and support during the course of this research project. We also thank the Amazon Open Data program for providing us with free storage of our large-scale data on their servers, and the Polynine team for their relentless effort in collecting and annotating the data.§ Example style variants for a few randomly sampled geometries in Figure <ref>.§ Additional examples for the multiple views rendered per stylized shape are provided in Figure <ref>.§ Additional examples for 2D/3D data provided with each stylized shape are provided in Figure <ref>. § Additional examples for fine-grained/coarse-grained part and material part categories groupings are provided in Figure <ref>.§ Examples for guidelines provided to annotators for the part segmentation task are provided in Figure <ref>.We provide all guidelines PDFs for reference in the following link: <https://3dcompat-dataset.org/v2/guidelines>.§ We conduct shape generation experiments on on theandcategories. We use LION <cit.> to generate shapes as pointclouds, with models trained separately for each category. LION is a latent diffusion denoising diffusion model (DDM <cit.>) that learns a hierarchical latent space of point clouds. In Figure <ref>, we show examples of generated shapes for both categories. Overall, generated shapes are diverse and realistic, highlighting the potential of for more general shape comprehension and generation tasks.§ Detailed architecture of the modified BPNet <cit.> model we employ for the GCR task is provided in Figure <ref>.ieeetr § BIOGRAPHY SECTION[< g r a p h i c s > ]Habib Slim is a Ph.D. student at KAUST, Saudi Arabia. He earned a M.Res. in Data Science from Université Grenoble Alpes (UGA), France, during which he worked on class-incremental learning for image classification at Université Paris-Saclay. He received a M.Eng. in Computer Science from École Nationale Supérieure d'Informatique et de Mathématiques Appliquées de Grenoble (ENSIMAG) in 2020. He is interested in continual/compositional 2D/3D vision.[< g r a p h i c s > ]Xiang Li is a postdoctoral researcher in computer vision at KAUST, Saudi Arabia. He received a B.S. 
degree in Remote Sensing Science and Technology from Wuhan University, Wuhan, China, in 2014. He received a Ph.D. in Cartography and GIS from the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, China, in 2019. His research interests include computer vision, deep learning, and remote sensing.[< g r a p h i c s > ]Yuchen Li is a PhD Student at KAUST, Saudi Arabia.Before joining KAUST, Yuchen was a research intern at iFLYTEK, an intelligent speech and artificial intelligence company in Hefei, China, for three months. He was a Rocket MQ open source contributor and Alibaba summer of code student developer with rich research and engineering experience.He is interested in meta-learning, few-shot learning and 3D object recognition.[< g r a p h i c s > ]Mahmoud Ahmed is an M.S student at KAUST, Saudi Arabia, and received his B.S degree from the American University in Cairo (AUC), Egypt, in 2022. Prior to that, he worked as a Data Science intern at Dell Technologies, then as a 5G Software Engineer. His research interests include computer vision, graphics, and deep learning.[< g r a p h i c s > ]Mohamed Ayman is a M.S. student at University of Alberta and received his B.S. degree from the American University in Cairo(AUC), Egypt, in 2023. He worked as an Applied Science intern at Microsoft. Currently, he is a research intern at KAUST, Saudi Arabia. His research interests are focused on computer vision, NLP, and software optimization.[< g r a p h i c s > ]Ujjwal Upadhyay is an AI Scientist at qure.ai where he works on applying novel deep learning methods to medical data. His research interests include computer vision, adversarial machine learning, and representation learning. He has been involved in cutting-edge research in 3D vision, scene understanding, and neuroscience. -[< g r a p h i c s > ]Ahmed Abdelreheem is a Ph.D. student at KAUST, Saudi Arabia. He received a BSc in Computer Engineering from Cairo University, Egypt, in 2019. He attained his MSc degree in Computer Science from KAUST, Saudi Arabia, in 2022. His research interests lie in the intersection of 3D vision, computer graphics, and natural language. More specifically, he is interested in linking 3D object-centric representations to natural language.[< g r a p h i c s > ]Arpit Prajapati is the Director of Technology at Poly9, where his responsibilities encompass developing solutions for product sampling in the Home and Lifestyle industry. Prior to this role, he was the owner of Lanover Solutions for nearly 8 years, managing company operations. He holds a Bachelor of Engineering (BE) degree in Computer from Gujarat University, completed between 2005 and 2009.[< g r a p h i c s > ]Suhail Pothigara Suhail is the CEO of Poly9, a recognized Product Management leader in ecommerce and digital transformation. With 15 years of experience at luxury brands and retailers in fashion, home, and consumer electronics, his accomplishments include driving over $3 billion in revenue cumulatively through his roles at e-commerce and cloud businesses at Macy's, LVMH, and HP.[ < g r a p h i c s >]Peter Wonkais Full Professor in Computer Science at KAUST, Saudi Arabia, and Interim Director of the Visual Computing Center (VCC). Peter Wonka received his Ph.D. from the Technical University of Vienna in computer science. Additionally, he received a M.Sc. in Urban Planning from the same institution. After his Ph.D., he worked as a postdoctoral researcher at the Georgia Institute of Technology and as faculty at Arizona State University. 
His research publications tackle various topics in computer vision, computer graphics, remote sensing, image processing, visualization, and machine learning. The current research focus is on deep learning, generative models, and 3D shape analysis and reconstruction.[ < g r a p h i c s > ]Mohamed Elhoseiny is an Assistant Professor of Computer Science at KAUST, Saudi Arabia, and a Senior Member of IEEE and AAAI. He was a Visiting Faculty at Stanford Computer Science Department (2019-2020), a Visiting Faculty at Baidu Research Silicon Valley Lab (2019), and a Postdoc Researcher at Facebook AI Research (2016-2019). Dr. Elhoseiny completed his Ph.D. in 2016 at Rutgers University, during which he spent time at Adobe Research (2015-2016) for more than a year and at SRI International in 2014. He received an NSF Fellowship in 2014 and the Doctoral Consortium Award at CVPR 2016. His primary research interest is in computer vision, especially in efficient multimodal learning with limited data in areas like zero/few-shot learning, vision and language, and language-guided visual perception. He is also interested in affective AI and particularly in producing novel art and fashion with AI. His creative AI work was featured in MIT Tech Review, New Scientist Magazine, and the HBO show Silicon Valley. | http://arxiv.org/abs/2310.18511v1 | {
"authors": [
"Habib Slim",
"Xiang Li",
"Yuchen Li",
"Mahmoud Ahmed",
"Mohamed Ayman",
"Ujjwal Upadhyay",
"Ahmed Abdelreheem",
"Arpit Prajapati",
"Suhail Pothigara",
"Peter Wonka",
"Mohamed Elhoseiny"
],
"categories": [
"cs.CV",
"cs.AI"
],
"primary_category": "cs.CV",
"published": "20231027220143",
"title": "3DCoMPaT$^{++}$: An improved Large-scale 3D Vision Dataset for Compositional Recognition"
} |
In a recent work, Maciej Dołega and the author have given a formula of the expansion of the Jack polynomial J^(α)_λ in the power-sum basis as a non-orientability generating series of bipartite maps whose edges are decorated with the boxes of the partition λ.We conjecture here a variant of this expansion in which we restrict the sum on maps whose edges are injectively decorated by the boxes of λ. We prove this conjecture for Jack polynomials indexed by 2-column partitions. The proof uses a mix of combinatorial methods and differential operator computations.On the difference of mean subtree orders under edge contraction Ruoyu Wang January 14, 2024 ===============================================================§ INTRODUCTION§.§ Jack polynomials and maps Jack polynomials J_λ^(α) are symmetric functions indexed by an integer partition λ anda deformation parameter α, and which have been introduced by Jack in <cit.>. Jack polynomials interpolate, up to scaling factors, between Schur functions for α=1 and zonal polynomials for α=2.In his work <cit.>, Stanley initiated the combinatorial analysis of these symmetric functions. Later, they have been connected to various objects of algebraic combinatorics, such as partitions, tableaux, paths and maps <cit.>. In particular, Haglund and Wilson have given a combinatorial interpretation of the expansion of Jack polynomials in the power-sum basis in terms of weighted tableaux <cit.>.A different formula of this expansion has been obtained recently in <cit.> as a non-orientability generating series of maps whose edges are decorated with the boxes of a Young diagram.The formula of <cit.> answers the long-standing question, going back to to <cit.>, of giving an expression of Jack polynomials in terms of maps, and it was used in <cit.> to prove positivity conjectures on Jack characters. The purpose of this note is to study variants of these formula, in which an additional injectivity property is required. We make a conjecture in this direction, see <ref>.This injective conjecture is an α-deformation of two known formula for Schur and Zonal functions (which correspond to Jack polynomials for α=1 and α=2 respectively). We prove <ref> for Jack polynomials indexed by two-column partitions. Roughly, a map is a graph drawn on a locally orientable surface. The study of maps is a well developed area with strong connections with analytic combinatorics, mathematical physics and probability <cit.>. The relationship between generating series of maps and the theory of symmetric functions was first noticed via a character theoretic approach<cit.> and has then been developed to include other techniques such as matrix integrals and differential equations <cit.>.The introduction is organized as follows. In Sections <ref> and <ref> we introduce some definitions related to maps. In Section <ref>, we recall the maps expansion of Jack polynomials obtained in <cit.>. We conjecture an injective version of this expansion in <ref>. We formulate the main results of the paper in Sections <ref> and <ref>. In <ref>, we explain how our conjecture is connected to other conjectures relating Jack polynomials to maps. Finally, the outline of the paper is detailed in <Ref>. §.§ MapsA connected map is a connected graph embedded into a surfacesuch that all the connected components of the complement of the graph are simply connected (see <cit.>). These connected components are called the faces of the map. We consider maps up to homeomorphisms of the surface. 
A connected map is orientable if the underlying surface is orientable. In this paper[This is not the standard definition of a map; usually a map is connected. ], a map is an unordered collection of connected maps. A map is orientable if each one of its connected components is orientable. Finally, the size of a map is its number of edges.All maps considered here are bipartite; i.etheir vertices are colored in two colors, white and black, such that each edge connects two vertices of different colors.Note that in a bipartite map, all faces have even degree.Hence, we define the face-type of a bipartite map of size n, as the partition of n obtained by reordering the half degrees of the faces. For a given map M, we denote by |(M)| its number of vertices, and by (M) its number of connected components. We also denote its number of white and black vertices by |(M)| and |(M)| respectively. In order to enumerate maps with trivial automorphism group, we consider rooted maps; we say that a connected map is rooted if it hasa distinguished white oriented corner c. The corner c will be called the root corner andthe edge following the root corner is called the root edge. More generally, a map is rooted if each one of its connected components is rooted. We say that a rooted map of size n is labelled, if its edges are labelled with the integers 1,2,...n, such that the root edge of each connected component has the smallest label in its connected component. §.§ -decorated maps We call a mapping f ofM in a Young diagram λ, a function which associates to each edge e of M a box of λ, such that * if e_1 and e_2 are two edges incident to the same black vertex, then f(e_1) and f(e_2) are in the same row of λ.* if e_1 and e_2 are two edges incident to the same white vertex, then f(e_1) and f(e_2) are in the same column of λ.Such pair (M,f) is called a λ-decorated map. Moreover, we say that (M,f) is a λ-injectively decorated maps, if f satisfies the additional injectivity property: * two different edges are decorated by two different boxes; f(e_1)≠ f(e_2) if e_1≠ e_2. We consider a total order on the boxes of λ using the lexicographic order, see <ref> for a precise definition. A rooted λ-decorated map is a rooted map M equipped with a function f from M to λ with the condition that the root edge of each connected component is decorated with the smallest decorating box in its connected component.We denote by _n(λ) the set of λ-decorated maps of size n, orientable or not. Similarly, we denote _n(λ) (resp. _n(λ))the set of rooted λ-injectively decorated maps of size n, orientable or not (resp. orientable). These definitions were introduced in <cit.> as injective mappings of permutations and matchings respectively into partition diagrams.Later a reformulation in terms of maps has been considered in <cit.>. This reformulation is based on the fact that maps can be encoded using permutations or matchings (see e.g. <cit.>).We consider the 2-column partition λ=2^61^3 of size 15. In <ref> we give an example of a rooted λ-injectively decorated map of size 12. In <ref>, the left-hand side of the square should be glued to the right-hand one (with a twist)as indicated by the white arrows, and the top side should be glued to the bottom one (without a twist) as indicated by the black arrows. 
The root corner is indicated by the green arrow.§.§ The non-injective expansion ofWe start by the following definition due to Goulden and Jackson.[<cit.>] A statistic of non-orientability on bipartite maps is a statistic ϑ with non-negative integer values, such that ϑ(M)=0 if and only if M is orientable. In practice, a statistic of non-orientability is supposed to "measure" the non-orientability of a map. Generally, such a measure is obtained by counting the number of "twisted" edges, following a given algorithm of decomposition of the map. Several examples of such statistics have been introduced in previous works <cit.>. In the following, maps are counted with a weight of non-orientability of the form b^ϑ(M), where b:=α-1 is the shifted Jack parameter. We denote byJ^(α)_λ(𝐩) Jack polynomial associated to the partition λ expressed in the power-sum basis 𝐩:=(p_1,p_2,...), see <ref>.<cit.> Let n be a positive integer and let λ be a partition of n. There exists a statistic of non-orientability ϑ such that J_λ^(α)(𝐩)=∑_M∈_n(M)(-1)^n-|(M)|α^|(M)|-(M)/C(M)b^ϑ(M)p_(M) ,where the sum is taken over maps in _n(λ) considered with some labelling[The labelling used in <cit.> is different from the one considered here, and is related to the decoration of the map.], and C(M) is a normalization factor related to this labelling. The first special cases of <ref> which have been proved correspond to the cases α=1 (Schur functions) and α=2 (Zonal functions) see <cit.>. In both these papers, the authors start by proving the expansion in terms of maps injectively decorated. For α=1 this corresponds to a well known formula for Schur functions <cit.>, (see also <cit.> for a full proof of this result). We reformulate it here in the language of maps. <cit.> Let λ be a partition of size n. Then J^(α=1)_λ= ∑_M∈_n(λ)(-1)^n-|(M)|p_(M).For α=2, a similarinjective formula has been proved in <cit.>. <cit.> Let λ be a partition of size n. Then J^(α=2)_λ= ∑_M∈_n(λ)(-1)^n-|(M)|2^|(M)|-(M)p_(M).Féray and Śniady have proved in <cit.> and <cit.>that in the cases α=1 and α=2,the injective and the non injective variants of these formulas are equivalent. To do so, they constructed a fixed points free involution on the set of maps equipped with a non injective mapping in the Young diagram of λ.However, it seems hard to construct such an involution for general α because of the non-orientability weight. In <cit.>, the authors use an independent approach based on differential operators which allow to construct the generating series of decorated maps. They then prove that this series satisfies some characterization properties of the power-sum expansion of Jack polynomials, which allows to obtain directly the non-injective formula of <ref>. Nevertheless, it seems natural to try to obtain an α deformation of the injective formulas of Equations (<ref>) and (<ref>).§.§ An injective conjecture We consider the following injective variant of <ref>. Let λ be a partition of size n. Then there exists a statistic of non-orientability ϑ on _n(λ) such that, J_λ^(α)(𝐩)=∑_M∈_n(λ)(-1)^n-|(M)|α^|(M)|-(M)b^ϑ(M)p_(M) , In the following, we will refer to the quantity(-1)^|M|-|(M)|α^|(M)|-(M) as the α-weight of the map M and to b^ϑ(M) as its b-weight.While the non injective formula of <ref> allows to obtain polynomiality and positivity properties about Jack polynomials (see <cit.>), the injective conjecture we are considering here has the advantage of having less cancellation. 
Indeed, the fact that the considered maps are injectively decorated puts some restrictions on the underlying graph. For example, maps with multiple edges do not appear in <ref>. This also implies that there exist no λ-injectively decorated maps of size larger than |λ|, which makes the vanishing property of Jack characters obvious (see e.g. <cit.>). Both the particular cases of (<ref>) and (<ref>) are obtained using representation theory tools which seem to be specific to these specialisations and cannot be used for general α. New tools are therefore required to understand <ref>. §.§ Main theorem The main contribution of this paper is to establish <ref> for Jack polynomials indexed by 2-column partitions. Let λ be a 2-column partition of size n. There exists an explicit statistic of non-orientability ϑ on _n(λ) such that J_λ^(α)(𝐩)=∑_M∈_n(λ)(-1)^n-|(M)|α^|(M)|-(M)b^ϑ(M)p_(M). Unfortunately, the statistic ϑ that we use here for 2-column partitions does not work for all partitions λ. We now briefly describe our proof strategy. The starting point is an expression of the Jack polynomial J_λ^(α), indexed by a 2-column partition λ=[2^r,1^s], in the monomial basis, which is due to Stanley <cit.>: J_λ^(α)(𝐩)=∑_i=0^r (r)_i (α+r+s)_i (2(r-i)+s)! m_2^i1^2(r-i)+s(), where we denote the falling factorial by (a)_k:=a(a-1)...(a-k+1), for any real a and any non-negative integer k. On the other hand, using a variant of the result of Chapuy and Dołega <cit.>, we give a particularly simple expression in the monomial basis for a generating series of labelled bipartite maps with non-orientability weights, which are not yet decorated, see <ref>. We then prove that the generating series of λ-injectively decorated maps can be obtained from the series of labelled maps using a "leaf addition" procedure, see <ref> and <ref>. After some computations, we show that the right-hand side of <ref> is equal to that of <ref>, concluding the proof. §.§ Low degree terms in the parameter It is known from the work of Lapointe and Vinet <cit.> that the coefficients of the Jack polynomials in the power-sum basis are polynomial in the parameter α. As a consequence of the main result, we prove in <ref> that <ref> holds when we extract the coefficients of α^0 and α^1 in <ref> for any partition λ.
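To make Stanley's expansion quoted in the Main theorem subsection above concrete, here is a small sympy sketch (our own function names) that tabulates the monomial coefficients of J_{[2^r 1^s]}; the i = 0 term gives the n!·m_{1^n} normalization term and the i = r term the coefficient of m_λ itself.

from math import factorial
from sympy import Symbol, expand

alpha = Symbol("alpha")

def falling(a, k):
    """Falling factorial (a)_k = a (a-1) ... (a-k+1)."""
    out = 1
    for j in range(k):
        out = out * (a - j)
    return out

def two_column_jack_in_monomials(r, s):
    """Coefficient of m_{2^i 1^{2(r-i)+s}} in J_{[2^r 1^s]}^{(alpha)}, for i = 0, ..., r."""
    return {i: expand(falling(r, i) * falling(alpha + r + s, i) * factorial(2 * (r - i) + s))
            for i in range(r + 1)}

print(two_column_jack_in_monomials(1, 1))  # {0: 6, 1: alpha + 2}: J_{[2,1]} = 6 m_{111} + (alpha+2) m_{21}
print(two_column_jack_in_monomials(2, 0))  # {0: 24, 1: 4*alpha + 8, 2: 2*alpha**2 + 6*alpha + 4}

For λ=[2,1] this agrees with the familiar expansion J_{[2,1]}^{(α)} = p_1^3 + (α-1)p_1p_2 - α p_3 once the monomials are rewritten in power sums.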
Moreover, both the result of <cit.> and <ref> can be seen as dual questions for the celebrated Matchings-Jack and b-conjectures of Goulden and Jackson, which suggest that the multivariate non-orientability generating series of bipartite maps can be expressed through Jack polynomials. These conjectures are still open despite many partial results <cit.>. §.§ Outline of the paperIn <ref>, we introduce some useful notation and preliminary results. Based on the work of Chapuy and Dołega, we give in <ref> an expression of a particular specialization of the generating series of weighted labelled maps in terms of monomial functions. <ref> is dedicated to the proof of <ref>. As a consequence of the main result, we establish in <ref> the <ref> for terms of degree 0 and 1 in α. § NOTATION AND PRELIMINARIESFor the definitions and notation introduced in <ref>we refer to <cit.>.§.§ Partitions A partition λ=[λ_1,...,λ_ℓ] is a weakly decreasing sequence of positive integers λ_1≥...≥λ_ℓ>0. The integer ℓ is called the length of λ and is denoted ℓ(λ). The size of λ is the integer |λ|:=λ_1+λ_2+...+λ_ℓ. If n is the size of λ, we say that λ is a partition of n and we write λ⊢ n. The integers λ_1,...,λ_ℓ are called the parts of λ. For every i≥1, we denote by m_i(λ) the number of parts equal to i in λ, and we introduce the following notationz_λ:=∏_i≥1m_i(λ)!i^m_i(λ). We recall the dominance partial ordering on partitions ≤ defined by μ≤λ |μ|=|λ|and μ_1+...+μ_i≤λ_1+...+λ_ifori≥1. We identify a partitionλ with its Young diagram defined by λ:={(i,j),1≤ i≤ℓ(λ),1≤ j≤λ_i}. For a given partition λ, we define the total order on the boxes of λ given by the lexicographic order on their coordinates: _1=(i_1,j_1)< _2=(i_2,j_2) ⟺ i_1<i_2or(i_1=i_2and j_1<j_2).The conjugate partition of λ, denoted λ^t, is the partition associated to the Young diagram obtained by reflecting the diagram of λ with respect to the line j=i:λ^t:={(i,j),1≤ j≤ℓ(λ),1≤ i≤λ_i}. Fix a box :=(i,j)∈λ. Its arm-length is given by a_λ():=|{(i,r)∈λ,r>j}|=λ_i-j, and its leg-length is given by ℓ_λ():=|{(r,j)∈λ,r>i}|=(λ^t)_j-i.Two α-deformations of the hook-length product were introduced in <cit.>;_λ^(α):=∏_∈λ(α a_λ()+ℓ_λ()+1), _λ'^(α):=∏_∈λ(α(a_λ()+1)+ℓ_λ()).With these notation, the classical hook-length product is given by H_λ:=_λ^(1)=_λ'^(1).Finally, we define the α-content of a box :=(i,j) by c_α():=α(j-1)-(i-1). §.§ Symmetric functions and Jack polynomialsWe fix an alphabet 𝐱:=(x_1,x_2,..). We denote by 𝒮 the algebra of symmetric functions in 𝐱 with coefficients in ℚ. For every partition λ, we denote m_λ the monomial function and p_λ the power-sum function associated to the partition λ. Weconsider the associated alphabet of power-sum functions 𝐩:=(p_1,p_2,..).Let 𝒮_α:=ℚ[α]⊗𝒮 the algebra of symmetric functions with rational coefficients in α.We denote by ⟨.,.⟩_α the α-deformation of the Hall scalar product defined by ⟨ p_λ,p_μ⟩_α=z_λα^ℓ(λ)δ_λ,μ, for any partitions λ,μ. Macdonald <cit.> has proved that there exists a unique family of symmetric functions (J_λ^(α)) in 𝒮_α indexed by partitions, satisfying the following properties:{[Orthogonality: ⟨ J_λ,J_μ⟩_α=0,for λ≠μ,;Triangularity:[m_μ]J_λ=0,unless μ≤λ,;Normalization: [m_1^n]J_λ=n!,for λ⊢ n, ].where [m_μ]J_λ denotes the coefficient of m_μ in J_λ, and 1^n is the partition with n parts equal to 1. These functions are known as the Jack polynomials. In particular, Jack polynomials indexed by 1-column partitions are given byJ^(α)_1^n=n!m_1^n=∑_μ⊢ n(-1)^n-ℓ(μ)n!/z_λp_μ,and are independent of the parameter α. 
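The expansion of J_{1^n} just stated can be checked numerically on a finite alphabet. The following self-contained Python sketch (our own helper names) verifies n!·m_{1^n} = n!·e_n = Σ_{μ⊢n} (-1)^{n-ℓ(μ)} (n!/z_μ) p_μ for n = 3, and in passing exercises the statistic z_μ defined in the previous subsection.

from math import factorial, prod
from itertools import combinations
import random

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing lists."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

def z(mu):
    """z_mu = prod_i m_i(mu)! * i^{m_i(mu)}."""
    return prod(factorial(mu.count(i)) * i ** mu.count(i) for i in set(mu))

def p(mu, xs):
    """Power-sum p_mu evaluated on the finite alphabet xs."""
    return prod(sum(x ** k for x in xs) for k in mu)

def e(n, xs):
    """Elementary symmetric polynomial e_n(xs) = m_{1^n}(xs)."""
    return sum(prod(c) for c in combinations(xs, n))

n = 3
xs = [random.random() for _ in range(6)]
lhs = factorial(n) * e(n, xs)
rhs = sum((-1) ** (n - len(mu)) * factorial(n) / z(mu) * p(mu, xs) for mu in partitions(n))
print(abs(lhs - rhs) < 1e-9)   # True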
The squared norm of Jack polynomials can be expressed in terms of the deformed hook-length products, see <cit.>:j_λ^(α):=⟨ J_λ,J_λ⟩_α=^(α)_λ'^(α)_λ. In particular, we havej^(α)_1^n=n!(α+n-1)_n.In this paper, Jack polynomials will always be expressed in the power-sum variables 𝐩 rather than the alphabet(this is possible since the power-sum functions form a basis of the symmetric functions algebra).We have the following theorem due to Macdonald <cit.>, which gives an expression of Jack polynomials when all power-sum variables p_i are equal to a variable u.For every λ∈𝒫, we haveJ_λ^(α)(u)=∏_∈λ(u+c_α()),where u:=(u,u,...).We conclude this subsection with the following corollary. Let λ⊢ n≥1. We have the following expression for Jack polynomials specialized at u=-α:J_λ^(α)(-α)={[ (-1)^n (α+n-1)_nif λ=1^n,;0 otherwise. ]. §.§ Side-marked mapsRecall that a connected map is rooted if it has a marked oriented white corner. This is equivalent to saying that it has a marked edge-side (by convention this will be the side following the root corner). We say that a map is side-marked if each one of its edges has a distinguished side. Rather than using rooted maps, in which only one edge in each connected component has a distinguished side, it will be more practical in the proof of the main result to consider side-marked maps. Note that to a rooted map of size n, we can associate 2^n-(M) different side-marked maps by choosing the distinguished sides of non-root edges. In this paper, we consider statistics of non-orientability on labelled and injectively decorated maps. Such statistics only depend on the labelling or the decoration of the map, but not on itsrooting or side-marking.§ GENERATING SERIES OF -WEIGHTED BIPARTITE MAPS In this section, we introduce a family of non-orientability statistics on bipartite maps. We consider then the generating series of labelled bipartite maps, where each map M is counted with a b-weight b^(M), and we use a result of Chapuy and Dołega to give an expression of this function in terms of monomial functions.§.§ A statistic of non-orientability on labelled mapLet M be a bipartite map and let c_1 and c_2 be two corners of M of different colors. Then we have two ways to add an edge to M between these two corners. We denote by e_1 and e_2 these edges. We say that the pair (e_1,e_2) is a pair of twisted edges on the map M and we say that e_2 is obtained by twisting e_1 (see <ref>). Note that if M is connected and orientable, then exactly one of the maps M∪{e_1} and M∪{e_2} is orientable.For a given map with a distinguished edge (M,e), we denote (M,ẽ) the map obtained by twisting the edge e.We recall that b is the parameter related to the Jack parameter α by b=α-1. We now give the definition of a measure of non-orientability due to La Croix.<cit.> We call a measure of non-orientabilitya function ρ defined on the set of maps (M,e) with a distinguished edge, with values in{1,b}, satisfying the following conditions: * if e connects two corners of the same face of M\{e}, and the number of the faces increases by 1 by adding the edge e on the map M\{e}, then ρ(M,e)=1. In this case we say that e is a diagonal.* if e connects two corners in the same face M\{e}, and the number of the faces of M\{e} is equal to the number of faces of M, then ρ(M,e)=b. In this case we say that e is a twist.* if e connects two corners of two different faces lying in the same connected component of M\{e}, then ρsatisfies ρ(M,e)+ρ(M,ẽ)=1+b. Moreover, if M is orientable then ρ(M,e)=1. 
In this case we say that e is a handle.* if econnects two faces lying in two different connected components, then ρ(M,e)=1. In this case, we say that e is a bridge.We now introduce the following statistic of non-orientability on labelled maps which will have a key role in this paper.Let ρ be aand let M be a connected labelled map of size n. We define the b-weight ρ(M) of M, as the weight obtained by decomposing M by deleting the edges in a decreasing order of their labels;ρ(M):=∏_1≤ j≤ nρ(M_j,e_j),where e_j is the edge of M labelled by j, and M_j is the map obtained by deleting the edges e_n, e_n-1,..., e_j+1 from M.We extendthis definition by multiplicativity for disconnected maps;we define the b-weight of a labelled disconnected map M by ρ(M):=∏_iρ(M_i),where the product is taken over the connected components of M. We define the statistic ϑ_∘^ρ on labelled bipartite maps by ρ(M)=b^(M). It follows from the definitions thatis a statistic of non-orientability over labelled bipartite maps.The statisticis a variant of the statistics introduced in <cit.>; in these papers an "edge-deletion" procedure is used to decompose the maps while we use here the order given by the labels of the edges.§.§ Generating series of labelled bipartite mapsLet u_1 and u_2 be two variables.We introduce the generating series of labelled rooted bipartite maps of size n:(,u_1,u_2)= ∑_M1/n!u_1^|(M)|u_2^|(M)|α^-(M)b^(M)p_(M)(),where the sum is taken over labelled rooted bipartite maps of size n, and p_μ is the power-sum function associated to the partition μ.The following theorem is a variant of a special case in <cit.>, where a different statistic of non-orientability is used (see <cit.> and <ref>). For completeness, we give more details about the proof of this theorem in <ref>.For every n≥ 1, we have (,u_1,u_2)=∑_ξ⊢ nJ^(α)_ξ(𝐩)J^(α)_ξ(u_1)J^(α)_ξ(u_2)/j^(α)_ξ,where 𝐩 denotes the power-sum alphabet related to 𝐱 see <ref>. We deduce the following corollary which will be useful in the proof of <ref>.<cit.> For every n≥0 and for everyρ, we haveB^(α)_n,ρ(,-α,-α)=(α+n-1)_n/n!J^(α)_1^n(𝐩)=(α+n-1)_nm_1^n().Specializing u_1=u_2=-α in <ref> and using <ref> we get that B^(α)_n,ρ(,-α,-α)=((α+n-1)_n)^2/j^(α)_ξJ^(α)_1^n(𝐩).We conclude using <ref>. § PROOF OF§.§ PreliminariesFor every statistic of non-orientability ϑ, we introducethe generating series of λ-injectively decorated bipartite maps: (𝐩): =∑_rooted maps M(-1)^n-|(M)|α^|(M)|-(M)b^ϑ(M)p_(M)=∑_side-marked maps M(-1)^n-|(M)|/2^|λ|-(M)α^|(M)|-(M)b^ϑ(M)p_(M),where the two sums run over λ-injectively decoratedbipartite maps M of size |λ|, which are rooted in the first line and side-marked in the second one, see <ref>. Hence <ref> can be reformulated as follows: for some statistic of non-orientability ϑ we have J_λ^(α)=.The idea of the proof is to expandin the monomial basis and compare the expression obtained to <ref>. To this purpose, we rewriteas a sum on colored maps; we call a coloring of bipartite map a function 𝒞 on the faces of M with positive integer values. We say then that (M,𝒞) is a colored map. For any colored map (M,𝒞), we define x^(M,𝒞):=∏_fx_𝒞(f)^(f),where the product is taken over the faces f of M. Hence,p_(M)(𝐱)=∑_𝒞x^(M,𝒞),where the sum is taken over all the colorings of M. 
With this notation, the generating serieshas the following expression in the alphabet :():=∑_(M,𝒞)(-1)^n-|(M)|/2^|λ|-(M)α^|(M)|-(M)b^ϑ(M)x^(M,𝒞),where the sum runs over colored λ-injectively decorated side-marked maps of size |λ|.§.§ From -decorated maps to labelled -bipartite maps Fix a2-columnpartition λ and let M be a λ-injectively decorated map.We say that a white vertex v of M has color ∘^1 resp. ∘^2 if the edges incident to v have labels in the first resp. the second column of λ. For a face f of M, we denote by _∘^1(f) respectively _∘^2(f) the number of white corners of color ∘^1 respectively ∘^2 incident to f. Let λ be a 2-column partition, and let M be a λ-injectively decorated map. Note that this implies that all black vertices of M have degree 1 or 2.We call a black-leaf edge of M an edge incident to a black vertex of degree 1.We define the labelled bipartite map M_∘ obtained by forgetting the black vertices v of M as follows: * If v has degree 1, then we delete v and the edge incident to it. Hence, we delete all the black-leaf edges of M. * If v has degree 2, then we forget the vertex v and consider the two edges incident to it as one edge (hence this edge separates two vertices of color ∘^1 and ∘^2). Note that the map M_∘ hence obtained has only white vertices, and is bipartite with respect to the colors ∘^1 and ∘^2. Moreover, we have an injection from the edges ofM_∘ to the rows of size 2 of λ. Hence M_∘ comes with a natural labelling inherited from the row indices associated to the edges via this injection. We now define a statistic of non-orientability on λ-injectively decorated maps. [A statistic on λ-injectively decorated maps] Fix a 2-column partition λ. For every MON ρ, we associate to each λ-injectively decorated map M the statistic ϑ^ρ(M):=(M_∘), whereis the statistic on labelled bipartite maps defined in <ref>, and M_∘ is the labelled bipartite map obtained from M by forgetting the black vertices as explained above.Conversely, if M_∘ is a labelled side-marked map of size j≤ r which is bipartite in the colors ∘^1 and ∘^2, we obtain a λ-injectively decorated side-marked map of size |λ| by realizing the following steps: * We choose a set I of j rows of λ of size 2 (we have rj ways to choose such a set), and we associate to each edge of M a row in I, using the labelling of M. * We add a black vertex in the middle of each edge.Hence, we transform an edge e to two edges e_1 and e_2, that connects respectively a black vertex to white vertices of color ∘^1 and ∘^2. Notice that we multiply then the size of the map by 2.* If e is associated to a row R_e of size 2 in λ then we decorate the edge e_1 (resp. e_2) by the box of R_e which is in the first column (resp. the second column). * For each boxof λ which is not used in the decoration of the map, we add a black-leaf edge connected to a white vertex of color i (possibly a new white vertex), where i∈{1,2} is the column containing . Moreover, we add these edges successively in a increasing order of the decorating boxes. Note that each time we add a black-leaf edge connected to an existing white vertex then there are two different ways to mark one of its sides. Note that each connected component of M inherits a root from M_∘ which is given by a an oriented corner of color ∘^1.§.§ Adding black leavesWe consider a second alphabet :=(y_1,y_2,..) 
and we define the product alphabet :=(x_1y_1,x_2y_2,...).We introduce the two following operators X_+:=∑_i≥1(x_i-x_i^2∂/∂ x_i), Y_+:=∑_i≥1(y_i-y_i^2∂/∂ y_i).Let (M,𝒞) be a colored λ-injectively decorated side-marked bipartite map. We define the marking of(M,𝒞) by:κ(M,𝒞):=(-1)^n-|(M)|/2^|λ|-(M)α^|(M)|-(M)b^ϑ^ρ(M)∏_f[x_𝒞(f)^_∘^1(f)y_𝒞(f)^_∘^2(f)],where the product is taken over all the faces of M.Hence the alphabet(resp. ) is a marking for ∘^1 (resp. ∘^2) colored corners. Fix two λ-injectively decorated maps M and N, such that M is obtained from N by deleting some black-leaf edges.If the map N is equipped with a coloring 𝒟, then M is naturally equipped with a coloring 𝒞, where we use the convention that deleting a black vertex of degree 1 from a face does not change its color. We say that the coloring 𝒞 is inherited from the coloring 𝒟. The operator X_+ and Y_+ allow to addblack-leaf edges incident respectively to a white vertex of color ∘^1 and ∘^2. More precisely, we have the following lemma.Fix a 2-column partitionλ=2^r1^s andlet (M,𝒞) be a colored λ-injectively decorated side-marked map. Then,X_+κ(M,𝒞)=∑_(M∪e,𝒞)κ(M∪ e,𝒞). Y_+κ(M,𝒞)=∑_(M∪e,𝒞)κ(M∪ e,𝒞).where the sum in <ref> resp. <ref> is taken over all λ-injectively decorated side-marked maps obtained by adding ablack-leaf edge e to M such that * thecolored map (M∪e,𝒞) obtained is such that 𝒞 is inherited from 𝒞.* the edge e is decorated by the smallest box in the first column resp. second column, which is not yet used in the decoration of M. Let us prove <ref>. We start by noticing that adding a black-leaf edge to a map does not change its b-weight, this is straightforward from <ref>.We have two ways to add a black leaf-edge decorated by a box in the first column of λ: * by adding an isolated edge.* by adding a black leaf on a ∘^1-corner. In the first case the α-weight of the map does not change, since the size of the map, the number of white vertices and the number of connected components all increase by 1. Finally, we choose a color i≥1 for the new face, this is guaranteed by the operator ∑_i≥1 x_i. In the second case, the α-weight of the map is multiplied by -1 since we increase the number of edges without changing the number of white vertices. For i≥1, the operator x_i^2∂/∂ x_i allows to choose a ∘^1-corner in a face of color i and to increase the degree _∘^1 of this face by 1. Finally, we have two ways to distinguish a side of the added edge and this compensated by the factor 1/2^|λ|-(M).<ref> can be obtained in a similar way.We deduce the following proposition. Let λ=2^r1^s be a2-column partition. One has()=∑_0≤ j≤ r(r)_jX_+^r+s-jY_+^r-j(,-α,-α)_|=.We start by noticing that substitutingbyin <ref>, specializing u_1=u_2=-α and developing the power-sum functions as in <ref>, we get(,-α,-α)=1/j!∑_(M_∘,𝒞)(-1)^|(M_∘)|α^|(M_∘)|-(M_∘)b^(M_∘)∏_f (x_𝒞(f)y_𝒞(f))^(f),where the sum runs over colored labelled rooted bipartite maps (M_∘,𝒞) of size j which are bipartite in the colors ∘^1/∘^2. Realizing the three first steps detailed in <ref> on each map M_∘, the previous equation can be rewritten as followsrj(,-α,-α)=1/j!∑_colored rootedmaps (M,𝒞)(-1)^|(M)|α^|(M)|-(M)b^(M)κ(M,𝒞)=1/j!∑_colored side-marked maps (M,𝒞)(-1)^|(M)|/2^2j-(M)α^|(M)|-(M)b^(M)κ(M,𝒞),where the two sums run over λ-injectively decorated maps (M,𝒞) of size 2j, which are bipartite in the two colors white and black, and such that all black vertices have degree 2. 
In order to add black-leaf edges as explained in the last step at the end of <ref>, we apply the operators X_+ and Y_+. Fix a map (M,𝒞) as in <ref>. Using <ref>, we get thatX_+^r+s-jY_+^r-j1/2^2j-(M)κ(M,𝒞)=∑_(N,𝒟)1/2^|λ|-(N)κ(N,𝒟),where the sum runs over coloredλ-injectively decorated side-marked bipartite maps (N,𝒟) of size |λ|, such that M is obtained from N by deleting all black-leaf edges and the coloring 𝒟 is inherited from 𝒞. This finishes the proof of the proposition.§.§ End of the proof of the main resultSince the generating functionis obtained from the functions (,-α,-α) by applying the operators X_+ and Y_+ (see <ref>) and since this function has an expression in terms of 1-column monomial functions (see <ref>), we should understand the action of the operators X_+ and Y_+ on 1-column monomial functions in . This is given by the following lemma.Let r,s and j be three non-negative integers satisfying j≤ r. Then X_+^r+s-jY_+^r-jm_1^j(𝐱𝐲)_|==∑_j≤ i ≤ rij2(r-i)+sr-i(r-j)!(r+s-j)!m_2^i,1^2(r-i)+s. For any subsetβ⊂ℕ^* and an alphabet of variables 𝐱=(x_1,x_2,...) we define x^β:=∏_i∈βx_i,andx^2β:=∏_i∈βx_i^2.With this notation, we writem_1^j(𝐱𝐲)=∑_|β|=jx^βy^β. We start by noticing that for every subset β⊂ℕ^*, one hasX_+x^β=∑_i∉βx^β∪{i}.Hence, for every β of size j we haveX_+^r+s-jY_+^r-jx^β y^β=(r+s-j)!(r-j)!∑_γ,δ_1,δ_2x^γ∪δ_1y^γ∪δ_2 where the sum is taken over γ,δ_1,δ_2⊂ℕ^*, satisfying * |γ|+|δ_1|=r+s and |γ|+|δ_2|=r.* β⊂γ.* The three sets δ_1, δ_2 and γ are disjoint.Taking the sum over sets β of size j and setting = givesX_+^r+s-jY_+^r-j m_1^j(𝐱𝐲)_|= =(r+s-j)!(r-j)!∑_|β|=j∑_γ,δ_1,δ_2x^2γx^δ_1∪δ_2=(r-j)!(r+s-j)!∑_j≤ i≤ r∑_γ,δC_j(γ,δ) x^2γx^δ ,where the second sum in the last line is taken over disjoint sets γ and δ such that |γ|=i and |δ|=2(r-i)+s, and where C_j(γ,δ) is the number of triplets of sets (β,δ_1,δ_2) of respective sizesj,r+s-i and r-i and such that β⊂γ and (δ_1,δ_2) is a partition of δ. Hence C_j(γ,δ) =2(r-i)+sr-iij,and this finishes the proof of the lemma. We now deduce the proof of the main theorem. Fix a MON ρ. Using <ref> and <ref> we get that =∑_0≤ j≤ r (r)_j(α+j-1)_jX_+^r+s-jY_+^r-jm_1^j(𝐱𝐲)_|=.On the other hand, using <ref> we get that∑_0≤ j≤ r(r)_j(α+j-1)_jX_+^r+s-jY_+^r-jm_1^j(𝐱𝐲)_|y_i=x_i=∑_0≤ j≤ r(r)_j(α+j-1)_j∑_j≤ i≤ rij2(r-i)+sr-i(r-j)!(r+s-j)!m_2^i,1^2(r-i)+s()=∑_0≤ i≤ rr!2(r-i)+sr-im_2^i,1^2(r-i)+s()∑_0≤ j≤ iij(α+j-1)_j(r+s-j)!=∑_0≤ i≤ r(r)_i (2(r-i)+s)!m_2^i,1^2(r-i)+s()∑_0≤ j≤ iij(α+j-1)_j(r+s-j)!/(r+s-i)!.The second sum in the last line can be rewritten as follows(-1)^ii!∑_0≤ j≤ i-αj-(r+s-i+1)i-j =(-1)^ii!-(α+r+s-i+1)i=(α+r+s)_i. Hence, we obtain that =∑_i=1^r(r)_i(α+r+s)_i(2(r-i)+s)!m_2^i1^2(r-i)+s(𝐱).Using <ref>, we get that=J_λ^(α). § APPLICATION:FOR THE LOW DEGREE COEFFICIENTS INWe recall that α is the Jack parameter, and b is the parameter related to α by α=b+1. Let R be a field. We denote by R (α) the field of rational functions in α with coefficients in R. Forf∈ R (α) and an integer m, we write f=O(α^m) if the rational function α^-m· f has no pole in 0. The main result of this section establishes the cases of α=0 and the coefficient [α] in <ref>.Let λ be a partition of size n.Then, there exists aρ such thatJ_λ^(α)(𝐩)=∑_M∈_n(λ)(-1)^n-|(M)|α^|(M)|-(M)b^ϑ^ρ(M)p_(M)+O(α^2). Let λ^1,...,λ^r be a family of partitions. We denote by ⊕_1≤ i≤ rλ^i its entry-wise sum defined by (⊕_1≤ i≤ rλ^i)_j=∑_1≤ i≤ rλ^i_j, for every j≥ 1. 
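As a small illustration (added here, not taken from the paper), the entry-wise sum of partitions can be computed as follows; the example also checks that summing the 1-column partitions 1^{λ^t_i} of a partition recovers the partition itself, which is the column decomposition used below.

from itertools import zip_longest

def entrywise_sum(*parts):
    # (lambda^1 (+) ... (+) lambda^r)_j = sum_i lambda^i_j
    summed = [sum(vals) for vals in zip_longest(*parts, fillvalue=0)]
    return [p for p in summed if p > 0]

def columns(part):
    # the 1-column partitions 1^{lambda^t_i}, i = 1..lambda_1
    conj = [sum(1 for p in part if p >= j) for j in range(1, part[0] + 1)]
    return [[1] * c for c in conj]

lam = [3, 2, 2]
print(columns(lam))                  # [[1, 1, 1], [1, 1, 1], [1]]
print(entrywise_sum(*columns(lam)))  # [3, 2, 2], i.e. lambda itself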
In particular, a partition λ can be written as the entry-wise sum of its columns; λ=⊕_1≤ i≤λ_11^λ^t_i, where λ^t denotes the conjugate partition of λ.The following theorem is a particular case of the strong factorization property of Jack polynomials due to Dołega and Féray <cit.>. <cit.> Let λ^1, λ^2 and λ^3 be three partitions. Then J^(α)_λ^1⊕λ^2⊕λ^3-J^(α)_λ^1⊕λ^2J^(α)_λ^3-J^(α)_λ^1⊕λ^3J^(α)_λ^2-J^(α)_λ^2⊕λ^3J^(α)_λ^1+2J^(α)_λ^1J^(α)_λ^2J^(α)_λ^3=O(α^2). For α=0, we have the following expression for Jack polynomials , see <cit.>. J_λ^(0)=∏ _1≤ i≤λ_1J_λ^t_i^(0).Using the strong factorization property we can generalize this result as follows; the coefficient of α^r-1 in the expansion of a Jack polynomial as a polynomial in α, can be obtained using only Jack polynomials indexed by partitions with less than r columns. We prove here this result for r=2 using <ref>. Let λ be partition with 3 columns or more. Then, [α]J_λ^(α)=∑_1≤ i<j≤λ_1([α]J_1^λ^t_i⊕1^λ^t_j^(α))∏_k≠,i,jJ_1^λ^t_k^(0).We prove the result by induction on the number of columns of λ. If λ has 3 columns then the result is direct consequence of <ref> and the fact that Jack polynomials indexed by 1-column partitions are independent from α (see <ref>). Let λ be a partition containing more than 3 columns. We denote by 1^λ^t_1 and 1^λ^t_2 the two first columns of λ and by μ the partition obtained by taking the entry-wise sum of the other columns. Hence we have λ=1^λ^t_1⊕1^λ^t_2⊕μ. From <ref> we get that [α]J^(α)_λ =[α][J^(α)_1^λ^t_1⊕1^λ^t_2J^(α)_μ+J^(α)_1^λ^t_1⊕μJ^(0)_1^λ^t_1+J^(α)_1^λ^t_2⊕μJ^(0)_1^λ^t_1-2J^(0)_1^λ^t_1J^(0)_1^λ^t_2J^(α)_μ]=([α]J^(α)_1^λ^t_1⊕1^λ^t_2)J^(0)_μ+J^(0)_1^λ^t_1⊕1^λ^t_2([α]J^(α)_μ)+([α]J^(α)_1^λ^t_1⊕μ)J^(0)_1^λ^t_1+([α]J^(α)_1^λ^t_2⊕μ)J^(0)_1^λ^t_1-2J^(0)_1^λ^t_1J^(0)_1^λ^t_2([α]J^(α)_μ).For each 1≤ i< j≤λ_1, we prove that the term([α]J_1^λ^t_i⊕1^λ^t_j^(α))∏_k≠,i,jJ_1^λ^t_k^(0)appears with coefficient 1 in the last sum by applying the induction hypothesis on the partitions 1^λ^t_1⊕μ, 1^λ^t_2⊕μ and μ and then using<ref>; we distinguish the three cases* (i,j)=(1,2),* i≤ 2 and j>2,* 2<i<j.The following lemma will be useful in the proof of <ref>.Fix a partition λ and a λ-injectively decorated map M, such that |𝒱_∘(M)|-(M)=0. Then, M is orientable and then ϑ(M)=0 for every statistic of non-orientability ϑ. Since the number of white vertices of M is equal to the number of connected component, then M has exactly one white vertex in each connected component. On the other hand, since M is λ-injectively decorated then the underlying graph does not have multiple edges. Hence, all black vertices of M has degree 1, and M is orientable.It can be shown that the right-hand side of <ref> also satisfies the strong factorization of Féray and Dołega. We prove here the two first equations of this property which will be useful for the proof of <ref>.For every non-orientability statistic ϑ, we have F^(0)_λ,ϑ=∏ _1≤ i≤λ_1F_λ^t_i,ϑ^(0), and [α]=∑_1≤ i<j≤λ_1([α]F_1^λ^t_i⊕1^λ^t_j,ϑ^(α))∏_k≠,i,jF_1^λ^t_k,ϑ^(0). When α=0, only the maps M such that |𝒱_∘(M)|-(M)=0 appear in the sum defining , i.e maps having exactly one white vertex in each connected component. Hence, in such a map the edges of one connected component are labelled by boxes in the same column of λ. We deduce that for such a map M (necessarily orientable by <ref>), we have (-1)^n-|(M)|p_(M)=∏_1≤ i≤λ_1(-1)^|M_i|-|(M_i)|p_(M_i),where for each i, M_i denotes the collection of connected components of M, whose edges are decorated by boxes in the column i of λ. 
This finishes the proof of <ref>. When we consider the coefficient[α], only maps M such that |(M)|-(M)=1 contribute to the sum (this is a consequence of <ref>).Each connected component of such a map contains exactly one white vertex, except for one that contains two white vertices. It is easy to check that the edges of such a connected component are labelled by boxes in two different columns. Using multiplicativity arguments as in <ref>, we deduce <ref>.We deduce the proof of <ref>.We use the fact that the coefficients [α^0] and [α] of Jack polynomials and of the generating seriessatisfy the same equations (<ref> on one hand and <ref> on the other hand) and that in these equations only functions J^(α)_λ andindexed by 1- and 2- column partitions are involved. But we know from <ref> that there exist a MON ρ such that <ref> holds in these cases. § PROOF OFFollowing <cit.>, we introduce the differential operatorsA_1:=p_1/α, A_2:=∑_i≥1p_i+1i∂/∂ p_i,A_3:=(1+b)∑_i,j≥1p_i+j+1ij∂^2/∂ p_i∂ p_j+∑_i,j≥1p_i p_j(i+j)∂/∂ p_i+j-1+b∑_i≥1p_i+1i∂/∂ p_i,on the algebra ℚ(b)[p_1,p_2,..]. In the following proposition, we give a combinatorial interpretation of these operators. Fix aρ and a labelled side-marked bipartite map M of size n.Then,A_i [b^(M)/2^|M|-(M)α^(M)p_(M)]=∑_Mb^(M)/2^|M|-(M)α^(M)p_(M), for1≤ i≤3, where the sum is taken over labelled side-marked maps M=M∪{e} obtained by adding a side-marked edge e of label n+1 to the map M, such that: * if i=1, then e is a disconnected edge.* if i=2, then e is a leaf-edge connecting a white corner of M to an isolated black vertex (or equivalently a black corner of M to an isolated white vertex).* if i=3, then e connects two cornersof different colors in M.We start by observing that the action of the operator i∂/∂ p_i on p_(M) can be interpreted as choosing a white corner (or equivalently a black corner) in a face of degree i in M (we recall that such a face contain i corners of each color).In the case of item (1), we add a face of size 1 and the number of connected components increases by 1, hence the multiplication by p_1/α.In the case of item (2), we choose a white corner in a face f and we increase the degree of f by 1. In these two cases, the b-weight of the map does not change by adding the edge e.We now focus on item (3). We distinguish three cases. * We add the edge e between two corners c_1 and c_2 which are incident to two different faces f_1 and f_2 of respective sizes i and j, to form a face of size i+j+1. Let us show that this case corresponds to the first term of the operator A_3. Let ẽ denote the edge obtained by twisting e, see <ref>. If the two faces lie in the same connected component of M, then e is a handle andby definition of a MON ρ(M∪{e},e)+ρ(M∪{ẽ},ẽ)=1+b,and this explains the factor 1+b in the first terms of A_3.If the two faces lie in two different connected components, then e is bridge andρ(M∪{e},e)=ρ(M∪{ẽ},ẽ).In this case, the factor 1+b=α in the first term of A_3 is related to the fact that the number of connected components of M decreases by 1. * The added edge e is a diagonal between two corners incident to a face of degree i+j-1, which splits the face into two faces of respective degrees i and j. Then ρ(M∪{e},e)=1 and both the b- and the α-weight of the map are unchanged. * The edge e a twist added on a face of degree i≥1 to obtain a face of degree i+1. Thenρ(M∪{e},e)=b. Note that in the cases of items (2) and (3), we have each time two ways to distinguish a side on the added edge, so that the map obtained is side-marked. 
This explains the factor 2^|M|-(M) which appears in the denominator. We deduce the following theorem. Fix aρ. For every n≥ 0, we have(n+1)B^(α)_n+1,ρ(,u_1,u_2)=(A_3+(u_1+u_2)A_2+u_1u_2A_1)B^(α)_n,ρ(,u_1,u_2). We recall that by definition of the functionhas the following expression (,u_1,u_2)= ∑_M1/n!2^n-(M)u_1^|(M)|u_2^|(M)|α^-(M)b^(M)p_(M)(),where the sum is taken over labelled side-marked maps of size n. The theorem is then a direct consequence of <ref>.The previous theorem is a variant of the decomposition equation of k-constellation established in <cit.>; the case of bipartite maps that we consider here corresponds to k=2 and to the specialization 𝐪:=(1,0,0,..) with the notation of <cit.> (see also the proof of <ref> below). We now deduce the proof of <ref>. Let 𝐪=(q_1,q_2,...) be an additional sequence of variables. We consider the following function introduced in <cit.>.τ_b^(2)(t,𝐩,𝐪,u_1,u_2)=∑_j≥0t^n∑_ξ⊢ jJ^(α)_ξ(𝐩)J^(α)_ξ(𝐪)J^(α)_ξ(u_1)J^(α)_ξ(u_2)/j^(α)_ξ.From <ref>, we obtain that ∑_n≥ 0 t^n(,u_1,u_2) satisfies the differential equation <cit.> for k=2, m=1 and 𝐪=δ_1:=(1,0,0,...). Since this equation fully characterizes the functions , then using <cit.>, we get that ∑_n≥ 0 t^n(,u_1,u_2)=τ_b^(2)(t,𝐩,δ_1,u_1,u_2).We conclude using the fact that J^(α)_λ(δ_1)=1 (see <ref>).Acknowledgements. The author is very grateful to his advisors Valentin Féray and Guillaume Chapuy for several interesting discussions about Jack polynomials and maps enumeration.alpha | http://arxiv.org/abs/2310.17756v1 | {
"authors": [
"Houcine Ben Dali"
],
"categories": [
"math.CO",
"05E05"
],
"primary_category": "math.CO",
"published": "20231026194832",
"title": "A note on the map expansion of Jack polynomials"
} |
^1Center for Intelligent & Interactive Robotics Research, Korea Institute of Science and Technology, Seoul, 02792, Korea. Email: ([email protected]), ([email protected]) ^* Corresponding author This work was supported by Korea Institute of Science and Technology (KIST), under Grant 2E32302. This paper proposes a new methodology for deriving a point-based dimensionally homogeneous Jacobian, intended for performance evaluation and optimization of parallel manipulators with mixed degrees of freedom. Optimal manipulator design often relies on performance indices obtained from the Jacobian matrix. However, when manipulators exhibit mixed translational and rotational freedoms, the conventional Jacobian's inconsistency of units leads to unbalanced optimization results. Addressing this issue, a point-based dimensionally homogeneous Jacobian has emerged as a prominent solution. However, existing point-based approaches for formulating a dimensionally homogeneous Jacobian are applicable to a limited variety of parallel manipulators. Moreover, they are complicated and less intuitive. This paper introduces an extended selection matrix that combines component velocities from different points to describe the entire motion of the moving plate. The proposed approach enables us to formulate an intuitive point-based, dimensionally homogeneous Jacobian, which can be applied to a wide variety of constrained parallel manipulators. To prove the validity of the proposed method, a numerical example is provided utilizing a four-degree-of-freedom parallel manipulator. Dimensionally Homogeneous Jacobian using Extended Selection Matrix for Performance Evaluation and Optimization of Parallel Manipulators Hassen Nigatu^1 and Doik Kim^1* January 14, 2024 ======================================================================================================================================== § INTRODUCTION Performance evaluation and obtaining optimized architectural parameters are vital steps in the design of parallel manipulators (PMs), as they significantly influence the effectiveness and accuracy of a robot's movements. The challenge lies in performing these tasks when the manipulator's degrees of freedom (DoFs) are a combination of rotational and translational types. This is primarily due to the inconsistency in the units or dimensions of the Jacobian, a factor that significantly affects the performance-measuring indices of parallel manipulators <cit.>. Several approaches have been suggested to address this problem <cit.>, and among them, Jacobian-based methods have been widely used. This popularity can be attributed to their capability to effectively translate the inherent mapping from joint velocities to end-effector velocities, providing a good intuitive framework <cit.>. There is also a variety of Jacobian-based approaches for homogenizing the units of the Jacobian matrix <cit.>. Among these approaches, the point-based approach is more intuitive <cit.>. Despite its advantage, the first point-based dimensionally homogeneous Jacobian (DHJ) formulations comprise dependent motions in their entries, resulting in a condition number with unclear physical meaning and potentially erroneous results <cit.>. In response to this problem, Pond et al. <cit.> proposed a method to eliminate the undesired dependent motions from the system. However, the method is quite complicated to comprehend and involves tedious derivation procedures, which lead to a higher computation cost <cit.>.
To overcome this issue, the selection matrix, together with the shifting property and the conventional Jacobian, is used to formulate a point-based dimensionally homogeneous Jacobian matrix <cit.>. However, this previous paper by the authors focused on a specific scenario where each component's velocity encompassed the desired motion of the moving plate, such as 1T2R PMs with TzRxRy type of motion. Considering the aforementioned limitations, this paper formulates an f × f point-based DHJ matrix using an extended selection matrix, where f represents the DoF of the mechanism. This Jacobian matrix maps the platform's nominal linear velocity to the joint rate. Here, nominal linear velocity refers to the velocity obtained by combining component velocities from different points, which can represent the entire motion of the moving plate. This approach integrates the extended selection matrix, the linear velocity of points on the moving plate, and the manipulator's conventional Jacobian, resulting in a square DHJ. The dimensional homogeneity of the resulting Jacobian is analytically proven. To validate the correctness of the proposed method, a numerical comparison is carried out using a four-degree-of-freedom parallel manipulator as an example. First, the distribution of the condition number is evaluated across the manipulator's rotational workspace, highlighting the disparity between the condition number values of the conventional and dimensionally homogeneous Jacobians. Then, the units of the geometric parameters are changed from millimeters to meters, and the condition number is reassessed to determine whether it is invariant under unit changes. § FORMULATION OF THE DIMENSIONALLY HOMOGENEOUS JACOBIAN The derivation of the DHJ involves the following steps. First, the screw-based constraint-embedded inverse Jacobian of the manipulator is formulated and inverted to get the constraint-compatible forward relation. Then, points that can adequately represent the motion of the moving plate are chosen and related to the Cartesian velocity. Next, the extended selection matrix is derived and applied to the points' linear velocities. This combines components from different points to effectively describe the moving plate's motion, while also eliminating unwanted or dependent components from the equation. The resulting velocity is termed the nominal linear velocity. Finally, the nominal linear velocity of the moving plate and the forward velocity equation are related with an f × f dimensionally homogeneous Jacobian matrix. §.§ Constraint-Embedded Velocity Relation The screw-based Jacobian of the manipulator can be analytically obtained using the method introduced in <cit.>. Given the task velocity, 𝓍̇, of the moving plate, the general inverse velocity equation of the parallel manipulator has the following form. [ q̇;0 ]= [ G_a^T; G_c^T ]𝓍̇ = [ G_av^T G_aw^T; G_cv^T G_cw^T ][ v; ω ] The units of the entries in G in Eq. (<ref>) depend on the type of actuators employed in the manipulator. This paper focuses exclusively on scenarios where the manipulator employs only linear or rotational actuators, not considering situations involving a combination of these actuator types. Inverting Eq. (<ref>) yields a constraint-compatible forward velocity relation as 𝓍̇ = Jq̇ = [ J_a J_c ][ q̇_a;0 ] In Eq. (<ref>), J∈ℝ^6 × 6 is the inverse of G^T and its sub-matrix J_c is related to the constraint.
Thus, we can explicitly describe the relation of 𝓍̇ and q̇_a as𝓍̇ = J_aq̇_a, where J_a = [ J_a1; J_a2 ] The Cartesian velocity, 𝓍̇∈ℝ^6 × 1, in Eq. (<ref>) is constraint compatible.When the manipulator employs linear actuators, J_a1 is dimensionless, while J_a2 has a unit of 1/length. Conversely, if the manipulator utilizes rotational actuators,J_a1 has a unit of length and J_a2 dimensionless. Considering these distinctions, the point's linear velocity and selection matrix are established to ensure consistency or removal of units in the Jacobian. §.§ Linear Velocity of PointsAccording to the well known shifting property <cit.> in the rigid body kinematics, any points velocity on the moving plate can be related to the Cartesian velocity of the moving plate as v_i = v + ω×a_i where v and ω denotes the linear and angular velocity of the moving plate, while a_i corresponds to a constant vector extending from the origin of the Cartesian reference frame to the i^th point on the moving plate. Expanding Eq. (<ref>) reveals the motion of the moving plate that each component of v_i encompasses. v_ix = v_x + ω_y a_iz -ω_z a_iy v_iy = v_y - ω_x a_iz +ω_z a_ix v_iz = v_z +ω_x a_iy -ω_y a_ix By distributing these points on the moving plate in a noncollinear manner, it is possible to satisfy the minimum requirement of points needed to fully represent the motion of the moving plate. Theoretically, the translations of three noncollinear points on the moving plate are sufficient to uniquely identify the motion of the body in terms of translation and rotation, but more points may be required depending on the DoF of the mechanism. Hence, Eq. (<ref>) can be generalized as v_p =[ v_1; ⋮; v_i ] = [I -[a_1]_×;⋮⋮;I -[a_i]_× ][ v; ω ]= V_p𝓍̇ where V_p ∈ℝ^3f × 6 maps the moving plate cartesian velocity to the points velocity on the moving plate. Vector v_i in Eq. (<ref>) has three components and hence from v_p ∈ℝ^3f × 1, we need to determine the components that can appropriately describe the motion of the moving plate via a selection matrix <cit.> as follows Sv_p = SV_p𝓍̇, where S∈ℝ^f × 3f is a selectionmatrix that extracts the components from v_p. v_ps= V_ps𝓍̇ where V_ps∈ℝ^f × 6 However, deriving the selection matrix S is not always straightforward. This is because only manipulators whose moving plates exhibit T_xR_yR_z, T_yR_xR_z and T_zR_xR_y types of motion can be uniquely represented with the component velocities shown in Eq. (<ref>). For a comprehensive understanding of the establishment of selection matrices for these groups of PMs, readers are encouraged to refer to <cit.>. PMs falling outside of these categories will need to utilize a combination of components from different points, an approach that is covered in this paper. §.§ Dimensionally Homogeneous Jacobian In this paper, we derive the dimensionally homogeneous Jacobian by representing the motion of the moving plate using linear velocity, ensuring uniform units across its entries. However, it is important to note that the linear velocities used here are not merely the component velocities of individual points on the moving plate. Instead, they are a combination of components from various points. This approach is used to encompass all desired motion of the moving plate into a representative velocity equation, which we call it the nominal velocity. To derive the dimensionally homogeneous Jacobian, relations, Eq. (<ref>) and Eq. (<ref>)are combined as follows v_ps = V_ps𝓍̇= V_psJ_aq̇_a = J_dhq̇_aIn Eq. 
(<ref>),J_dh∈ℝ^f × f is a Jacobian that relates the nominal linear velocity (v_ps) of the moving plate to the actuated joint rate (q̇_a ∈ℝ^f × 1). To demonstrate the consistency of units in its entries, we considered the following two generic cases. Case 1: PMs with linear actuators. In this case, q̇_a has unit of length/time while S is dimensionless. Referring to Eq. (<ref>), we can observe that the first term is dimensionless, while the second term has a unit of length. Furthermore, in Eq. (<ref>), we know Block matrix J_a1 is dimensionless and J_a2 has a unit of 1/length. As a result, we conclude that the Jacobian for this particular group of manipulators is dimensionless. Case 2: PMs with rotational actuators. For PMs with rotational actuators q̇_a has unit of angle/time while the unit of V_p is unchanged. Furthermore, the matrix J_a1 has a unit of length and J_a2 is dimensionless for this group of PMs. Consequently, the resulting Jacobian J_dh to has a unit of length which is consistent. Because entries of J_dh are either dimensionless or dimensionally homogeneous, its condition number or singular values have physical significance and can be used to measure the dexterity of the manipulator.The next section demonstrates how to derive it by considering a relevant example: a four DoF (degrees of freedom) T_yT_zR_xR_y type Parallel Manipulator (PM). § EXAMPLEThe mechanism depicted in Fig. <ref> is a T_y T_z R_x R_y type 4 DoF PM <cit.> with a PUS joint order in the first and third limbs, and a PRS type joint sequence in the second and fourth limbs. The P joint is parallel to the z-axis, while the R joint in the PRS limb is parallel to the x-axis, and the U joint in the PUS limb has axes parallel to the x and y axes, respectively. The mechanism is capable of rotating about the x and y axes, as well as translating along the y and z directions. However, due to the presence of revolute joints in the second and fourth limbs, the mechanism is constrained in terms of x-axis translation and z-axis rotations, making it a zero-torsion type PM mechanism.The DoF of the mechanism can also be determined by employing Tsai's DoF formula, which is expressed as follows: F= λ(n-j-1) + ∑_i=1^j f_i 4 = 6(10-12-1)+22 Point A_i at the based is location of limbs while B_i is the center of spherical joints. Point C_i is the center universal joints for the first and third limbs while it is the center of revolute joints for limbs 2 and 4. The position vector a_i is extended from origin moving frame to the i^th spherical joint while b_i is extended from the fixed frame to the point A_i. The direction vector s_ji∥ is associated to each joint axis. To appropriately represent the motion of the moving plate, we need four points and for convenience these points are chosen to be the center of the spherical joints. Expanding Eq. (<ref>) to the four points located at the center of spherical joints at the moving plate, we get a 12 × 6 matrix that relates the i^th points linear velocity to the moving plate center velocity (𝓍̇) as [ v_1x; v_1y; v_1z;⋮; v_4x; v_4y; v_4z ] = [ 1 0 0 0a_1z -a_1y; 0 1 0 -a_1z 0a_1x; 0 0 1a_1y -a_1x 0; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 1 0 0 0a_4z -a_4y; 0 1 0 -a_4z 0a_4x; 0 0 1a_4y -a_4x 0 ][ v_x; v_y; v_z; ω_x; ω_y; ω_z ] The points' linear velocity in Eq. (<ref>) includes 12 components, three for each point. Therefore, we have many dependent motions, yet we only require four components. Referring to Eq. (<ref>), none of the components encompass the desired motion of the moving plate. 
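As an illustration of the construction above (this sketch is added here and is not from the paper; the a_i values are placeholders rather than the actual joint locations), the 12 × 6 map V_p can be assembled directly from the four position vectors a_i using the shifting property v_i = v + ω×a_i:

import numpy as np

def skew(a):
    # cross-product matrix [a]_x such that skew(a) @ w == np.cross(a, w)
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def build_Vp(points):
    # v_i = v + w x a_i  =  [ I  -[a_i]_x ] [v; w]  -> stack one 3x6 block per point
    return np.vstack([np.hstack([np.eye(3), -skew(a)]) for a in points])

# placeholder spherical-joint centres (units of length)
a = [np.array([0.12, 0.10, 0.0]),
     np.array([-0.08, 0.11, 0.0]),
     np.array([-0.12, -0.09, 0.0]),
     np.array([0.09, -0.11, 0.0])]

Vp = build_Vp(a)
print(Vp.shape)   # (12, 6): twelve component velocities for the four points

Each row of Vp gives one of the twelve component velocities above; selecting and combining the useful ones is the role of the extended selection matrix constructed next.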
Hence, we need to formulate a selection matrix that can combine components from different points and obtain a nominal velocity that describes the motion of the moving plate. As the independent motion of the moving plate for this manipulator are v_y, v_z, ω_x and ω_y, combining v_iy and v_iz components can sufficiently describe the manipulator's motion. However, a combination of component is not unique and one can freely choose one of the following pairs. Limb 1: (v_1y, v_2z), (v_1y, v_3z), (v_1y,v_4z)Limb 2: (v_2y, v_1z), (v_2y, v_3z), (v_2y,v_4z)Limb 3: (v_3y, v_1z), (v_3y, v_2z), (v_3y,v_4z)Limb 4: (v_4y, v_1z), (v_4y, v_2z), (v_4y,v_3z) For this particularly case, we selected the following combination from Eq. (<ref>). Limb 1: (v_1y, v_2z) Limb 2: (v_2y, v_3z) Limb 3: (v_3y,v_4z)Limb 4: (v_4y, v_1z) By utilizing Eq. (<ref>), we can establish the extended selection matrix as [b] S = [ 0 -a_2xa_1x - a_2x1 0a_1xa_1x - a_2x0 0 0 0 0 -a_3xa_2x - a_3x1 0 0 0 0 0 0 0 -a_4xa_1x - a_4x1 0 0 0.. 0 0 0 0 0 0 0a_2xa_2x - a_3x0 0 0 0 0 -a_4xa_3x - a_4x1 0a_3xa_3x - a_4x0 0 0 0 0a_1xa_1x - a_4x0 ] It is quite important to note that S in Eq. (<ref>) is dimensionless and same as to that the usual selection matrix in terms of units. However, the usual selection matrices <cit.> have entries of 1s and 0s unlike the extended selection matrix derived in the paper. Then, multiplying Eq. (<ref>) with Eq. (<ref>),matrixV_ps∈ℝ^4 × 4 relates the nominal velocity (v_ps) and independent Cartesian velocity of the moving plate as in Eq. (<ref>). [b] [ v_1; v_2; v_3; v_4 ] = [ a_1xa_1x - a_2x - a_2xa_1x - a_2x1 a_2xa_2x - a_3x - a_3xa_2x - a_3x1 a_3xa_3x - a_4x - a_4xa_3x - a_4x1 a_1xa_1x - a_4x - a_4xa_1x - a_4x1 .. a_1ya_1x - a_2xa_1z + a_1xa_3za_1x - a_2x-a_1x a_3ya_2x + a_2xa_3z - a_3xa_3za_2x - a_3x-a_2x a_3ya_3x + a_3xa_4z - a_4xa_3za_3x - a_4x-a_3x a_1ya_1x + a_1xa_4z - a_4xa_1za_1x - a_4x-a_1x] [ vy; vz; wx; wy ] where, [ v_1; v_2; v_3; v_4 ] =[ (v_2y + v_1z)a_1x - (v_1y + v_1z)a_2xa_1x - a_2x; (v_3y + v_2z)a_2x - (v_2y + v_2z)a_3xa_2x - a_3x; (v_4y + v_3z)a_3x - (v_3y + v_3z)a_4xa_3x - a_4x; (v_4y + v_1z)a_1x - (v_1y + v_1z)a_4xa_1x - a_4x;] The inverse Jacobian of the manipulator is obtained through the analytic screw theory method and is given as shown in Eq. (<ref>). The first four row of G^T represents the motion Jacobian while the last two depicts the structural constraints. Hence, G_c^T𝓍̇ =0 is always satisfied. q̇ = [ G_a^T; G_c^T ]𝓍̇= [ n_1^Tn_1^Ts_11∥ (n_1×a_1)^Tn_1^Ts_11∥; l_2^Tl_2^Ts_12∥ (l_2×a_2)^Tl_2^Ts_12∥; n_3^Tn_3^Ts_13∥ (n_3×a_3)^Tn_3^Ts_13∥; l_4^Tl_4^Ts_14∥ (l_4×a_4)^Tl_4^Ts_14∥; s_22∥^T (s_22∥×a_2)^T; s_24∥^T (s_24∥×a_4)^T; ][ v; ω ] where G_a^T ∈ℝ^4 × 6 andG_c^T ∈ℝ^2 × 6 and n_i = s_3i∥×s_2i∥. l_i is a vector extending from C_i to B_i. The first term of G^T is dimensionless while the second term has a unit of length. Hence, units of theinverse Jacobian of this manipulator is inconsistent and must be changed to dimensionless or consistent unit. The forward Jacobian J_a ∈ℝ^6 × 3 is analytically obtained by inverting G^-T as in Eq. (<ref>). J_a= [G_av^-TG_aw^T(G_cw^T-G_cv^TG_av^-T× -(G_cw^T - G_cv^TG_av^-TG_aw)^-1×..G_av^-T+ G_aw^T)^-1G_cv^TG_av^-T G_cv^TG_av^-T] By substituting Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>), and subsequently replacing 𝓍̇ with J_aq̇ in Eq. (<ref>), we derive the 4 × 4 dimensionless Jacobian as discussed in case 1. 
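To make the assembly of the dimensionally homogeneous Jacobian explicit, the sketch below (an added illustration continuing the V_p sketch above, not the authors' code; the geometry and the forward Jacobian J_a are placeholders) builds the extended selection matrix of Eq. (<ref>) from the x-coordinates a_ix and forms J_dh = S V_p J_a:

import numpy as np

def extended_selection(ax):
    # rows combine the pairs of points (1,2), (2,3), (3,4), (1,4) as in Eq. (<ref>):
    # row k: coeff of v_{i,y} = -a_jx/(a_ix - a_jx), v_{j,y} = a_ix/(a_ix - a_jx), v_{i,z} = 1
    S = np.zeros((4, 12))
    for row, (i, j) in enumerate([(0, 1), (1, 2), (2, 3), (0, 3)]):
        ai, aj = ax[i], ax[j]
        S[row, 3 * i + 1] = -aj / (ai - aj)
        S[row, 3 * j + 1] = ai / (ai - aj)
        S[row, 3 * i + 2] = 1.0
    return S

ax = [0.12, -0.08, -0.12, 0.09]                       # placeholder a_ix values (pairwise distinct)
S = extended_selection(ax)                            # 4 x 12, dimensionless
V_ps = S @ Vp                                         # 4 x 6, Vp from the previous sketch
J_a = np.random.default_rng(0).normal(size=(6, 4))    # placeholder forward Jacobian (6 x f, f = 4)
J_dh = V_ps @ J_a                                     # 4 x 4 dimensionally homogeneous Jacobian
print(J_dh.shape, np.linalg.cond(J_dh))

In practice the placeholder J_a above is replaced by the forward Jacobian obtained from the inverse of the screw-based G^T, as described above.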
§.§ Numerical Evaluation In order to verify the correctness of the derived dimensionally homogeneous Jacobian, the distribution of the condition number (k) of the manipulator over the entire workspace is evaluated using the geometric and motion parameters outlined in Table <ref>. It is known that in parallel manipulator design, the condition number (k) of the Jacobian matrix can be used as a performance measure to evaluate the quality of motion, precision, and stability of the manipulator. The best value of k is 1, which is the minimum possible value; it indicates that all columns (or rows) of the Jacobian matrix are orthogonal to each other. This implies that the system of equations is well-conditioned and the solution will not be overly sensitive to errors in the data or to small changes in the input. This can be interpreted as the manipulator being isotropic <cit.>. As k increases beyond 1, the system of equations becomes increasingly ill-conditioned. This means that the solution may be very sensitive to errors in the data or to small changes in the input, and hence the manipulator is approaching a singularity. Conversely, if the value of k is small enough and remains close to 1, it can be interpreted as the manipulator being away from a singular configuration. Accordingly, k=cond(G^T) is first computed and the result is shown in Fig. <ref> over the rotational workspace. The simulation results indicate a substantial increase in the condition number, which does not adequately reflect the physical properties of the manipulator. Consequently, cond(J_dh) was determined, as shown in Fig. <ref>. In the rotational workspace, the value of k remained low and near 1. This value of k can properly indicate whether the manipulator is far from or approaching a singular configuration. Additionally, the sensitivity of k of both Jacobians to unit changes is evaluated over the rotational workspace. The results show a significant discrepancy in the condition number of the conventional Jacobian when the units are changed from millimeters to meters, as depicted in Fig. <ref>. Compared with Fig. <ref>, this result could be mistaken for a better-optimized design, even though nothing has changed but the units. However, the value of cond(J_dh), when measured in meters, remained unchanged. The result of cond(J_dh) in meters is not provided here because it is the same as that shown in Fig. <ref>. Hence, cond(J_dh) is invariant under the change of units. As previously mentioned, the choice of component combinations is not unique. Hence, we have the flexibility to choose various pairs of v_iy and v_iz from the provided candidates. For instance, by selecting (v_1y, v_3z), (v_2y, v_4z), (v_3y, v_1z), and (v_4y, v_2z), we can derive the following selection matrix. [b] S = [ 0 -a_2xa_3x - a_2x1 0a_3xa_3x - a_2x0 0 00 0 -a_4xa_2x - a_4x1 0 00 0 0 0 0 -a_4xa_2x - a_4x1 0 0 0.. 0 0 0 0 0 0 0a_2xa_2x - a_4x0 0 0 0 0 -a_1xa_3x - a_1x1 0a_3xa_3x - a_1x0 0 0 0 0a_2xa_2x - a_4x0 ] Then, with this selection matrix, we establish the dimensionally homogeneous Jacobian as shown in Eq. (<ref>) and the condition number distribution over the workspace is evaluated. The simulation has shown the same result as that of the dimensionally homogeneous Jacobian obtained using Eq. (<ref>). This consistent property of the Jacobian matrix is quite important when using the condition number as a measure of performance or computing the dexterity for parameter optimization of PMs. § CONCLUSION This paper introduces an extended selection matrix to formulate a point-based, dimensionally homogeneous Jacobian of various constrained parallel manipulators. The proposed method allows the derived Jacobian's condition number and singular values to be utilized as performance indices and for optimization with unit independence. To validate the proposed approach, the condition numbers (k) of both the conventional Jacobian (G) and the dimensionally homogeneous Jacobian (J_dh) across the rotational workspace were compared. Simulation results indicated a large value of k for G and a remarkably stable value of k for J_dh. Further, we reassessed the distribution of the k value for the two Jacobians by changing the units from millimeters to meters. The results confirmed that k of G varied significantly, while k of J_dh remained consistent, irrespective of the unit change. This phenomenon proves the dimensional homogeneity of the proposed Jacobian, where both the linear and angular parts exhibit similar value distributions and are not unit-dependent. As a result, our method allows for the correct optimization of manipulators with mixed DoFs. By employing the proposed approach for different manipulators with mixed DoFs, we can confidently assess and optimize their performance. § ACKNOWLEDGMENT This work was supported by Korea Institute of Science and Technology (KIST), under Grant 2E32302.
"authors": [
"Hassen Nigatu",
"Doik Kim"
],
"categories": [
"cs.RO",
"cs.SC"
],
"primary_category": "cs.RO",
"published": "20231027023715",
"title": "Dimensionally Homogeneous Jacobian using Extended Selection Matrix for Performance Evaluation and Optimization of Parallel Manipulators"
} |
A Chebyshev Confidence Guided Source-Free Domain Adaptation Framework for Medical Image Segmentation Jiesi Hu, Yanwu Yang, Xutao Guo, Jinghua Wang*, Ting Ma* This work was supported in part by grants from the National Natural Science Foundation of P.R. China (62276081, 62106113), Innovation Team and Talents Cultivation Program of National Administration of Traditional Chinese Medicine (NO:ZYYCXTD-C-202004), Basic Research Foundation of Shenzhen Science and Technology Stable Support Program (GXWD20201230155427003-20200822115709001) and The Major Key Project of PCL (PCL2021A06). (Corresponding author: Jinghua Wang, Ting Ma.) Jiesi Hu , Yanwu Yang, and Xutao Guo are with School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, and The Peng Cheng Laboratory.(e-mail: [email protected], [email protected], [email protected]) Jinghua Wang is with School of Computer Science and Technology, Harbin Institute of Technology at Shenzhen. (e-mail: [email protected]) Ting Ma is with School of Electronics and Information Engineering, Harbin Institute of Technology at Shenzhen, The Peng Cheng Laboratory, Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology, Shenzhen, and International Research Institute for Artifcial Intelligence, Harbin Institute of Technology, Shenzhen. (e-mail: [email protected])Received XXX; accepted YYY ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ The ubiquity of complex machine learning has raised the importance of model-agnostic explanation algorithms. These methods create artificial instances by slightly perturbing real instances, capturing shifts in model decisions. However, such methods rely on initial data and only provide explanations of the decision for these. To tackle these problems, we propose Therapy, the first global and model-agnostic explanation method adapted to text which requires no input dataset. Therapy generates texts following the distribution learned by a classifier through cooperative generation. Because it does not rely on initial samples, it allows to generate explanations even when data is absent (e.g., for confidentiality reasons). 
Moreover, conversely to existing methods that combine multiple local explanations into a global one, Therapy offers a global overview of the model behavior on the input space. Our experiments show that although using no input data to generate samples, Therapy provides insightful information about features used by the classifier that is competitive with the ones from methods relying on input samples and outperforms them when input samples are not specific to the studied model.*Equal contribution. § INTRODUCTIONThe emergence of machine learning models has led to their adoption in domains spanning from mere recommendations to critical areas such as healthcare <cit.> and law <cit.>. These already complex models keep becoming larger, emphasizing their black-box denomination. This lack of transparency however slows their adoption in various areas since we witness a notable rise of deployed models suffering from bias. For example, some chatbots biased toward religious <cit.> and gender <cit.> minorities have been released and explaining their inner mechanisms is still an ongoing problem. Among the methods proposed to tackle these problems, model-agnostic approaches are favored since applicable to any machine learning model. Among these, local explanations have obtained strong success by maintaining a good trade-off between accuracy and transparency. These explanations are generated in the proximity of a target instance by tampering this input to create neighbors and study how the model reacts to these changes. This allows them to highlight which features are important for the model and to provide explanations on the decision for this input (e.g., the most important words for each class). According to a recent study <cit.>, LIME <cit.>, while being the first model-agnostic local explanation method is still the most widely used. However, local explanations have three main flaws when trying to explain a model. First, it obviously requires to have inputs to explain, which might not be possible due to confidentiality or privacy reasons <cit.>. Second, selecting inputs that are representative of the model or the downstream data distribution is difficult. Finally, it will explain the decision for this input and for this input only. This only provides very local information on the model behavior, which represents only a very small piece of the input domain of the model. Therefore, LIME and other local explanation methods have proposed to aggregate the information from multiple samples to provide global explanations. However, these explanations are strongly tied to the input samples and only provide cues about the samples' neighborhood. These methods thus require samples that cover as much of the space as possible.To relax this sample dependency and generate global explanations of the model, we propose Therapy, a method that leverages cooperative generation <cit.> to generate texts following the distribution of a classifier. The distribution of the resulting samples can then be used to study which features are important for the model, providing global information on its behavior.In this paper, we first introduce the related work in Section <ref> and cooperative text generation in Section <ref>. We then present Therapy in Section <ref> and the experiments conducted to compare its performance to standard explanation methods in Section <ref>.§ RELATED WORK Generating explanations for textual data is challenging since it requires considering both the text semantics and task domains. 
Moreover, it is frequent that models are already deployed and further evaluations are required (e.g., fairness, bias detection) but the training data is not accessible. This may be caused by data privacy, security, or simply because the dataset is too large to be analyzed. Thus, to fulfil this objective, researchers have focused on post-hoc explanations <cit.>. Following the categorization by Bodria et al. <cit.>, we distinguish between example-based and feature-attribution explanations. §.§ Example-Based ExplanationsTaking roots from social science <cit.>, the example-based explanations indicate either the minimum change required to modify the prediction –counterfactual– or illustrate class by showing representative instances –prototypes–. Counterfactual methods answer "what if" questions and have gained interest since being close to human reasoning, perturbing document until the model prediction differs <cit.>. Conversely, prototype methods select or generate representative instances for the target class. Among the example-based methods, some leverage on control codes to perturb the input text while others generate realistic sentences based on perturbation in a latent space. Polyjuice <cit.> and GYC <cit.> belong to the former and propose control codes varying from changing the sentiment and tense of the sentence to adding or replacing words. On the other hand, xSPELLS <cit.> and CounterfactualGAN <cit.> are methods that train respectively a Variational Autoencoder and a Generative Adversarial Network to convert input text to a latent space and return realistic sentences from this latent space. These methods hence convert the input document into a latent space and slightly perturb it until the closest counterfactual is found. §.§ Feature-Attribution ExplanationsFeature-attribution methods assign weights to input words, indicating the positive or negative impact on the final prediction. Methods such as SHAP <cit.>, LIME <cit.>, and their variants <cit.> are the most commonly used <cit.>. They are local since they perturb an input instance by slightly modifying it and studying the complex model in a given locality. For textual data, LIME randomly masks the words of the input document and trains a linear model on the collection of perturbed documents to predict the decisions of the complex model. The most important coefficients of the linear model associated with the input words are then returned as the explanation. While most explainability surveys <cit.> differentiated between local and global explanations, LIME also introduced LIME-SP (for submodular pick), a global method that generates n local explanations for a set of individual instances. These n instances are selected to cover as much of the input domain as possible and avoid redundancy.§ TEXT GENERATION§.§ Cooperative GenerationLanguage Models (LM) such as the GPT family <cit.> learn the probability distribution of sequences of symbols x_1, x_2, ⋯, x_T (most often tokens) taken from a vocabulary 𝒱, with variable lengths T. The probability of one sample x (also called likelihood) is defined as the joint probabilities over each of its tokens, which can be factorized using the chain rule: p(x_1:T)=∏_t=1^T p(x_t| x_1:t-1). The LM is trained to output a probability distribution over the dictionary for the next token given the input ones i.e.p(x_t | x_1:t-1) at a given time step t. 
This results in an auto-regressive LM that can generate sequences by iteratively using those distributions to emit a token x_t, and append it to the context x_1:t-1 for the next iteration. The generation process –or decoding– is often started using a small initial sequence: the prompt.Large LMs learn an excellent approximation of the true distribution of their training data, so generating samples that maximize the model likelihood p(x) allows to generate plausible texts. However, this approach offers very little control over the text being generated besides the initial prompt.Cooperative generation approaches <cit.>, where discriminative models are used to guide the LM during the generation, offer more control. They use the information from the external model to guide the LM to generate texts that have a property it recognizes. In situations where the model is a classifier which learns to output the probability D(c | x) of a sequence x to belong to a class c, the goal is to generate text that maximizes the probability of belonging to the target class. Evaluating D(c | x) for every sequence possible is intractable due to the size of the space (|𝒱|^n for a sequence of length n). Thus, these methods leverage the distribution of the LM to restrict the exploration to plausible sequences. This results in a sequence that is both well written and belongs to the target class since the produced sequence maximizes p(x)*D(c | x) ∝ p(x | c). §.§ Monte Carlo Tree Seach Guided Decoding Among cooperative approaches, the ones that leverage theMonte Carlo Tree Search (MCTS) to guide the decoding of the LM exhibited very strong results <cit.>.MCTS is an iterative algorithm that seeks solutions in a tree space too large to be exhaustively searched. It is applicable to text generation because the search space created during decoding corresponds to a tree: the prompt is the root and the children of a node are its parents' sequence with one additional token. MCTS loop is composed of four steps: selection, expansion, simulation and back-propagation.* Selection An exploration from the root of the tree to an unexplored leaf. The path to the leaf is defined by selecting, at each node, the children that maximize the Polynomial Upper Confidence Trees (PUCT) <cit.>, which is, for a node i: PUCT(i) = s_i/n_i + c_puctp(x_i | x_1:t-1)√(N_i)/1+n_i with n_i the number of simulations played after the node i, s_i its aggregated score, N_i the number of simulations played after its parent, and c_puct a constant defining the compromise between exploitation (focusing on nodes with already good scores) and exploration (exploring promising nodes). * Expansion. The creation of the selected node children if it is not terminal (i.e., corresponding to the end-of-sequence token). * Simulation (roll-out). The sampling of additional tokens (using the LM distribution) until a terminal node. * Back-propagation. The evaluation of the sequence x associated with the terminal node and aggregation of its score to each parent until root. In order to guide the generation towards texts that belong to a given class according to a classifier, the score of the sequence x associated with a given leaf can be defined as D(c | x) given by the classifier. Different aggregation strategies can be used, such as computing the average of the actual score of the node and the terminal node one as in <cit.> or taking the maximum of the two as in <cit.>. 
This loop is repeated a given number of times (defining the compute budget) and the tree produced is then used to select the token to add for the current decoding step. It can be selected as the most played node among the root’s children nodes, or the one with the highest aggregated score. Since we are interested in generating sequences that are as stereotypical of classes of the discriminative model as possible, we choose the node with the highest score. The selected node then becomes the new root and the process is repeated until the final sequence is produced. Contrary to traditional left-to-right decoding strategies that can miss sequences that gets better after some steps or be trapped in sub-optimal sequences, MCTS breaks the myopic decoding by defining the score of a token based on possible continuations of the sequence. In addition to being plug-and-play, i.e, any type of (auto-regressive) language model can be guided during decoding by any type of classifier using MCTS, this approach exhibited state-of-the-art results in the task of constraint generation, that is, generating texts that maximize D(c | x) while maintaining a high quality of writing. We thus experiment with MCTS decoding for Therapy, but the proposed method is compatible with any cooperative generation approach.§ METHOD In this paper, we introduce Therapy, a global and model-agnostic explanation method that does not require input data. In place of these input data, Therapy employs an LM guided by the model to explain. This cooperation generates texts that are representative of the classes learned by the studied discriminative model. To do so, Therapy extracts the most important words for the classifier by employing it to steer an LM through cooperative generation. Texts generated using cooperative generation follow the distribution p(x) * D(c | x). Their distribution can thus be used to study the classifier D: words with high frequencies are likely to be important for the classifier. A logistic regression is then learned on tf-idf representations of generated samples and the weights associated with each term are returned as the explanation. An illustration of the method is proposed in Figure <ref>. Because p(x) is the same for every class, by using tf-idf on the whole corpus (i.e., samples from every class), words that are frequent because of p(x) or in multiple classes will be filtered out. Hence, the logistic regression model learned on the tf-idf score of each feature allows Therapy to study their relative importance and to extract the most important ones for each class. The method thus offers the level of explainability of n-grams based on logistic regression models to any classifier. Indeed, since any type of (auto-regressive) LM can be guided during decoding by any classifier using MCTS, the proposed approach is totally model-agnostic. We call this approach Therapy because its functioning is similar to that of a therapist. This therapist (the LM) queries its patient (the classifier) to understand its behavior and eventually discover pathologic behaviors (some biases). In essence, the method is similar to using LIME jointly with a masked LM to generate neighbors when the number of replaced tokens grows a lot but with two benefits. First, the method does not rely on input examples but creates samples out of nothing using the LM. This is useful for cases where the data cannot be shared because it contains confidential information <cit.>. 
Moreover, rather than exploring the neighborhood of these examples (and so conditioning the explanations on these examples' context), the domain of the exploration is defined by the domain of the LM, which is significantly broader. Besides, either a general LM can be used to study the model behavior on generic data or an LM specific to the downstream domain to make sure it works well on this specific type of data. Second, the method does not generate before classifying the text but employs the classifier during the generation. Hence, instead of "randomly" generating texts and hoping for important features to appear, we explicitly query the model for stereotypic features by maximizing D(c | x). This makes the method more efficient and reduces the probability of generating rare features that are not important for the model while reducing the odds of generating "in the middle" texts containing features from various classes that are misleading. Besides, our method directly relies on the distribution learned by the studied model to guide the generation, unlike methods like Polyjuice and GYC, which, in addition to requiring input data, count on a distribution learned by the LM to bias the generation towards the desired property (using control codes).Finally, Therapy is distinctive from methods analyzing the frequency of input terms in the training data such as sensitivity analysis since it does not require access to (training) data and directly exploits the distribution effectively learned by the model, whereas nothing guarantees that a model is actually using the terms extracted from training data to make a prediction. Furthermore, our method differs from existing example-based and feature attribution methods since to the best of our knowledge, there exists no global and model-agnostic explanation methods that do not require any input data. § EXPERIMENTS In this section, we first give technical details on the experiments conducted to evaluate Therapy (Section <ref>). We then evaluate Therapy through three experiments. The first one (Section <ref>), measures the Spearman correlation of the explanations and the weights of a glass box and studies the influence of the number of generated texts on the quality of the explanation returned by the linear model. We then compare the capacity of the method to correctly identify the most important words of the glass box to the one of LIME and SHAP using precision/recall curves in Section <ref>. Finally, we test whether the terms returned by the different approaches are sufficient to modify the prediction of the classifier in Section <ref>. The code of Therapy and our experiments will be made available upon acceptance.§.§ Experimental setup Glass-box explanation Since there are no ground truth explanations available to be used as a goal for evaluated methods, we use a glass-box model, that is, a model explainable by design but used as a black box (i.e., without being able to use its inner workings to generate explanations). Following prior work <cit.>, we train a logistic regression using https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.htmlsklearn <cit.> and use its weights as tokens importance scores.Therapy implementation To evaluate the proposed method, we use the available implementation of https://github.com/NohTow/PPL-MCTSPPL-MCTS <cit.> and simply plug the glass-box by defining the function that takes a sequence and returns its score. 
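For illustration only (the actual experiments rely on the released PPL-MCTS implementation, whose exact hook signature may differ), the two Therapy-side ingredients can be sketched as follows: a scoring function returning D(c | x) for candidate sequences, and the surrogate logistic regression fitted on tf-idf representations of the generated texts. The variables train_texts, train_labels, generated_texts and generated_labels are assumptions of the sketch, not names from the released code.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

# Glass-box to explain: a tf-idf + logistic regression classifier trained beforehand.
glass_vec = TfidfVectorizer().fit(train_texts)
glass_box = LogisticRegression(max_iter=1000).fit(glass_vec.transform(train_texts), train_labels)

def score_sequences(texts, target_class):
    # D(c | x): the quantity the guided decoding maximizes for the target class.
    probs = glass_box.predict_proba(glass_vec.transform(texts))
    return probs[:, list(glass_box.classes_).index(target_class)]

# After generation, fit the surrogate on the cooperatively generated corpus and
# read the explanation off its weights.
surrogate_vec = TfidfVectorizer()
X = surrogate_vec.fit_transform(generated_texts)
surrogate = LogisticRegression(max_iter=1000).fit(X, generated_labels)
vocab = np.array(surrogate_vec.get_feature_names_out())
# In the binary case sklearn stores a single coefficient row (for the second class),
# so the first class gets the negated weights.
coefs = surrogate.coef_ if len(surrogate.classes_) > 2 else np.vstack([-surrogate.coef_[0], surrogate.coef_[0]])
top_words = {c: vocab[np.argsort(w)[::-1][:20]].tolist() for c, w in zip(surrogate.classes_, coefs)}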
The choice of the LM to guide defines the domain on which we want to explain the behavior of the model. Thus, it is best to choose a language model that is as close as possible to the domain on which the discriminator will be used. However, to show that the proposed approach works well even on a general domain, we use https://huggingface.co/facebook/opt-125mOPT-125m <cit.>. A logistic regression is then learned on the generated texts and its scores are used as token importance. Datasets Experiments are conducted on two different classification datasets from <cit.>. The first one, https://huggingface.co/datasets/amazon_polarityamazon_polarity, is a binary classification dataset of Amazon reviews labelled as positive or negative. The reviews are rather short and have highly caricatural lexical fields. The second one, https://huggingface.co/datasets/ag_newsag_news, is a thematic classification dataset with 4 classes (world, sport, business and sci/tech). Texts in this dataset are longer and more diverse but include distinctive indicators because they are extracted from online news articles. Samples generated by Therapy along with top words returned by the method for each class of both datasets are given in Appendix <ref>. Compared methods In our experiments, we compare the results of Therapy to the two most widely used post-hoc methods: https://github.com/marcotcr/limeLIME <cit.> and https://shap.readthedocs.io/en/latest/SHAP <cit.>. We employed publicly available implementations of these traditional methods instead of their extensions mentioned in Section <ref>. This decision was made because, to the best of our knowledge, these extensions either do not prioritize the generation of global explanations or do not enhance the textual versions of these methods. The main difference between LIME and SHAP is that the former generates samples by modifying input data and then learns a linear regression model, whereas the latter relies on game theory to compute the weight of each term. We use the global version of these methods on 500 texts of each dataset's test set. For SHAP, we keep the 10 000 most important words for each dataset whereas, for LIME, we computed 500 local explanations with the 35 most important words and merged every term-weight pair into dictionaries of length 4592 for amazon_polarity and 5770 for ag_news. Finally, to highlight the benefits of cooperative generation in Therapy, we also report the results obtained by a simple baseline. Rather than using cooperatively generated texts to train the logistic regression, the baseline generates texts without constraining the LM and uses the glass-box after the generation is done to get the target labels. §.§ Spearman correlation A good explanation of the glass box is a list of features that both contains its important features (i.e., has good coverage) and links them to a similar relative weight. Hence, we compute the Spearman correlation between the top words of the glass box (having a weight >1) and the scores attributed to them by the explainer. We selected Spearman correlation over Pearson because the scores returned by LIME and SHAP can be very different from logistic regression weights, and so rank correlation results in a fairer comparison. §.§.§ Influence of the number of generated texts One critical parameter of the proposed method is the number of texts to generate, since more tokens allow a larger coverage but require more computation. We report the Spearman correlation against the number of generated texts per class in Figure <ref>.
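The correlation itself is a one-liner once the two weightings are in hand. In the sketch below (ours), glass_box_weights and explainer_scores are word-to-score dictionaries, and words the explainer never returns are given a score of 0, which is an assumption of the sketch rather than a detail stated here.

from scipy.stats import spearmanr

def explanation_correlation(glass_box_weights, explainer_scores, weight_threshold=1.0):
    # Top words of the glass box (weight > threshold) vs. the scores the explainer gives them.
    top_words = [w for w, v in glass_box_weights.items() if v > weight_threshold]
    reference = [glass_box_weights[w] for w in top_words]
    predicted = [explainer_scores.get(w, 0.0) for w in top_words]
    rho, _ = spearmanr(reference, predicted)
    return rho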
We observe that the correlation quickly rises until plateauing, meaning that only a small amount of text offers a great overview of the model behavior and that the method does not require a lot of computing to perform. We thus fixed the number of generated texts for Therapy to 3000 for each class for the rest of our experiments.§.§.§ Importance of the classifier guidanceCooperative generation allows Therapy to guide the LM during the decoding process and to move away from its distribution toward that of the model studied. To study the importance of this guidance, we report, in addition to the baseline, the results obtained when selecting the most played token during MCTS generation. As mentioned in Section <ref>, the token added to the current context can be selected as the most played node or the one obtaining the highest score. Selecting the highest-scored node generates texts that are the most stereotypical of the studied model, while the most played node is closer to the LM a priori. Results reported in Table <ref> show that both the baseline and using the most played node exhibit competitive results on amazon_polarity but struggle more on ag_news. This can be explained by the fact that the LM tends to not generate positive and negative terms at the same time, so the classes are clearly defined even in unconstrained samples. On ag_news, however, there is more overlap between classes, and so using cooperative generation helps to generate texts that are more distinctive of a given class. These results both highlight the contribution of the cooperative generation and motivate the token selection method. §.§.§ Comparison with other methodsThe Spearman correlations of all the evaluated approaches can be found in Table <ref>. Results yielded by Therapy are better than those of LIME on ag_news but worse on amazon_polarity whereas SHAP yields better results than both methods on both datasets. Counterintuitively, these are positive results for Therapy because other methods have access to the test set of the studied dataset, ensuring that the target features are found in the input examples. To test the performance when this assumption no longer holds, we resort to two variants of LIME and SHAP, denoted by -other. The key distinction between these methods lies in the dataset employed as input data. We use amazon_polarity texts as input to find features in ag_news and vice-versa. The findings from these experiments reveal that existing methods fail to find important features, leading to a significant drop in correlations, substantially lower than those of Therapy. §.§ Precision RecallBesides assigning correct scores to important features of the model, we also want to make sure that Therapy gives an informative output in practice. That is, making sure that most features returned by the explainer (i.e., its highest-scored features) are indeed important features of the original model and that most of its important features are found. Thus, we report precision/recall curves averaged over every class in Figure <ref>. Precision is obtained by computing, for different numbers of words returned, the proportion that is in the most important features of the original model. Conversely, recall is the proportion of the original model's top words retrieved. The number of words returned ranges from 10 to 1500.Therapy yields worse results than LIME (although achieving better recall on ag_news) and SHAP on both datasets. 
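For completeness, the quantities plotted in these curves can be written down directly; the sketch below (ours) computes precision and recall for a given number k of returned words.

def precision_recall_at_k(explainer_ranking, glass_box_top_words, k):
    # explainer_ranking: words sorted by decreasing importance according to the explainer;
    # glass_box_top_words: the most important features of the original (glass-box) model.
    retrieved = set(explainer_ranking[:k])
    hits = len(retrieved & set(glass_box_top_words))
    return hits / k, hits / len(glass_box_top_words)

# Sweeping k from 10 to 1500, as in the experiments, traces one precision/recall curve:
# curve = [precision_recall_at_k(ranking, top_words, k) for k in range(10, 1501, 10)]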
Again, when the input data does not necessarily contain the important features for the model (-other), the results collapse and Therapy outperforms both approaches. This limitation is visible by the plateau in recall scores for these methods: they indeed find the important features present in the data, but are limited to these only, setting the upper limit of features that can be found. In practice, biases contained in the model can be subtle enough not to be present in the available data, in which case LIME and SHAP will not be able to detect it. Therapy, on the other hand, obtains good results while using the same generic LM for both datasets, without using any a priori. The method thus provides a very good overview of the model's behavior when no data, or more broadly, when no data representative of the important features of the model is available. In the latter case, Therapy offers a broader search than the one based on existing texts, offering higher recalls. Again, the baseline is competitive against Therapy on amazon_polarity but is significantly worse on ag_news. This illustrates that the cooperative generation allows Therapy to better highlight distinct classes when they are more mixed in the LM.§.§ Insertion/deletion of keywords A strategy to validate the correctness of the explanation is to remove the features that the explanation method found important and see how the prediction of the model evolves. The intuition behind deletion is that removing the “cause” will force the model to change its decision <cit.>. Similarly, adding a word returned by the explanation as important for another class should lower the confidence of the model. Thus, we compute an insertion/deletion metric that measures the proportion of texts whose glass-box decision changes when a word listed as important for the original class is removed and replaced by an important word from another class. Figure <ref> shows the results on both datasets for Therapy, the baseline method, LIME, SHAP, and their version using the other dataset as input (-other) on 1000 texts. Replacements are done by iterating over the list of the top 250 words returned by each method for the original class until the decision of the model changes. Replacement can only occur if the word is present within the text and multiple replacements of the same word in a given text are counted as multiple replacements. This explains why each method has a different maximum number of words replaced. Methods that leverage generative models seem to achieve more replacements. We hypothesize that this is because they are designed to globally explain the model on the input domain, unlike local methods that can return words that are specific to a given input and not generalize well.We observe that Therapy achieves very similar results to those of LIME and SHAP on amazon_polarity but significantly worse than both on ag_news. However, when compared to the -other versions, Therapy achieves very convincing results showing once again that these methods require very specific data while Therapy is able to find important words without accessing any data nor using any a priori on the model.In this experiment as well, Therapy outperforms the baseline on both datasets, although the difference is more noticeable on ag_news. § CONCLUSIONUsual explainability methods heavily rely on input data, which is not necessarily available and might not contain model biases or important features. 
We propose Therapy, a method that leverages cooperative textual generation to create synthetic data that follow the studied model distribution. Thus, the search is driven by a pre-trained LM rather than input samples. The pre-trained LM allows a broader exploration than being restricted to input data neighborhood, relaxing most of the constraints and a priori induced by examples-driven methods. In the extreme case where extremely representative data (such as the test set of a given dataset) of important features of the model is available, Therapy lacks a bit behind state-of-the-art SHAP while being competitive. However, when considering more realistic cases where we do not explicitly give the important features to the explainer or do not have any available data, its performances are very good whereas the other methods are collapsing when even applicable. Comparisons with a generate-then-classify baseline highlight the benefits of the cooperative generation when the LM does not generate texts that are representative of a single specific class by itself.Therefore, Therapy is a useful tool to explore the model behavior on a large domain when collecting data that exactly match the downstream distribution is not feasible.Finally, we opposed the proposed approach to LIME and SHAP to highlight the interest of generating representative texts using cooperative generation when input data is lacking. However, an interesting avenue of research would be to use these established explainability methods on cooperatively generated texts, replacing the proposed logistic regression on the tf-idf representations. This potential combination might allow to leverage their performance while alleviating the input data dependency. acl_natbib§ QUALITATIVE RESULTS In this appendix, we provide samples generated by Therapy as well as the first 20 top words returned by the method for the different classes of both datasets. Please note that some "words" correspond to sub-words, due to the breakdown into unigrams (ve, ll, ...). The proposed approach allows Therapy to study the impact of n-grams, but this is not possible with LIME and SHAP (using available code), so we restricted the study to unigrams. §.§ amazon_polarity, "positive" classSamples: ** Top-words: great, love, good, ve, years, people, lot, friends, fun, life, world, works, easy, things, happy, heard, including, awesome, nice, family §.§ amazon_polarity, "negative" class Samples: * *Top-words: don, money, bad, doesn, didn, idea, work, device, isn, thing, guess, wrong, back, buy, fact, time, phone, point, problem, thought §.§ ag_news, "world" classSamples: * * Top-words: people, man, country, city, party, killed, family, agree, wrong, general, children, sex, president, police, working, military, dead, missing, woman, days §.§ ag_news, "sport" classSamples:* *Top-words: time, game, back, season, play, didn, team, guy, field, night, games, left, 12, title, won, saturday, playing, great, day, wasn §.§ ag_news, "business" classSamples: * *Top-words: money, buy, care, doesn, things, deal, pay, worth, business, car, biggest, interested, month, trade, don, compagny, happened, store, kind, price§.§ ag_news, "sci/tech" classSamples: * * Top-words: ve, ll, idea, phone, internet, make, system, video, online, life, understand, version, pc, found, 13, thing, computer, lot, hard, issue, people, work, information, future | http://arxiv.org/abs/2310.18063v1 | {
"authors": [
"Antoine Chaffin",
"Julien Delaunay"
],
"categories": [
"cs.CL",
"cs.LG",
"I.2.7"
],
"primary_category": "cs.CL",
"published": "20231027112627",
"title": "\"Honey, Tell Me What's Wrong\", Global Explanation of Textual Discriminative Models through Cooperative Generation"
} |
A. Jaries (corresponding author, [email protected]), M. Stryjczyk (corresponding author, [email protected]), A. Kankainen (corresponding author, [email protected]), T. Eronen, Z. Ge, M. Mougeot, A. Raggio, J. Ruotsalainen — University of Jyvaskyla, Department of Physics, Accelerator Laboratory, P.O. Box 35 (YFL), FI-40014 University of Jyvaskyla, Finland. We report on the precise mass measurements of the ^91Sr and ^95Y isotopes performed using the JYFLTRAP double Penning trap mass spectrometer. The mass-excess values from this work, ME(^91Sr) = -83645.5(13) keV and ME(^95Y) = -81226.4(10) keV, deviate by 6.5(52) keV and -18(7) keV from the Atomic Mass Evaluation 2020 (AME20). In the case of ^91Sr the new result disagrees with the ISOLTRAP value, while for ^95Y, it agrees with the older JYFLTRAP value. Keywords: Binding energies and masses; Mass spectrometers; Penning trap. § INTRODUCTION The mass is one of the most fundamental properties of a nucleus as it is a reflection of all the interactions between the constituent nucleons. In addition, changes in mass trends along an isotopic or isotonic chain can reveal information about the structure of the ground states <cit.>. Masses are necessary for accessing other experimental information, such as the determination of log(ft) values in β-decay spectroscopy <cit.> or differences in charge radii in laser spectroscopy <cit.>. They also have an influence on astrophysical calculations, for instance the r-process abundance predictions <cit.>. Because of the influence of masses on nuclear physics, a review of available data, the Atomic Mass Evaluation (AME) <cit.>, is prepared periodically, most recently in 2020, where all pieces of information which can be used for mass determination are summarized and critically evaluated. The AME also points to discrepancies in the literature and, if needed, rejects data points deemed unreliable. There are several experimental approaches which enable the extraction of atomic masses, see Ref. <cit.> for an overview. The method which provides the best accuracy and resolving power is Penning-trap mass spectrometry <cit.>. With the recent advent of the phase-imaging ion-cyclotron-resonance (PI-ICR) technique <cit.>, the improvement in resolving power is such that states as close as 10 keV can now be separated <cit.>. Because of its high reliability, the rejection of the Penning-trap measurements in favor of results from other experimental techniques is not common. Nevertheless, it is the case for ^91Sr and ^95Y <cit.>. While their masses were measured with Penning traps using the Time-of-Flight Ion Cyclotron Resonance (TOF-ICR) technique <cit.> at ISOLTRAP <cit.> and JYFLTRAP <cit.>, respectively, the reported values were rejected from the AME and the results were labeled as 'Well-documented data, or data from regular reviewed journals, which disagree with other well-documented values.' <cit.>. Currently, the mass of ^91Sr is determined exclusively from decay measurements <cit.>, mostly the β decay of ^91Sr to ^91Y (81%) <cit.>, with the remaining 12% from the β and β-delayed-neutron studies of ^91,92Rb <cit.>. At the same time, the mass of ^95Y is extracted about 88% from β decays <cit.> (56% from ^95Y→^95Zr <cit.>, 32% from ^95Sr→^95Y <cit.>) and 12% from the ^96Zr(t,α)^95Y transfer reaction <cit.>.
While β-decay studies are known to be unreliable, especially for nuclei far from stability where measurements can suffer from the pandemonium effect <cit.>, for ^91Sr and ^95Y there are several measurements agreeing with each other but differing from the Penning-trap values by 3.0 and 1.8 standard deviations (σ), respectively <cit.>. The neutron-rich nuclei in the A=90 region are abundantly produced in fission and, as a result, they contribute to the decay heat generated in nuclear reactors <cit.>. In addition, three reactions, ^88Kr(α,n)^91Sr, ^91Sr(α,n)^94Zr and ^95Y(α,n)^98Nb, were identified to play an important role in the production of lighter heavy elements between Sr and Ag in neutrino-driven, neutron-rich ejecta of core-collapse supernovae <cit.>. Thus, it is important to have reliable mass values for the nuclei of interest. To resolve the discrepancy in existing literature, in this work we report on the results of the ^91Sr and ^95Ymass measurements performed using the JYFLTRAP double Penning trap mass spectrometer <cit.> at the Ion Guide Isotope Separator On-Line (IGISOL) facility <cit.> in the JYFL Accelerator Laboratory at the University of Jyväskylä, Finland.§ EXPERIMENTAL METHOD AND RESULTS The radioactive species were produced in a proton-induced fission of a 15 mg/cm^2 thick ^natU target by impinging a 25-MeV primary proton beam delivered by the K130 cyclotron. The primary beam current was about 1 μA to produce ^91Sr and about 5 μA for ^95Y. The fission products were first stopped in a gas cell filled with helium gas at a pressure of about 280 mbar from which they were extracted following gas flow and guided to the high-vacuum region of the mass separator using a sextupole ion guide <cit.>. Subsequently, the ions were accelerated to 30q kV energy. The beam was separated with respect to their mass-over-charge ratio by a 55^∘ dipole magnet with a mass resolving power of m/Δ m ≈ 500 and injected into a radio-frequency quadrupole cooler-buncher <cit.>. From there, the cooled and bunched radioactive ion beam was finally delivered to the JYFLTRAP double Penning trap <cit.>. At JYFLTRAP, the ions were first cooled, purified and centered in the first (preparation) trap by using a mass-selective buffer-gas cooling technique <cit.>. A mass resolving power of m/Δ m > 10^4 was reached which enabled removal of the vast majority of isobaric contaminants. They were subsequently transferred to the second (measurement) trap through a 1.5-mm diameter diaphragm and, after about 600 μs, the purified ions of interest were transferred back to the purifying trap for additional cooling. Finally, the singly-charged ions of interest were sent to the measurement trap, where their cyclotron frequency ν_c = qB/(2 π m) in the magnetic field B was determined using the PI-ICR technique <cit.>. In this technique, the cyclotron frequency of an ion is obtained from the angular difference α_c = α_+ - α_- between the projections of the cyclotron (α_+) and magnetron (α_-) radial in-trap motion images, see Fig. <ref>. They are measured on the position-sensitive detector with respect to the trap center during a phase accumulation time t_acc. In the present case, t_acc was set to 627.4 ms for ^91Sr while for ^95Y two measurements were performed, with 694 and 713 ms accumulation times. The magnetic field strength B was determined using ^85Rb reference ions (mass excess ME_lit. = -82167.341(5) keV <cit.>) produced by an offline surface ion source <cit.>. 
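As an aside for readers unfamiliar with PI-ICR, the conversion from the measured projection angles to a cyclotron frequency can be sketched as below; the relation used is the commonly quoted one and is stated here as an assumption, since the analysis chain of this work is not reproduced in the text.

import numpy as np

def cyclotron_frequency(alpha_plus, alpha_minus, n_rev, t_acc):
    # alpha_c = alpha_+ - alpha_- is the angle between the cyclotron and magnetron
    # projections accumulated during t_acc; n_rev is the (known) integer number of
    # full revolutions performed in that time.  Returns nu_c in Hz for t_acc in seconds.
    alpha_c = (alpha_plus - alpha_minus) % (2 * np.pi)
    return (alpha_c + 2 * np.pi * n_rev) / (2 * np.pi * t_acc)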
The atomic mass M is determined from the cyclotron frequency ratio r=ν_c,ref/ν_c of the reference ion and the ion of interest: M = (M_ref - m_e) r + m_e, where M_ref and m_e are the atomic mass of the reference ion and the electron mass, respectively. As the binding energy of the missing electron is of the order of a few eV, its contribution was neglected. The measurements of the ion of interest and the reference ion were done alternately to reduce systematic effects due to magnetic field fluctuations. To assess the ion-ion interactions in the measurement trap, the count-rate class analysis <cit.> was performed for ^95Y; however, no significant dependency was observed. For ^91Sr this analysis was not statistically feasible, thus the data were limited to one detected ion per bunch. A mass-dependent uncertainty of δ r/r = -2.35(81) × 10^-10 / u× (M_ref - M) and a residual systematic uncertainty of δ r/r=9× 10^-9 were added to the cyclotron frequency ratio <cit.>. In addition, the systematic uncertainties due to the temporal magnetic field fluctuation (δ B/B = 2.01(25) × 10^-12 min^-1×δ t, where δ t is the time between the measurements), the magnetron phase advancement and the angle error were also included in the uncertainty estimation <cit.>. A summary of experimental results as well as a comparison with AME20 <cit.> and the rejected Penning-trap values <cit.> is presented in Table <ref>. We note that the latter were recalculated using the reported frequency ratios r and the masses of the reference isotopes from AME20 <cit.>. The mass-excess value of ^91Sr from this work, ME = -83645.5(13) keV, differs by 6.5(52) keV (1.3σ) from the AME20 value <cit.> but it is four times more precise. At the same time it is -25(9) keV (2.7σ) away from the ISOLTRAP result <cit.>, indicating that the decision of the AME20 evaluators to exclude this data point was correct. The exact reason why this value is incorrect remains unknown. However, we note that Ref. <cit.> is one of the earlier publications from ISOLTRAP. At the time, the systematic effects had not yet been studied in detail; they were published later in Ref. <cit.>. In addition, the preparation trap consisted of a 0.7 T electromagnet and its limited resolving power could have led to the presence of ^91Rb contamination. This hypothesis is supported by the fact that the ISOLTRAP value is shifted towards heavier masses. The updated Q_β n(^92Rb) value from this work, 802(6) keV, is closer to 785(15) keV reported in Ref. <cit.> compared to 808(8) keV from AME20 <cit.>. The new Q_β(^91Rb) value, 5901(8) keV, is larger than the three results from the β-decay studies taken into account in AME20: 5857(8) keV from Ref. <cit.>, 5850(20) keV from Ref. <cit.> and 5860(10) keV from Ref. <cit.>, see Fig. <ref>a. However, these data points were adjusted by the evaluators <cit.>, as indicated with blue bars in Fig. <ref>a, to include the fact that the 94-keV state in ^91Sr is fed significantly more strongly than the ground state in the β decay of ^91Rb <cit.>. A deviation between the results from this work and the AME20-adjusted values might be related to the fact that the β feedings to the two low-lying states are actually similar, as indicated by the recent total absorption spectroscopy study <cit.>. We note that the Q_β(^91Rb) value from Ref. <cit.>, 5760(40) keV, was not included in the AME20 evaluation due to a large uncertainty.
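To illustrate the conversion used above, the sketch below implements M = (M_ref - m_e) r + m_e and the corresponding mass excess; the physical constants are rounded CODATA-level values, and the frequency ratio in the example is back-computed from the quoted mass excesses purely as a round-trip check, not a value reported in this work.

U_KEV = 931494.102        # 1 u in keV/c^2 (approximate)
M_E_U = 0.000548579909    # electron mass in u (approximate)

def mass_excess_from_ratio(r, me_ref_kev, a_ref, a_ion):
    # M = (M_ref - m_e) r + m_e, with masses in u; returns the mass excess in keV.
    m_ref = a_ref + me_ref_kev / U_KEV
    m_ion = (m_ref - M_E_U) * r + M_E_U
    return (m_ion - a_ion) * U_KEV

def ratio_from_mass_excess(me_kev, me_ref_kev, a_ref, a_ion):
    # Inverse relation, used here only to build an illustrative ratio.
    m_ref = a_ref + me_ref_kev / U_KEV
    m_ion = a_ion + me_kev / U_KEV
    return (m_ion - M_E_U) / (m_ref - M_E_U)

r_91sr = ratio_from_mass_excess(-83645.5, -82167.341, 85, 91)   # 91Sr measured against 85Rb
print(mass_excess_from_ratio(r_91sr, -82167.341, 85, 91))       # ~ -83645.5 keV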
The updated Q_β(^91Sr) value, 2705.8(22) keV, agrees very well with the results of two β-decay studies, 2705(5) keV reported in Ref. <cit.> and 2709(15) keV from Ref. <cit.>. However, it disagrees with 2684(4) keV and 2665(10) keV from Refs. <cit.> by 4.8 and 4.0σ, respectively, see Fig. <ref>a. We note that the uncertainty of the value reported in Ref. <cit.> was increased by the AME20 evaluators from 4 to 10 keV <cit.> while the result from Ref. <cit.> was rejected.The mass-excess value of ^95Y from this work, ME = -81226.4(10) keV differs by -18(7) keV (2.6σ) from AME20 <cit.>. However, it is in a perfect agreement with the previous JYFLTRAP measurement performed using the TOF-ICR technique and ^97Zr as a reference nucleus <cit.> and it is four times more precise. During the measurement a second cyclotron spot was observed, see Fig. <ref>. The extracted mass-excess value, ME = -80804.9(12) keV, allowed us to identify it as an isobaric molecular contamination of ^79Br^16O (ME_lit. = -80805.1(10) keV <cit.>).The updated Q_β(^95Sr) value, 6109(6) keV, is 27 keV (2.3σ) larger than 6082(10) keV reported in Ref. <cit.>. By discarding it, the consistency in the region can be restored. This is justified since this result 'should be considered preliminary, until the detector response is evaluated in detail', as it is stated in Ref. <cit.>. We note that our Q_β(^95Sr) value differs by more than 2σ from 6052(25) keV reported in Ref. <cit.>, which was not taken into account in AME20 due to a large uncertainty <cit.>. The Q_β(^95Y) value from Ref. <cit.> (4445(9) keV) and the Q value of the ^96Zr(t,α) reaction from Ref. <cit.> (8294(20) keV) agree relatively well with the updated results from this work (4433.5(13) keV and 8312.4(10) keV, respectively), as can be seen in Fig. <ref>b. § CONCLUSIONS Masses of ^91Sr and ^95Y were measured using the JYFLTRAP double Penning trap. The extracted mass-excess value of ^91Sr agrees with AME20 but it does not match the previous ISOLTRAP measurement. For ^95Y the mass-excess value differs by -18(7) keV from AME20 but it perfectly agrees with the previous JYFLTRAP measurement. Our study shows an importance of critical mass evaluation and cross checks of different experimental results.§ ACKNOWLEDGMENTS This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreements No. 771036 (ERC CoG MAIDEN) and No. 861198–LISA–H2020-MSCA-ITN-2019, from the European Union’s Horizon Europe Research and Innovation Programme under Grant Agreement No. 101057511 (EURO-LABS) and from the Academy of Finland projects No. 295207, 306980, 327629, 354589 and 354968. J.R. acknowledges financial support from the Vilho, Yrjö and Kalle Väisälä Foundation. | http://arxiv.org/abs/2310.18065v2 | {
"authors": [
"A. Jaries",
"M. Stryjczyk",
"A. Kankainen",
"T. Eronen",
"Z. Ge",
"M. Mougeot",
"A. Raggio",
"J. Ruotsalainen"
],
"categories": [
"nucl-ex"
],
"primary_category": "nucl-ex",
"published": "20231027112743",
"title": "Reinvestigation of $^{91}$Sr and $^{95}$Y atomic masses using the JYFLTRAP Penning trap"
} |
Asymmetric Geometry of Total Grassmannians André L. G. Mandolesi Instituto de Matemática e Estatística, Universidade Federal da Bahia, Av. Milton Santos s/n, 40170-110, Salvador - BA, Brazil. January 14, 2024 =============================================================================================================================================================================== Metrics in Grassmannians, or distances between subspaces of the same dimension, have many applications. However, usual extensions to the Total Grassmannian of subspaces of different dimensions lack useful properties or give little information. Dimensional asymmetries call for the use of asymmetric metrics, which arise naturally in this space. Their geometry reflects containment relations of subspaces, and minimal geodesics link subspaces of distinct dimensions. In particular, the Fubini-Study metric extends as an asymmetric angle with useful properties, that can be computed in arbitrary bases or via Grassmann algebra. It has a nice geometric interpretation, as does its sine, an asymmetric Binet-Cauchy metric. Keywords: Grassmannian, Grassmann manifold, asymmetric metric, distance between subspaces, angle between subspaces, Fubini-Study. MSC 2020: Primary 14M15; Secondary 15A75, 51K99 § INTRODUCTION Various metrics in Grassmannians, sets of subspaces of a given dimension <cit.>, are used to measure the separation of such subspaces <cit.>: geodesic, gap, projection Frobenius, Fubini-Study, etc. They are important in geometry, linear algebra, functional analysis, and applications where subspaces are used to represent data: wireless communication <cit.>, coding theory <cit.>, machine learning <cit.>, computer vision <cit.>, etc. Some problems <cit.> require the Total Grassmannian of subspaces of different dimensions, but distances used in it have various shortcomings. Distances <cit.> from the smaller subspace to its projection on the larger one fail the triangle inequality (take two lines and their plane). When dimensions differ the gap <cit.> is always 1, not giving any information, and the symmetric distance <cit.> is at least 1. It can be best to have d(V,W)=0 if V⊂ W <cit.>, which usual metrics do not allow. The containment gap <cit.> satisfies this, but gives little information (only the largest principal angle). Other metrics in <cit.> have similar issues. Any (symmetric) metric in the Total Grassmannian is bound to have problems, as subspaces of different dimensions have inherently asymmetric relations (a plane contains lines, not vice versa; areas projected on a line vanish, lengths projected on a plane tend not to). And though its usual topology, as a disjoint union of Grassmannians, arises naturally in some models, it is sometimes inadequate, separating a plane from its lines, for example. As it is disconnected, processes with dimension changes (data compression) are discontinuous.
A better topology should reflect the idea that a small region around a line contains no planes, but near a plane we have lines; and a line can move continuously into a given plane (or the plane move to contain the line), not vice versa. The symmetry d(x,y)=d(y,x) of metrics has long been recognized as an overly restrictive simplifying assumption, as often the path, time or cost to go from x to y is not the same as from y to x: one-way streets, rush-hour traffic, uphill or downhill, etc. The separation condition d(x,y)=0 ⇔ x=y is also too strong sometimes (for subspaces). The triangle inequality, d(x,z) ≤ d(x,y)+d(y,z), is what matters most: without it open balls do not generate a topology; and if d(x,z) measures an optimal way to go from x to z, it cannot be worse than going through y. Some generalizations use alternative inequalities. Asymmetric metrics or quasi-metrics or quasi-distances or T_0-quasi-pseudometrics <cit.> occur in topology <cit.>, Finsler geometry <cit.>, category theory <cit.>, computer science <cit.>, graph theory <cit.>, biology <cit.>, etc. Richer than metrics, they carry data in d(x,y) and d(y,x), and distances to or from a point split usual concepts into backward and forward ones, a common duality in asymmetric structures <cit.>. A weaker separation condition lets them generalize partial orders ≤, with d(x,y) measuring the failure of x ≤ y. They induce two non-Hausdorff topologies, linked to ≤ and ≥, which complement each other and combine into a metric topology. Many metric results have asymmetric versions, while some asymmetric ones have only trivial metric analogues <cit.>. The containment gap <cit.> is an example in the Total Grassmannian, but its asymmetry has not been explored. As we show, Grassmannian metrics extend naturally as asymmetric metrics in the Total Grassmannian. These are similar to the original metrics, and should be just as practical. They measure how far a subspace is from being contained in another (as far as possible, if the first one is larger), and give topologies reflecting the relations ⊂, ⊃, and = (the usual topology). We describe their geometry, obtain minimal geodesics connecting subspaces of any dimensions, and determine when geodesics are segments and the triangle inequality attains equality. Special attention is given to the Fubini-Study metric, used with complex spaces in wireless communication <cit.> and quantum theory <cit.>. It extends as an asymmetric angle <cit.> whose cosine (squared, in the complex case) measures volume contraction in orthogonal projections. Links to Grassmann and Clifford algebras <cit.> give useful properties <cit.> and easy ways to compute it. It has led to complex Pythagorean theorems <cit.> with interesting implications for quantum theory <cit.>. Its sine, an asymmetric Binet-Cauchy metric, also has a nice geometric interpretation. <Ref> sets up notation and reviews concepts. <Ref> obtains asymmetric metrics in the Total Grassmannian. <Ref> describes their geometries. <Ref> studies the asymmetric Fubini-Study and Binet-Cauchy metrics. <Ref> closes with some remarks. <Ref> reviews Grassmann exterior algebra. <Ref> reviews and organizes the main Grassmannian metrics.
§ PRELIMINARIES We will use ^n, for = or , with inner product ·,· (Hermitian product if =, in which case its real part Re·,· is an inner product in the underlying real space ^2n). A p-dimensional subspace is a p-subspace, or line if p=1. For a subspace V, G_p(V) = {p-subspaces of V} is a Grassmannian (empty if p exceeds the dimension of V), and G(V) = {subspaces of V} is a Total Grassmannian. We also write G_p^n = G_p(^n), G^n = G(^n), and P_V: ^n→ V is an orthogonal projection. For nonzero v,w∈^n let θ_v,w = cos^-1Rev,w/vw and θ_v,0 = π/2, and for any v∈^n let θ_0,v = 0 (note the asymmetry θ_v,0≠θ_0,v if v≠ 0). §.§ Asymmetric metrics We define asymmetric metrics as follows <cit.>. Some authors call them quasi-metrics, a term often used for a closely related concept <cit.> with a T_1 separation condition (d(x,y)=0 ⇔ x=y). An asymmetric metric is a function d:M× M→ [0,∞] on a set M satisfying, for all x,y,z∈ M: * d(x,y)=d(y,x)=0 ⇔ x=y (T_0 separation condition). * d(x,z) ≤ d(x,y)+d(y,z) (Oriented triangle inequality). The T_0 condition lets d induce a partial order by x≤ y ⇔ d(x,y)=0, and any partial order can be represented by some d. Any permutation-invariant monotone norm[This means (a,b) = (b,a), and |a'|≥ |a|, |b'| ≥ |b| ⇒(a',b')≥(a,b).]· in ^2 symmetrizes d (with some loss of information) into a metric ρ(x,y) = (d(x,y),d(y,x)). The max-symmetrized metric is d̂(x,y)=max{d(x,y),d(y,x)}. Backward, forward and symmetric topologies τ^-, τ^+, τ are generated by backward balls B^-_r(x) = {y∈ M:d(y,x)<r}, forward balls B^+_r(x) = {y∈ M:d(x,y)<r}, and symmetric balls B_r(x) = B^+_r(x) ∩ B^-_r(x) = {y∈ M:d̂(x,y)<r}. In general, τ^± are only T_0 (as we can have d(x,y)=0 for x≠ y), but they are well behaved, complementing each other and combining into τ, which is the metric topology of d̂, hence Hausdorff. Analysis in M requires some care. With T_0, limits are not unique. By <ref>, d(x_k,y) - d(x,y) ≤ d(x_k,x) and d(x,y_k) - d(x,y) ≤ d(y,y_k), so in τ^- we have that d(x,y) is upper semicontinuous in x and lower semicontinuous in y, in τ^+ it is the opposite, and in τ it is continuous. A sequence x_k is left (right) K-Cauchy if given ϵ>0 there is n ∈ such that d(x_k,x_l)<ϵ (d(x_l,x_k)<ϵ) whenever n ≤ k ≤ l. If any such sequence has d̂(x_k,x) → 0 for some x∈ M then (M,d) is left (right) Smyth complete. It is Smyth bicomplete if both. A map f:(M_1,d_1) → (M_2,d_2) of asymmetric metric spaces is an isometry if d_1(x,y) = d_2(f(x),f(y)) for all x,y∈ M_1, an anti-isometry if d_1(x,y) = d_2(f(y),f(x)). A bijective isometry gives homeomorphisms τ_1 ≃τ_2 and τ_1^±≃τ_2^± of the corresponding topologies of M_1 and M_2. A bijective anti-isometry gives τ_1 ≃τ_2 and τ_1^±≃τ_2^∓ (so, switching backward and forward topologies). We write (M) for the group of isometries of M, and (M) for that of its isometries and anti-isometries. A curve in M (for τ or τ^±) is a continuous φ:I→ M, for an interval I⊂. If I=[a,b], it is a path from φ(a) to φ(b).
Being coarser, τ^± have more curvesthan τ: in τ^-, φ is continuous at t if and only iflim_s → t d(φ(s),φ(t)) = 0; in τ^+, if lim_s → t d(φ(t),φ(s)) = 0; and τ needs both. A curve is rectifiable if it has finite length ℓ_φ = sup∑_i=1^N-1 d(φ(t_i),φ(t_i+1)), with the sup over all finite sequences t_1 < ⋯ < t_N in I. The reversed path φ(-t) can be non-rectifiable or have another length It is null if ℓ_φ = 0. constant in τ, not τ^± The reversed curve φ(-t) can be non-rectifiable or have a different length. If f is an anti-isometry, f∘φ and φ(-t) have the same length. In τ^±, the length of a restricted curve φ|_[a,t] can vary discontinuously with t. Mennucci2014 uses run-continuous paths (in τ, so they are continuous). In τ, not τ^±, a rectifiable continuous path is run-continuous. The infimum of lengths of paths from x to y gives a (topology dependent) intrinsic asymmetric metric D(x,y), D≥ d, and D(x,y) = ∞ if there is no rect path x to y and d is intrinsic if D=d. and M is a length space A path φ from x to y is a minimal geodesic if ℓ_φ = D(x,y), and a segment if ℓ_φ = d(x,y). Busemann1944 M is a geodesic space if any two points are linked by a minimal geodesic. not necessarily unique If d(x,z) = d(x,y)+d(y,z) Some authors require y ≠ x,z, or all distinct then y is a between-point from x to z (is between them, if both ways). Jiang1996, Blumenthal1970 p. 33 for metric spaces This holds for y in a segment. d(x,z) = ℓ_φ = ℓ_φ|_x→ y + ℓ_φ|_y→ z≥ d(x,y) + d(y,z) ≥ d(x,z) ⇒ equalities M is convex if any distinct x,z∈ M have a between-point y ≠ x,z. Menger convex is stronger The asymmetric metric d(x,y) = max{0,x-y} ininduces the usual order, and measures how much x≤ y fails. The metric d̂(x,y) = |x-y| measures the failure in x=y, and loses information about which is bigger. While τ is the usual topology, τ^- is the upper semicontinuous one, and τ^+ the lower one. The anti-isometry f:→, f(x)=-x, gives a self-homeomorphism of τ (the usual reflection symmetry of ), and switches the asymmetric topologies τ^+ and τ^- (which, in a sense, split the symmetry of τ). As x_k = k (x_k=-k) is left (right) K-Cauchy, (,d) is neither left nor right Smyth complete. But (-∞,0] is left Smyth complete, as any left K-Cauchy sequence in it is Cauchy for d̂ (any sequence has d(x_k,0)=0, which explains why Smyth completeness uses d̂). Likewise, [0,∞) is right Smyth complete. The next example is a toy model for some features we will encounter in the Total Grassmannian. In M={0,1,…,n}, an asymmetric metric is given by d(p,q)=0 if p≤ q, 1 otherwise. It is Smith bicomplete, as any left or right K-Cauchy sequence eventually becomes constant. Open sets in M^-=(M,τ^-) are {0,1,…,p} for p∈ M, in M^+=(M,τ^+) they are {p,p+1,…,n}, and τ is discrete. The anti-isometry f:M → M, f(p)=n-p, switches M^- and M^+. In M^-, φ(t) = pfort ∈ [0,1), qfort ∈ [1,2], is a null minimal geodesic from p to q>p, and φ(-t) is a minimal geodesic from q to p, of length 1. In M^+ we have the same with [0,1] and (1,2]. So M^± aregeodesic spaces with d intrinsic. We can also construct minimal geodesics from p to q passing through p, p+1,…,q, whose reversed paths are not minimal geodesics. For r<p<q<s, we can construct minimal geodesics from q to p passing through q,q+1,…,s,r,r+1,…,p.§.§ Principal anglesThe principal angles <cit.> Afriat1957,Galantai2006,Golub2013of V∈ G_p^n and W∈ G_q^n, with m =min{p,q}≠ 0, are 0≤θ_1≤⋯≤θ_m≤π/2 if there are orthonormal bases (e_1,…,e_p) of V and (f_1,…,f_q) of W with e_i ⊥ f_j for i≠ j, and θ_e_i,f_i = θ_i for i ≤ m. 
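The toy model above is small enough to be checked by brute force; the sketch below (in Python, purely illustrative) verifies the T_0 condition, the oriented triangle inequality, and the fact that the induced partial order is the usual order on M.

from itertools import product

def d(p, q):
    return 0 if p <= q else 1      # toy asymmetric metric on M = {0, 1, ..., n}

n = 5
M = range(n + 1)
assert all((d(p, q) == 0 and d(q, p) == 0) == (p == q) for p, q in product(M, M))
assert all(d(p, r) <= d(p, q) + d(q, r) for p, q, r in product(M, repeat=3))
assert all((d(p, q) == 0) == (p <= q) for p, q in product(M, M))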
θ_i = θ_e_i,f_i = γ_e_i,f_i Such principal bases are formed by principal vectors, orthonormal eigenvectors of P^*P and PP^*, where P:V→ W is the orthogonal projection and P^* its adjoint. The eigenvalues of P^*P if p≤ q, or PP^* if p>q, cosθ_i = σ_i for the singular values σ_1≥⋯≥σ_m of P are the cos^2 θ_i's, and θ_i =θ_e_i,P_W e_i = min{θ_v,P_W v : 0≠ v∈ V, v ⊥ e_j ∀ j<i}= min_U∈ G_i(V)max_v ∈ Uθ_v,P_W v= max_U∈ G_p+1-i(V)min_0≠ v ∈ Uθ_v,P_W v. The number of null θ_i's is (V∩ W). Let V∈ G_p^n, W∈ G_q^n, W' ∈ G_r(W), m = min{p,q}≠ 0, m' = min{p,r}≠ 0, θ_1≤⋯≤θ_m be the principal angles of V and W, θ_1'≤⋯≤θ_m'' be those of V and W', and set θ_i = π/2 for i>m. Then θ_i ≤θ_i' ≤θ_i+q-r for all 1≤ i ≤ m'. Cauchy interlacing adapted Horn1991 Cor. 3.1.3 Also: * θ_i' = θ_i for all 1≤ i ≤ m' ⇔ W'= {f_1,…,f_r} for a principal basis (f_1,…,f_q) of W V. * θ_i' = θ_i+q-r for all 1≤ i ≤ m' ⇔ W'= {f_1+q-r,…,f_q} for a principal basis (f_1,…,f_q) of W V. (W')^⊥∩ W = {f_1,…, f_p-p'} θ_i' = min_U∈ G_i(V)max_v ∈ Uθ_v,P_W' v≥min_U∈ G_i(V)max_v ∈ Uθ_v,P_W v = θ_i, and also θ_i' = max_U∈ G_r+1-i(W')min_0≠ w ∈ Uθ_w,P_V w≤max_U∈ G_r+1-i(W)min_0≠ w ∈ Uθ_w,P_V w = θ_i+q-r if i+q-r ≤ m. Let (e_1',…,e_p') and (f_1',…,f_r') be principal bases of V and W'. (<ref>) If θ_e_i',f_i' = θ_i' = θ_i∀ i ≤ m' then as e_i' ⊥ f_j'∀ j≠ i (e_1',…,e'_m') and (f_1',…,f_r') extend to principal bases of V and W. (<ref>) Let j(i) = i+q-r and suppose θ_i' = θ_j(i) for i ≤ m'. If q≤ p then m'=r, j(m') = q = m and (θ_1',…,θ_m'') = (θ_1+q-r,…,θ_m), so f_1',…,f_r' can be used as last vectors of a principal basis of W V. If q>p, the last q-p vectors of such basis are orthogonal to V, and f_1',…,f_r' can be used as some of them if q≥ p+r, as j(i) > p = m and θ_i' = π/2 for i ≤ m', and f_i' ⊥ V for i>m'. If p<q<p+r then 1≤ p+r-q ≤ m' p-q<0, r-q≤ 0 and j(p+r-q) = m, so (θ_1',…,θ_p+r-q') = (θ_1+q-r,…,θ_m) and θ_i' = π/2 for p+r-q<i≤ m', if any and again we obtain the result. §.§ GrassmanniansThe Grassmannian G_p^n is a connected compact manifold of dimension p(n-p) <cit.>. See <Ref> for a classification of its metrics. Curve lengths coincide for l^2 and ∧ metrics, which converge asymptotically for small θ_i's <cit.>. For them, any minimal geodesic <cit.>: d_g has triangle equality in direct rotations, d_cF, d_pF, d_c2, d_p2 only trivially from V to W is given by φ(t) = {v_1(t),…,v_p(t)}, where t∈[0,1] and v_i(t) = cos(tθ_i) e_i + sin (tθ_i) f_i - P_V f_i/f_i - P_V f_i (= e_i if θ_i=0) for principal bases (e_1,…,e_p) and (f_1,…,f_p) and principal angles θ_1≤⋯≤θ_p, so that the e_i's rotate at constant speeds towards the f_i's. Its length is the geodesic metric d_g(V,W) = √(∑_i=1^p θ_i^2). If X ⊂ (V+W)^⊥, φ(t) ⊕ X is a minimal geodesic from V⊕ X to W⊕ X, of same length. For d_g, U is between V and W ⇔ U is in a minimal geodesic from V to W. The Total Grassmannian G^n=⋃_p=0^n G_p^n usually has the disjoint union topology (we will obtain other topologies). So it is compact Let V∈ G_p^n, W ∈ G_q^n, m=min{p,q}≠ 0, θ_1≤⋯≤θ_m be their principal angles, and (e_1,…,e_p) be a principal basis of V W.We have the following distances between V and W (using ·_F and · for Frobenius and operator norms): * The Fubini-Study metric extends viaembedding (see <Ref>)as d̂_FS = cos^-1(∏_i=1^p cosθ_i) if p=q, or π/2 if p≠ q. 
as distinct ⋀^p ^n's are ⊥ * d_pF = 1/√(2)P_V-P_W_F = √(|p-q|/2+∑_i=1^m sin^2 θ_i) and the symmetric distance <cit.> Zuccon2009,Bagherinia2011, Figueiredo2010, Sharafuddin2010 d_s = √(|p-q|+∑_i=1^m sin^2 θ_i) = max{d⃗(V,W), d⃗(W,V)} = √(max(p,q)-∑_i,je_i,f_j^2) = 1/√(2)√(|p-q| + P_V-P_W_F^2) For fixed p, q, min is √(|p-q|), if V⊂ W or W⊂ V. Max is √(max{p,q}), if V ⊥ W. extend d_pF as metrics. Karami2023 gives it with P_V = 𝐄𝐄^T, but tries to use arbitrary bases, in which case d is dist between bases, not subspaces The triangle inequality fails for ď_pF = √(∑_i=1^m sin^2 θ_i) <cit.>, 2 lines and plane; Pereira2022 and is unknown for the asymmetric directional distance <cit.> d⃗(V,W) = √(∑_i=1^p e_i-P_W e_i^2) = √(max{0,p-q} + ∑_i=1^m sin^2 θ_i). = P_W^⊥ P_V_F. Can use any orthon basis of V. min_W'∈Ω_p^-(W)d_pF(V,W')^2 if p≤ q, max_W'∈Ω_p^+(W)d_pF(V,W')^2 if p>q. Min is √(max{0,p-q}) (fixed p, q), if V⊂ W or W ⊂ V. * The containment gap <cit.> δ(U,W) = max_u=1u-P_W u≤max_u=1u-P_W P_V u≤max_u=1 (u-P_V u + P_V u-P_W P_V u) ≤δ(U,V) + max_u=1P_V u/P_V u-P_W P_V u/P_V u≤δ(U,V) + max_v=1v-P_W v = δ(U,V) + δ(V,W) extends d_p2 as an asymmetric metric δ(V,W) = max_v∈ V, v=1v-P_W v = sinθ_p if p ≤ q, or 1 if p>q. δ(V,W) =0 ⇔ V⊂ W. Max occurs when V W. The gap <cit.> Stewart1990 no, s equal dim is a metric δ̂ = max{δ(V,W),δ(W,V)} = P_V - P_W = sinθ_p if p = q, or 1 if p ≠ q. T=sup{Tx/x}. To prove let p=q so (P_V - P_W)e_i = e_i - P_W e_i≤δ(V,W) and (P_V - P_W)e_i^⊥ = P_W e_i^⊥ = sinθ_i ≤δ(V,W) Lack of a triangle inequality limits the utility of ď_pF and (maybe) d⃗. As δ and δ̂ use only θ_p, they give little information, and δ̂ and d̂_FS give none if p≠ q. As d_pF, d_s, δ̂ and d̂_FS never approach 0 when p ≠ q, subspaces of distinct dimensions are kept apart, making it harder to detect when one is almost contained in the other <cit.>. inconvenient to have this given by d_s(V,W) < √(|p-q|) + ε, specially if p or q are unknown (subspaces obtained truncating spectrum of operator) The Infinite Grassmannian G_p^∞ = ⋃_n G_p^n of p-subspaces in all ^n's, and Infinite Total Grassmannian [Called Doubly Infinite Grassmannian in <cit.>.] G^∞ = ⋃_n G^n = ⋃_p G_p^∞ of all subspaces in all ^n's, are defined using the natural inclusion G_p^n ⊂ G_p^n+1. In <cit.>, metrics d_p in G_p^∞ are extended to G^∞: for V∈ G_p^∞ and W∈ G_q^∞, with p≤ q, an ad hoc inclusion of θ_p+1 = ⋯ = θ_q = π/2 Can skip δ and include θ's in d_p, both have same formula turns min{d_p(V,U):U∈ G_p(W)} = min{d_q(Y,W):Y∈ G_q^∞, Y⊃ V} Minimal distances occur at W_p and V⊕ W_⊥, which have with V the same nonzero θ_i's as W into a metric d(V,W) = max{d_q(Y,W):Y∈ G_q^∞, Y⊃ V}. If a q-subspace can contain V and q-p extra dimensions in W^⊥, what explains use of G_q^∞ But the metrics obtained have the same problems as those above.§ ASYMMETRIC METRICS IN G^∞ AND G^N We will obtain an asymmetric metric d: G^∞× G^∞→ [0,∞) extending naturally a family of metrics d_p: G_p^∞× G_p^∞→ [0,∞) such that, for p > 0: * d_p(V,W) = f_p(θ_1,…,θ_p) for a nondecreasing function f_p of the principal angles θ_1 ≤⋯≤θ_p of V,W∈ G_p^∞; * f_q(0,…,0,θ_1,…,θ_p) = f_p(θ_1,…,θ_p) for q>p.All metrics of <Ref> satisfy these conditions. Δ_p = G_p^∞ = sup{d_p(V,W):V,W∈ G_p^∞} is a non-decreasing function of p. Pode fazer com G_p^N, N≥ 2p, mas parece menos natural As G_p^∞, p≠ 0, has orthogonal subspaces, for p<q we haveΔ_p = f_p(π/2,…,π/2) = f_q(0,…,0,π/2,…,π/2) ≤ f_q(π/2,…,π/2) = Δ_q. <Ref> has Δ_p for the metrics of <Ref>. 
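Numerically, all the quantities reviewed in this section reduce to the principal angles, which are easy to obtain from orthonormal bases; a minimal sketch (not part of the paper) is given below. For very small angles a sine-based formulation is numerically preferable, but the arccosine of the singular values suffices for illustration.

import numpy as np

def principal_angles(V, W):
    # Columns of V and W span the subspaces; cos(theta_i) are the singular values
    # of Qv^* Qw for orthonormal bases Qv, Qw (works over R or C).
    Qv, _ = np.linalg.qr(V)
    Qw, _ = np.linalg.qr(W)
    s = np.clip(np.linalg.svd(Qv.conj().T @ Qw, compute_uv=False), 0.0, 1.0)
    return np.sort(np.arccos(s))          # theta_1 <= ... <= theta_m, m = min(p, q)

# A line inside a plane: the single principal angle is 0.
plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
line = np.array([[1.0], [1.0], [0.0]])
print(principal_angles(line, plane))      # [0.]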
Note that G_p^n would decrease for p > n/2, as subspaces intersect non-trivially. A projection subspace of W∈ G_q^∞ V∈ G_p^∞ is any W' ∈ G_min{p,q}(W) such that P_W(V) ⊂ W'. If VW then W_P=P_W(V) and W_⊥ = V^⊥∩ W, otherwise W_P ⊃ P_W(V) and W_⊥⊂ V^⊥∩ W, strict inclusions if p ≤ q For p,q ≠ 0, V and W' have the same principal angles as V and W. and the same nonzero principal angles as V⊕ W_⊥ and W. Let V,W∈ G_p^∞ and V' ∈ G_r(V). If W' ∈ G_r(W) is a projection subspace of W V' then d_r(V',W') ≤ d_p(V,W). Let V and W have principal angles θ_1 ≤⋯≤θ_p, and V' and W (or W') have θ_1' ≤⋯≤θ_r'. By <Ref>, d_r(V',W') = f_r(θ_1',…,θ_r') = f_p(0,…,0,θ_1',…,θ_r') ≤ f_p(θ_1,…,θ_p) = d_p(V,W).Recall <cit.> that inf∅ = M in an ordered set 𝒪 with greatest element M(the infimum of a subset is its greatest lower bound in 𝒪, and M is a lower bound of ∅ since ∅ has no element smaller than M). An asymmetric metric in G^∞ is given by d(V,W) = inf{d_p(V,U):U∈ G_p(W)}, for V∈ G_p^∞, W∈ G_q^∞, and with inf taken in the interval [0,Δ_p]. Also, for any projection subspace W' of W V, For l^2 metrics, d(V,W) = d(V,W') for W' ∈ G_p(W) ⇔ W' is projection subspace. For ∧ andmax ones it also happens when V W d(V,W) = d(V,W') = d_p(V,W') = f_p(θ_1,…,θ_p) ≤Δ_pif0<p≤ q, Δ_potherwise, where θ_1 ≤⋯≤θ_p are the principal angles of V and W. If 0<p≤ q, d_p(V,W') = f_p(θ_1,…,θ_p) ≤ d_p(V,U)∀ U∈ G_p(W), by <Ref>, so d(V,W) = d_p(V,W'). If p=0, d(V,W) = 0 = Δ_0. If p>q then G_p(W) = ∅, so d(V,W) = inf∅ = Δ_p. W'=W if p>q As d(V,W)=0 ⇒ d_p(V,W')=0 ⇒ V=W' ⊂ W, <Ref><ref> holds. p=0 trivial, p≠ 0 has Δ_p ≠ 0 We now prove d(U,W) ≤ d(U,V) + d(V,W) for U∈ G_r^∞. If r>p then d(U,V) = Δ_r ≥ d(U,W). If r≤ p and p>q, d(U,W) ≤Δ_r ≤Δ_p = d(V,W). <ref> If r≤ p ≤ q, r=0 is trivial given projection subspaces V' of V U, and W” of W' V', V'∈ G_r(V), W'∈ G_p(W), W”∈ G_r(W') using <Ref> we find d(U,W) ≤ d_r(U,W”) ≤ d_r(U,V') + d_r(V',W”) ≤ d_r(U,V') + d_p(V,W') = d(U,V) + d(V,W). d=inf d_r d_r metric previous line<Ref> has the asymmetric metric in G^∞ (which restricts to G^n) extending each metric of <Ref>. We use the same symbol for both. Note that d_p2 is the containment gap <cit.>.The partial order induced by d is ⊂, as d(V,W)=0 ⇔ V ⊂ W, and so d(V,W) measures how far V is from being contained in W.If p>q this is never any closer to happening, and d(V,W) remains constant at its maximum Δ_p (for a given p). Such maximum is attained for asymmetric l^2 metrics when p>q or V ⊥ W. For ∧ and max ones, when p>q or θ_p = π/2, when VW, for the following asymmetric relation: V is partially orthogonal () to W if W^⊥∩ V ≠{0}. d(V',W) ≤ d(V,W) ≤ d(V,W'), for V,V',W,W'∈ G^∞ with V'⊂ V and W'⊂ W. <ref> Let p= V, q= W and r= V'. Then d(V,W) = inf{d_p(V,U):U∈ G_p(W)}≤inf{d_p(V,U):U∈ G_p(W')} = d(V,W'). G_p(W')⊂ G_p(W) If p>q, d(V',W) ≤Δ_r ≤Δ_p = d(V,W). <ref> If 0<p≤ q, for projection subspaces W' of W V, and W” of W' V', <Ref> gives d(V',W) ≤ d_r(V',W”) ≤ d_p(V,W') = d(V,W). So moving V into W is easier (d decreases) the smaller V is, or the larger W is. We obtain equality conditions for d_g and d_FS, for later use. Let V ∈ G_p^∞, W ∈ G_q^∞, V' = G_p'(V), W' ∈ G_q'(W). * d_g(V,W') = d_g(V,W) ⇔ p≤ q' and P_W(V)⊂ W', or q'<p≤ q and V⊥ W, or p>q. * d_g(V',W) = d_g(V,W) ⇔ p≤ q and V'^⊥∩ V ⊂ W, or p'=p>q. (<ref>) For p=0 or p>q it is trivial. If 0≠ p ≤ q', V and W have principal angles θ_1≤⋯≤θ_p, and V and W' haveθ_1'≤⋯≤θ_p', <Ref> <ref> gives ∑_i=1^p θ_i'^2 = ∑_i=1^p θ_i^2 ⇔θ_i' = θ_i∀ i ⇔ P_W(V) ⊂ W'. If q'<p≤ q then π/2√(p) = √(∑_i=1^p θ_i^2)⇔θ_i=π/2 ∀ i. 
<ref>, <Ref> (<ref>) For p'=0 or p>q it is trivial. If 0≠ p' ≤ p ≤ q, and V' and W have principal angles θ_1'≤⋯≤θ_p'', by <Ref> <ref> we have ∑_i=1^p'θ_i'^2 = ∑_i=1^p θ_i^2 ⇔ θ_i = 0 fori≤ p-p',θ'_i-p+p'fori>p-p' ⇔ V'^⊥∩ V ⊂ W. Let V ∈ G_p^∞, W ∈ G_q^∞ and W' ∈ G_min{p,q}(W). Then d_g(V,W') = d_g(V,W) ⇔ W' is a projection subspace of W V. Let V,W∈ G^∞, V'∈ G(V) and W'∈ G(W). d_FS(V,W) = d_FS(V,W_P) = d_FS(V⊕ W_⊥,W) = min{d_FS(V,U):U ∈ G(W)} = min{d_FS(Y,W):Y ∈ G^n, Y⊃ V}. * d_FS(V,W') = d_FS(V,W) ⇔ V W or P_W(V)⊂ W'. * d_FS(V',W) = d_FS(V,W) ⇔ V' W or V'^⊥∩ V ⊂ W. (<ref>) Let 0<p= V ≤ W', otherwise it is trivial V and W have principal angles θ_1≤⋯≤θ_p, and V and W' have θ_1'≤⋯≤θ_p'. By <Ref>, <ref> ∏_i=1^p cosθ'_i = ∏_i=1^p cosθ_i ⇔ P_W(V)⊂ W' or θ_p=π/2. (<ref>) Let 0< p'= V' < p ≤ W, otherwise it is trivial and V' and W have principal angles θ_1'≤⋯≤θ_p''. By <Ref>, <ref> ∏_i=1^p'cosθ'_i = ∏_i=1^p cosθ_i ⇔θ'_p' = π/2 or V'^⊥∩ V and W have principal angles θ_1=⋯=θ_p-p'=0. (V')^⊥∩ V = {e_1,…, e_p-p'} for princ basis (e_1,…,e_p) of V W § ASYMMETRIC GEOMETRY OF G^N± Let d be any asymmetric metric of <Ref>. They all give G^n the same τ^± and τ topologies, the balls may be different as <Ref> still hold [If p≤ q then d(V,W) = d(V,W') for a projection subspace W'∈ G_p(W). If p>q then d(V,W) = Δ_p and we just have to compare the values in <Ref>.] for V∈ G_p^n and W∈ G_q^n, except that some >'s become ='s if p>q. equalities if p>q We write G^n± when the topology is τ^±, and G^n when it is τ or does not matter.The relations ⊂, ⊃, = are reflected in τ^-, τ^+, τ: for small r>0,r<Δ_p, otherwise the balls have all subspaces of all dimensions B^-_r(V) has all subspaces almost contained in V, of smaller or equal dimension and forming small principal angles with V (Fig. <ref>); B^+_r(V) has those almost containing V; Unions of Schubert varieties of Ye2016. Closed balls (in sense of ≤ r, not topological) are B^-_0[V] = {U∈ G^n:U⊂ V} = G(V), B^+_0[V] = {U∈ G^n:U⊃ V} and B_0[V]={V} and B_r(V) has those almost equal to V. Note that τ^± are only T_0: if V⊂ W then d(V,W)=0, so any τ^- neighborhood of W has V, and any τ^+ one of V has W. All τ^- open sets have {0}, and all τ^+ ones have ^n. In τ^-, any neighborhood of V contains {U∈ G^n:U⊂ V}; = G(V) the closure of {V} is {U∈ G^n:U⊃ V}; {U∈ G^n:U⊂ V} in G^n+ each G_p^n has empty interior (except for G_0^n,G_n^n in G^n+ which is open),the smallest open set containing it is ⋃_q≤ p G_q^n, and its closure is ⋃_q≥ p G_q^n. ⋃_q≤ p G_q^n in G^n+ Similar results for τ^+ have inclusions and inequalities switched.The subspace topology of G_p^n (for τ^± or τ) is its usual one,as in it d is the original metric. The G_p^n's are intricately connected to each other in τ^±, while τ is the usual topology of their disjoint union, since d̂(V,W) = Δ_max{p,q}≥ 1 for V∈ G_p^n and W∈ G_q^n with p≠ q. G^n± are compact, Smyth bicomplete and contractible. is locally const in τ, discontin in τ^±. In τ^±, d(V,W) is discontin in V and W (if V ⊊ W, d(V,V) = d(W,W) = 0 and d(W,V)≠ 0, though V ∈ B^-_r(W) and W ∈ B^+_r(V)∀ r) τ^± are coarser than τ, which is compact. For small ϵ>0 we have d(V_k,V_l)<ϵ⇒ V_k ≤ V_l, so any left or right K-Cauchy sequence V_k eventually becomes a Cauchy sequence in some G_p^n, which is complete. A contraction of G^n- is given by h(V,t) = Vift ∈ [0,1/2],{0} ift ∈ (1/2,1], and in G^n+ we have the same with [0,1/2) and [1/2,1]. in τ^-/τ^+ strict ineq in smaller/larger subspace In particular, this means G^n± are path connected. 
Indeed, a path in G^n- from V∈ G_p^n to W∈ G_q^n, with p≤ q, isφ(t) = ϕ(t)ift ∈ [0,1/2), Wift ∈ [1/2,1], where ϕ:[0,1/2]→ G_p^n is a path from V to any W'∈ G_p(W). The reversed path φ(-t) links W to V. In G^n+, use [0,1/2] and (1/2,1]. α:G^n → G^n no funciona em G^∞ given by α(V)=V^⊥ is a bijective anti-isometry for asymmetric ∧ or max metrics. For V,W ≠{0} or ^n, otherwise trivial the nonzero principal angles of V^⊥ and W^⊥ equal those of V and W <cit.>. Also, V = p > q =W ⇔ W^⊥ = n-q > n-p =V^⊥, and for these metrics Δ_n-q = Δ_p. p≠ 0, q≠ n So α gives a self-homeomorphism of G^n, with G_p^n ≃ G_n-p^n as usual, and a homeomorphism G^n+≃ G^n-. The asymmetric G^n± split, in a sense, the usual symmetry of G^n (see Fig. <ref>, and compare with <Ref>). For asymmetric l^2 metrics and n≥ 2, α is not an anti-isometry, as Δ_n-q≠Δ_p but gives the same homeomorphisms.In what follows, ^n-1 is the projective space of ^n, 1 is the identity map (in ^n or G^n), α is as above, <ref> and M^±are as in <Ref>. (G^n) ≅(^n-1). Any f∈(G^n) preserves the partial order ⊂ of subspaces, so it preserves dimensions and restricts to an isometry of G_1^n = ^n-1. And any f ∈(G_1^n) extends uniquely to an f:G^n → G^n preserving ⊂, and as it preserves angles between lines, this extension preserves principal angles between subspaces, hence is an isometry of G^n. Recall that (^n-1) = PO(n) = O(n)/{±1}, ≅ SO(n) if n odd <https://en.wikipedia.org/wiki/Projective_orthogonal_group> while (^n-1) is generated by PU(n) = U(n)/U(1) ≅ PSU(n) ≅ SU(n)/_n and complex conjugation <cit.>. p.310, (9.3.5) para q=1: I(M) = (^n-1), I_0(M) = U(n)/U(1) (p.309), α = complex conj (p.307). H erro no caso 2q=n, pois se n=2 temos β∈α I_0. De fato, (^1) = (S^2) = O(2) e PU(2) = PSU(2) = SU(2)/_2 = SO(3), e nessa identificao αreflexo de S^2 por um plano e β = -1 (G^n) ≅(^n-1) ⋊{1,α} for asymmetric ∧ or max metrics. For l^2 ones, (G^n) ≅(^n-1) if n≥ 2, and (G^n)= {1,α} if n=1. In the ∧ or max cases, the composition of α and any anti-isometry is an isometry. In the l^2 case with n≥ 2, as an anti-isometry f would reverse ⊂, and so f({0}) = ^n and f(L) ∈ G_n-1^n for L ∈ G_1^n, we would haved(L,{0}) = Δ_1 ≠Δ_n = d(^n,f(L)), a contradiction. G^n±/(G^n) ≃ M^±. The orbits are the G_p^n's, and π:G^n-→ M^- given by π(V) =V is a quotient map, with π^-1(p) = G_p^n since both π^-1({0,1,…,p}) = ⋃_q≤ p G_q^n and π(B^-_r(V)) = {0,1,…,V} are open. The proof for G^n+ is similar. §.§ Geodesics for asymmetric l^2 or ∧ metrics Let d be an asymmetric l^2 or ∧ metric (remember that geodesics of G_p^n coincide for these), For ∧ ones, φ(t) is min geodesic of G^n- from V to W ⇔φ(-t)^⊥ is min geodesic of G^n+ from W^⊥ to V^⊥ and φ:I→ G^n± be a rectifiable curve. c∈ I is a critical point of φ ifif there is no δ>0 such that φ|_(c-δ,c+δ)∩ I is a curve in some G_p^n for any δ>0 there is t∈(c-δ,c+δ)∩ I such that φ(t) ≠φ(c). φ has finitely many critical points. Assuming otherwise, there are t_0 < t_1 < t_2 < ⋯ in I such that φ(t_i) ≠φ(t_i+1), and passing to a subsequence we can assume φ(t_2i) > φ(t_2i+1). as < ∞ But then ℓ_φ≥∑_i d(φ(t_2i),φ(t_2i+1)) = ∑_i Δ_φ(t_2i) = ∞, contradicting the rectifiability of φ. So any c∈ I has a δ>0 such that φ restricted [If c=inf I, extend the curve defining φ(t)=φ(c) for t<c.] to (c-δ,c) is a curve in some G_p^n, where lim_t→ c^-φ(t) is well defined. G_p^n Hausdorff complete, φ rectifiable Likewise, lim_t→ c^+φ(t) is well defined. in some other G_q^n Continuity implies lim_t→ c^±φ(t) ⊂φ(c) in G^n-, and lim_t→ c^±φ(t) ⊃φ(c) in G^n+. 
φ expands (contracts) from U to V at c ∈ I if U ⊊ V (U ⊋ V) and either only one alternative is possible, depending on G^± lim_t→ c^-φ(t) = U and φ(c) = V, or φ(c) = U and lim_t→ c^+φ(t) = V. Critical points are those with expansions or contractions(possibly both, as in φ:→ G^n- with φ(0) = ^n, φ(t)={0} for t≠ 0). If φ expands (contracts) from U to V at c, there is a partition I=I_1 ∪ I_2 with sup I_1 = inf I_2 = c such that ℓ_φ = ℓ_φ|_I_1 + ℓ_φ|_I_2 (ℓ_φ = ℓ_φ|_I_1 + ℓ_φ|_I_2 + Δℓ_φ,c, where Δℓ_φ,c = Δ_ U). If φ(c) = V, expansion in G^n- or contraction in G^n+ for I_1=I∩ (-∞,c) and I_2 = I ∩ [c,∞) we have ℓ_φ = ℓ_φ|_I_1 + ℓ_φ|_I_2 + lim_t→ c^- d(φ(t),V). As φ(t) converges to U in a G_p^n, this limit is d(U,V), so 0 for an expansion, Δ_p for a contraction. If φ(c) = U, use I_1 = I∩ (-∞,c], I_2 = I ∩ (c,∞) and lim_t→ c^+ d(U,φ(t)). The length discontinuity Δℓ_φ,c at contractions might be useful in some applications as a sort of penalty for information loss. As its value depends on d, curve lengths and geodesics no longer coincide for all asymmetric l^2 or ∧ metrics in G^n±. A curve continuous so expansions are closed at the correct side η:I→ G^n± is null ⇔ it is piecewise constant, changing at most by a finite number of expansions. So, there are subspaces V_1 ⊂⋯⊂ V_N and a partition I=⋃_i=1^N I_i into consecutive intervals such that η(t) = V_i for t∈ I_i. In G^-, left-closed intervals for i≠ 1. In G^+, right-closed for i≠ N (⇐) Immediate. (⇒) η can have no contraction, and in any interval with no expansion it is a null curve in some G_p^n, hence constant. A path φ:[a,b]→ G^n± from V∈ G_p^n to W ∈ G_q^n is: * type I if φ(t) = ϕ(t) ⊕η(t) for a minimal geodesic of G_p^n ϕ:[a,b] → G_p^n from V to a projection subspace W' of W V, <Ref> and a null path η:[a,b]→ G^n± from {0} to W'^⊥∩ W. * type II if it has a contraction at some c ∈ I, R⊂ S ∩ W with Δℓ_φ,c = Δ_p, S=V for l^2 metrics. If V=0 then S={0} can not contract and is null in [a,c) and (c,b]. Type I paths exist ⇔ p ≤ q (so W' ∈ G_p^n), and have length d_g(V,W) (the asymmetric geodesic distance). Type II paths exist ⇔ V≠{0}Δ_0=0 (so φ can contract from V to {0}, then expand to W), and have length Δ_p (this Δ_p is d). See Fig. <ref> for examples of such paths. For an asymmetric l^2 or ∧ metric d, a path in G^n± from V∈ G_p^n to W ∈ G_q^n is a minimal geodesic ⇔ it is type I and d_g(V,W) ≤Δ_p, or it is type II and Δ_p ≤ d_g(V,W). If d_g(V,W) = Δ_p both types (if exist) are geodesics. In the class of run-continuous geodesics, there is none from V to W if p>q, and if p≤ q all geodesics are type I. If V = {0}, the minimal geodesics are null paths from it to W (type I with ϕ={0}). If p > q, any path φ has at least one contraction, with ℓ_φ≥Δℓ_φ,c≥Δ_p, <ref>. NoΔℓ_φ,c = Δ_p pois pode ter expandido so the minimal geodesics are the type II paths, and Δ_p ≤π/2√(p) = d_g(V,W). If 0<p≤ q, any path φ with ℓ_φ≤Δ_p has at most one contraction, being type II if it has one. Suppose it has none. If p=q, it is type I with η={0}. If p<q, assume it has a single expansion (the general case follows via induction), from U' ∈ G_p^n to U ∈ G_q^n at c. By <Ref>, <ref> ℓ_φ≥ d_g(V,U') + d_g(U,W) ≥ d_g(V,U') + d_g(U',W) ≥ d_g(V,W). Provei Igeod, vou provar geodI If ℓ_φ = d_g(V,W), these are equalities, so φ|_[a,c) and φ|_(c,b] extend to minimal geodesics φ_1:[a,c]→ G_p^n from V to U', and φ_2:[c,b]→ G_q^n from U to W, <Ref> gives U” = U'^⊥∩ U ⊂ W, and d_g(V,W) = d_g(V,U') + d_g(U',W). 
By the characterization of geodesics in <Ref>, φ_2 = φ_2' ⊕ U” for a minimal geodesic φ_2':[c,b] → G_p^n from U' to W' = U”^⊥∩ W, which is a projection subspace of W U'. By Propositions <ref> and <ref>, <ref> d_g(V,W) = d_g(V,U') + d_g(U',W') ≥ d_g(V,W') is an equality, W' is also a projection subspace of W V, and φ_1 and φ_2' form a minimal geodesic ϕ:[a,b]→ G_p^n from V to W'. With η(t) = {0} for t ∈ [a,c), η(t) = U” = W'^⊥∩ W for t ∈ (c,b], and η(c) = {0} in G^n+ or U” in G^n-, we have that φ=ϕ⊕η is type I. G^n± are geodesic spaces for d, and the intrinsic asymmetric metric is D(V,W) = min{d_g(V,W),Δ_ V}. For 0<p≤ q, both types of path from V ∈ G_p^n to W ∈ G_q^n exist, and whichever is shorter is a minimal geodesic. If p>q, type II ones exist, and Δ_p ≤ d_g(V,W). If p=0 we have null paths. §.§ Between-points for d_g By <Ref>, d_g is intrinsic in G^n±, and so its minimal geodesics are segments. and ℓ_I≤ℓ_II ∀ V,W Thus any U in a minimal geodesic from V to W is a between-point. As we will show, the converse also holds. For distinct U∈ G_r^n, V∈ G_p^n and W∈ G_q^n, V ≠ U = W in <ref>; V=U ≤ W in <ref>; U betw V=W ⇒ V=U=W; but V=U > W would need new case r>q we have d_g(V,W) = d_g(V,U) + d_g(U,W) in the following cases and no other:<ref> and <ref> intersect if V⊂ U ⊂ W * r≤ q < p and U⊂ W; * r<p ≤ q, U⊂ W and V⊥ W; * p≤ r ≤ q and there are projection subspaces U',W' of U,W V such that U' is in a minimal geodesic of G_p^n from V to W' and U'^⊥∩ U ⊂ W'^⊥∩ W. If q<r, the equality gives d_g(V,U) + π/2√(r) = d_g(V,W) ≤π/2√(p), so either r<p and √(p) + √(r) = √(p), or r = p and V=U, both impossible. If r≤ q <p, it gives d_g(U,W)=0, so U⊂ W. If r<p≤ q, π/2√(p) + d_g(U,W) = d_g(V,W) ≤π/2√(p), so U ⊂ W and V⊥ W. If p≤ r≤ q, for any projection p-subspaces U' of U V, and W' of W U', <Ref> gives d_g(V,W) ≤ d_g(V,W') ≤ d_g(V,U') + d_g(U',W') = d_g(V,U) + d_g(U',W) ≤ d_g(V,U) + d_g(U,W) = d_g(V,W). These are equalities, so, by Propositions <ref> and <ref>, W' is also a projection subspace of W V, U' is in a minimal geodesic of G_p^n from V to W', and U'^⊥∩ U ⊂ W. Thus P_W(U') ⊥ U'^⊥∩ U, and as W' ≤ q - (r-p)=W -(U'^⊥∩ U) we can assume W' was chosen orthogonal to U'^⊥∩ U. The converses are immediate. For d_g, U is a between-point from V to W ⇔ U is in a minimal geodesic of G^n± from V to W. (⇐) Minimal geodesics of d_g are segments. (⇒) For distinct U,V,W, a type II geodesic is given in <ref> and <ref> above <ref> by contraction from V to V∩ U, then expansions to U and W. ℓ_I = d_g(V,W) = π/2√(p) = ℓ_II In <ref>, a type I is given by a geodesic of G_p^n from V to U', expansion to U, a geodesic of G_r^n from U to W' ⊕ (U'^⊥∩ U), then expansion to W. ℓ_I = d_g(V,W) ≤π/2√(p) = ℓ_II Still, G^n is not convex for d_g, as a minimal geodesic from V to a distinct W might have no other element. But this only happens when V is a hyperplane of W, or when V=^n and W={0}. § ASYMMETRIC ∧ METRICS Using Grassmann algebra (<Ref>), we show the asymmetric ∧ metrics (Fubini-Study d_FS, chordal-∧ d_c∧ and Binet-Cauchy d_BC) have nice geometric interpretations, useful properties, and are easy to compute. Let V,W∈ G^n, and A∈⋀^n be a blade with [A]=V. The asymmetric anglefrom V to W is Θ_V,W = cos^-1P_W A/A. Θ_{0},W=0 for any W, Θ_V,{0}=π/2 for V≠{0}.Formerly called Grassmann angle <cit.>, its links to products of Grassmann and Clifford algebras <cit.> give easy ways to compute it and nice properties <cit.>. 
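For concreteness, the definition can be implemented directly in the real case, since the norm of a blade is the square root of a Gram determinant and P_W acts on a blade factor by factor (see the appendix on Grassmann algebra). The following numerical sketch is an illustration only (not part of the original text); it assumes real scalars, nonzero dimensions and NumPy, and it reproduces the first worked example given later in this section.

import numpy as np

def asymmetric_angle(V, W):
    # Theta_{V,W} for real subspaces spanned by the columns of V (n x p) and W (n x q),
    # from the definition cos Theta = ||P_W A|| / ||A|| with A a blade of V, using
    # ||v_1 ^ ... ^ v_p||^2 = det(M^T M) for the matrix M of spanning vectors.
    Qw, _ = np.linalg.qr(W)
    Pw = Qw @ Qw.T                                    # orthogonal projection onto W
    gram = lambda M: np.linalg.det(M.T @ M)
    cosT = np.sqrt(max(gram(Pw @ V), 0.0) / gram(V))  # vanishes automatically when p > q
    return np.arccos(np.clip(cosT, 0.0, 1.0))

# Worked example from this section: principal angles of 30 and 45 degrees.
f = np.eye(5)
V = np.column_stack([(np.sqrt(3) * f[:, 0] + f[:, 3]) / 2,
                     (f[:, 1] + f[:, 4]) / np.sqrt(2)])
W = f[:, :3]
print(np.degrees(asymmetric_angle(V, W)))   # ~52.2 = arccos(cos 30 * cos 45)
print(np.degrees(asymmetric_angle(W, V)))   # 90.0, since dim W > dim V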
For the contraction of nonzero blades A,B ∈⋀^n, cosΘ_[A],[B] = A B/AB(= |A,B|/AB if A,B ∈⋀^p ^n).We can interpret it via projection factors <cit.>. Let V_ (=V if =) be the underlying real space of V, and _k be the k-dimensional volume. Let V,W ∈ G^n, k= V_, and S⊂ V be a set with _k(S) ≠ 0. The projection factor of V on W is π_V,W=_k(P_W(S))/_k(S).As A and P_W A (squared, if =) are volumes of parallelotopes, cosΘ_V,W=π_V,W if =,√(π_V,W) if =.In other words, cosΘ_V,W (squared, if =) measures the contraction of volumes orthogonally projected from V to W (Fig. <ref>). The asymmetric Fubini-Study metric (<Ref>) is just this angle: For V∈ G_p^n and W∈ G_q^n, d_FS(V,W) = Θ_V,W = 0ifp=0,cos^-1(∏_i=1^p cosθ_i)if0<p≤ q,π/2 ifp>q, where the θ_i's are the principal angles. d_c∧(V,W) = 2sinΘ_V,W/2 If V and W have principal bases (e_1,…,e_p) and (f_1,…,f_q), for A=e_1∧⋯∧ e_p we have P_W A = 0 if p>q, otherwise P_W e_i = f_i cosθ_i∀ i and so P_W A = cosθ_1⋯cosθ_pf_1∧⋯∧ f_p. If V≠{0} and W={0} then P_WA=0. If V={0} = [1] we have P_W 1 = 1.As Θ_V,W=π/2⇔ VW (θ_p=π/2 or p>q), As all ∧ and max ones; l^2 ones attain maximum Δ_p (fixed p) if V⊥ W or p>q; In G^n, need p+q ≤ n for V ⊥ W the angle carries some dimensional data (Θ_V,W≠π/2⇒ P_W(V) =V ≤ W).The asymmetry Θ_V,W≠Θ_W,V for p≠ q reflects the dimensional asymmetry of subspaces, and is crucial for the oriented triangle inequality (Fig. <ref>) and even for the trivial Θ_V,W=Θ_V,P_W(V): <ref> usual angles between the smaller subspace and its projection on the larger one <cit.> give π/2 for perpendicular planes V,W⊂^3, but 0 for V and P_W(V). P_W(V)⊂ V Extend an orthogonal basis (w_1,…,w_q) of W∈ G_q^n to another β = (w_1,…,w_n) of ^n. For V∈ G_p^n, If V={0} or W=^n there is no ∈_p^n with [w_]⊄W, and the sum is 0. If V≠{0} and W={0} then [w_]⊄W∀∈_p^n, and the sum is 1 d_BC(V,W) = sinΘ_V,W = (1 - π_V,W^2)^1/2 = (∑π_V,U^2)^1/2 if =,(1 - π_V,W)^1/2 = (∑π_V,U)^1/2 if =, where the sums run over all coordinate p-subspaces [A coordinate p-subspace is a subspace spanned by p elements of a basis.] U of β with U⊄W. Assume β is orthonormal. For a unit A∈⋀^p V, <Ref> and d_BC in <Ref> give <ref> d_BC^2(V,W) =1 - cos^2 Θ_V,W = A^2 - P_W A^2 = ∑_1≤ i_1 < ⋯<i_p≤ n |w_i_1∧⋯∧ w_i_p,A|^2 - ∑_1≤ i_1 < ⋯<i_p≤ q |w_i_1∧⋯∧ w_i_p,A|^2 = ∑_1≤ i_1 < ⋯<i_p≤ n, i_j>qfor somejP_[w_i_1∧⋯∧ w_i_p] A^2. So d_BC^2(V,W) is the sum of volumes (squared if =) of projections of a unit volume of V on all coordinate p-subspaces not contained in W (Fig. <ref>). If p > q, no p-subspace is contained in W, and d_BC(V,W) = 1 corresponds to volumetric Pythagorean theorems <cit.>.We also have d_c∧(V,W) = 2 sinΘ_V,W/2= √(2 - 2π_V,W) if =,√(2 - 2 √(π_V,W)) if =. cos^2Θ_V,W=(𝐏^†𝐏), where 𝐏 is a matrix for the orthogonal projection V→ W in orthonormal bases of V and W, and ^† is the conjugate transpose. If V= W, cosΘ_V,W=|𝐏|. Follows from <Ref>, as in principal bases 𝐏 is a q× p diagonal matrix with the cosθ_i's.If p>q the diagonal of 𝐏̅^T 𝐏 has 0's Given any bases (v_1,…,v_p) of V and (w_1,…,w_q) of W, let 𝐀=(v_i,v_j), 𝐁=(w_i,w_j) and 𝐂=(w_i,v_j). Then cos^2 Θ_V,W = (𝐂^†𝐁^-1𝐂)/𝐀. When p=q, cosΘ_V,W = |𝐂 |/√(𝐀·𝐁). If p>q then Θ_V,W = π/2, and (𝐂^†𝐁^-1𝐂) = 0 as it is a p× p matrix with at most rank q. 
If p≤ q, Laplace expansion citeMuir2003, p.80 columns q+1,…,q+p of the (q+p)×(q+p) block matrix 𝐌=[ 𝐁 𝐂; 𝐂^† 0 ] gives Como J d as ltimas colunas, das q+p linhas de 𝐌 s as q primeiras no do 0, por isso pode usar _p^q ao invs de _p^q+p 𝐌 = ∑_ (-1)^pq+p(p+1)/2 + i_1+⋯+i_p·𝐂_·𝐍_', where the sum runs over all = (i_1,…,i_p) with 1≤ i_1 < ⋯<i_p≤ q, 𝐂_ is the p× p submatrix of 𝐌 formed by lines of 𝐂 with indices in , and 𝐍_'=[ 𝐁_'; 𝐂^† ] is its q× q complementary submatrix, formed by lines of 𝐁 with indices not inand all of 𝐂^†. For A=v_1∧⋯∧ v_p and B=w_1∧⋯∧ w_q, <Ref> gives A B^2 = A∧(A B),B = ∑_ϵ_' B_,A A∧ B_',B. As 𝐂_=B_,A and 𝐍_' = B_'∧ A,B = (-1)^pq+pA∧ B_',B, (-1)^-p^2=(-1)^p ϵ_' = (-1)^+p(p+1)/2 we obtain A B^2 = (-1)^p 𝐌, so that cos^2 Θ_V,W = (-1)^p 𝐌/𝐀·𝐁, by (<ref>). The result follows from Schur's determinant identity. citeBrualdi1983 (-𝐂𝐀^-1𝐂^†) = (-1)^p (𝐂𝐀^-1𝐂^†) Let (f_1,…,f_5) be the canonical basis of ^5, e_1=√(3)f_1+f_4/2, e_2=f_2+f_5/√(2), V={e_1,e_2} and W={f_1,f_2,f_3}. Principal angles are 30^∘ and 45^∘, so Θ_V,W= cos^-1(√(3)/2·√(2)/2) ≅ 52^∘ and areas in V shrink by √(6)/4 if orthogonally projected on W. Volumes in W vanish if projected on V, so Θ_W,V=90^∘. Also, π_V,[f_1 ∧ f_5] = √(6)/4, π_V,[f_2 ∧ f_4] = π_V,[f_4 ∧ f_5] = √(2)/4 and π_V,[f_1 ∧ f_4] = π_V,[f_2 ∧ f_5] = π_V,[f_3 ∧ f_4] = π_V,[f_3 ∧ f_5] = 0, so d_BC(V,W) = sinΘ_V,W = √(5/8) = √(∑_1≤ i < j ≤ 5, {i,j}⊄{1,2,3}π_V,[f_i ∧ f_j]^2). If (u_1,…,u_5) is the canonical basis of ^5, v_1=2u_1-u_2, v_2 = 2u_1+u_3, w_1=u_2+u_5, w_2=u_3-u_4, w_3=u_4, V={v_1,v_2} and W={w_1,w_2,w_3}, then 𝐀=[ 5 4; 4 5 ], 𝐁=[200;02 -1;0 -11 ], 𝐂=[ -10;01;00 ] and (<ref>) gives Θ_V,W= cos^-11/3√(2)≅ 76.4^∘. Switching V and W, we have 𝐀=[200;02 -1;0 -11 ], 𝐁=[ 5 4; 4 5 ], 𝐂=[ -100;010 ], and so Θ_W,V= 90^∘, as expected. Calculations with Grassmann algebra are easier: writing u_13 = u_1 ∧ u_3 for example, we have V=[A] for A = v_12 = 2u_12+2u_13-u_23 and W=[B] for B= w_123 = u_234+u_345, so A = 3, B = √(2), A B = -u_4 and BA = 0, and (<ref>) gives the same results. In ^3, let v=(1,0,), w_1=(1,0,0), w_2=(,1,0), V={v} and W={w_1,w_2}. Then 𝐀=(2), 𝐁=[ 1; - 2 ], 𝐂=[ 1; - ], and we obtainΘ_V,W= 45^∘. Identifying ^3 with ^6, we have V_=_{v, v} and W_=_{w_1, w_1,w_2, w_2}, with v =(1,0,0,0,0,1), w_1 =(1,0,0,0,0,0), w_2 =(0,1,1,0,0,0), v =(0,1,0,0,-1,0), w_1 =(0,1,0,0,0,0), w_2 =(-1,0,0,1,0,0), so now 𝐀=[ 2 0; 0 2 ], 𝐁=[100 -1;0110;0120; -1002 ], 𝐂=[10;01;01; -10 ] and Θ_V_,W_= 60^∘. Note that Θ_V_,W_≠Θ_V,W as they convey in different ways that areas in V contract by π_V_,W_ = π_V,W = 1/2 if orthogonally projected on W. §.§ Between-points for d_FS Here we show, for d_FS, that between-points are in minimal geodesics, and determine when these are segments. d_FS(V,W) = d_FS(V,U) + d_FS(U,W) for U,V,W∈ G^n in the following cases and no other: (<ref>), (<ref>) intersect if V⊂ U ⊂ W * U⊂ W, with V W or P_W(V)⊂ U; * V⊂ U, with V W or V^⊥∩ U⊂ W; * V = [v]⊕ R, U = [u]⊕ S and W = [w]⊕ T for subspaces R⊂ S⊂ T and aligned [We say u and v are aligned if u,v≥ 0, so that θ_u,v = θ_u,P_[v] u.] nonzero u,v,w∈ T^⊥ with u=κ v+λ w for κ, λ >0. Com ≥ teria interseo com (<ref>) and (<ref>), sem englobar nenhuma parte delas <ref> By <Ref>, case <ref> corresponds to Θ_U,W=0 and Θ_V,W = Θ_V,U, and <ref> to Θ_V,U=0 and Θ_V,W = Θ_U,W. In <ref>, <Ref> gives Θ_V,W = θ_v,w = θ_v,u + θ_u,w = Θ_V,U + Θ_U,W. If Θ_V,W = Θ_V,U + Θ_U,W with Θ_V,U,Θ_U,W≠ 0 then Θ_V,U,Θ_U,W≠π/2, and so V, U', W' ∈ G_p^n for some p, where U' = P_U(V) and W' = P_W (U'). 
Θ_U',W≠π/2 By <Ref>, Θ_V,W≤Θ_V,W'≤Θ_V,U' + Θ_U',W' = Θ_V,U + Θ_U',W≤Θ_V,U + Θ_U,W. These are equalities, so Θ_U',W' = Θ_U,W≠ 0, Θ_V,W'≠ 0, and V, U', W' are distinct. <Ref> gives R∈ G_p-1^n and aligned u,v,w ∈ R^⊥ with u=κ v +λ w for κ,λ>0, U' = [u]⊕ R, V = [v]⊕ R and W' = [w]⊕ R. For S' = U'^⊥∩ U we have u,v ⊥ S', w = u - κ v/λ⊥ S', and S' ⊂ W by <Ref>, <ref> as Θ_U',W = Θ_U,W≠π/2. For T' = ([w]⊕ R ⊕ S')^⊥∩ W, as P_W u ∈ W' = [w]⊕ R we have u,w ⊥ T' and v = u-λ w/κ⊥ T'. So <ref> is satisfied with S = R⊕ S' and T=R ⊕ S' ⊕ T'. For d_FS, if U is a between-point from V to W then U is in a minimal geodesic of G^n± from V to W.<ref> If V W with U⊂ W or V⊂ U, one can find a type II geodesic from V to W through U. V-0-U-W or V-U-0-W If V W then P_W(V) is a projection subspace of W V, so if P_W(V) ⊂ U ⊂ W, I: V - P_W V - U - W II: V - 0 - U - W or if V ⊂ U and V^⊥∩ U ⊂ W, I: V - U - (P_W V) ⊕ (V^⊥∩ U) - W II: V - 0 - U - W one can find both types of path (unless V = {0}), and the shorter one is a geodesic. In case <ref> above one can find a type I geodesic. also II if v ⊥ w The converse does not hold. as one can easily check in G_2^4 In fact, G^n is not convex for d_FS, and between-points exist only in the following cases: For d_FS, there is a distinct between-point from V ∈ G_p^n to W ∈ G_q^n ⇔ VW with V ≠^n or W ≠{0}, or W V with V⊄W, or p ≤ q-2, or p = q = (V∩ W) +1. <ref> (⇐) Each case holds with: any U W or UV; U=P_W(V) (≠ W as WV); P_W(V) ⊊ U ⊊ W; U as in <Ref>. (⇒) If V W, case <ref> above gives P_W(V) ⊂ UW, so W V, and p ≤ q-2 if V ⊂ W; <ref> gives VU and V^⊥∩ U⊂ W, which imply the same; and <ref> also implies the same if R≠ T, otherwise gives the last possibility. Minimal geodesics are segments in even less cases: If (V∩ W) + 1 < p < q and V W there is a distinct between-point U from V to W, but not in a segment. For d_FS, G^n± has a segment from V ∈ G_p^n to W ∈ G_q^n ⇔ V W or p ≤(V∩ W) + 1. If (V∩ W) + 1 = p ≤ q, φ = ϕ⊕η for a min geod ϕ between lines (V∩ W)^⊥∩ V and (V∩ W)^⊥∩ W_P, and null geod η from V ∩ W to W. If (V∩ W) = p, φ is null geod By <Ref>, a minimal geodesic φ from V to W has ℓ_φ = min{d_g(V,W),π/2}≥ d_FS(V,W). We have d_FS(V,W) = π/2⇔ VW. If V W then p ≤ q and, for a projection subspace W' of W V, <ref> d_FS(V,W) = d_g(V,W) ⇔ d_FS(V,W') = d_g(V,W') ⇔(V∩ W) = (V∩ W') ≥ p -1, by <Ref>.If V = ^n and W = {0}, the segment is type II with no other points. If VW with p=q > (V∩ W) + 1, by <Ref> there is no segment in G_p^n, yet there is a type II segment in G^n±.§ FINAL REMARKS The definition of d(V,W) in <Ref> uses the infimum taken in [0,Δ_p], which is natural as it is the smallest interval with all possible values of d_p(V,U). But it can be tweaked for certain purposes: with [0,Δ_n], <Ref> extends for asymmetric l^2 metrics; with [0,∞] we have d(V,W) = ∞ if V >W, which may be useful in applications where dimensions can increase but not decrease(one might also use run-continuous paths, whose length varies continuously <cit.>).Symmetrizing the asymmetric metrics of <Ref> via different norms yields new metrics, but information is lost. l^2 norm turns d_pF into d̃_pF(V,W) = √(2∑_i=1^p sin^2 θ_i) if p = q, otherwise √(max{p,q} + ∑_i=1^min{p,q}sin^2 θ_i) Max-symmetrized ones are trivial for distinct dimensions, but can still be useful:e.g., Θ̂_V,W = max{Θ_V,W,Θ_W,V} gives|A,B| = ABcosΘ̂_[A],[B] for any blades <cit.>. 
A min-symmetrization ď(V,W) = min{d(V,W),d(W,V)} gives the distance from the smaller subspace to its projection on the larger one, which seems natural and is used in <cit.> and for angles <cit.>. But it fails the triangle inequality,and balls B̌_r(V) = {U∈ G^n: ď(U,V) < r}, with subspaces of all dimensions close to V, do not give a topology. intersection of such balls for 2 lines might have no line, so it contains no such ball Under some conditions, related to concentration of measure Ledoux2001 high dimensional asymmetric metric spaces with measure are nearly symmetric <cit.>, most pairs (x,y) have small asymmetry |d(x,y)-d(y,x)|. This seems to be the case with (G^n,d_FS) for large n. Each G_p^n has a natural measure given by the action of the unitary group, O(n) has Haar measure, so given V_0 and S⊂ G_p^n, μ(S) = μ{T∈ O(n): T(V_0) ∈ S} and taking their dimensions p(n-p) as relative weights we obtain a Borel measure in G^n. for its finer topology τ, hence also for τ^±Most pairs of subspaces should have a considerable number of non-negligible principal angles,and so d_FS(V,W)≅π/2≅ d_FS(W,V), as d_FS→π/2 likewise for the other asymmetric ∧ metrics quickly if several θ_i's are large, or a large number of them are not too small. d_BC, d_c∧ have same problemThis may render d_FS unsuitable for some uses, but is important for quantum decoherence <cit.>, allowing quantum states of many particles to quickly become nearly orthogonal. As decoherence is deemed responsible for the quantum-classical transition as the dimension of the quantum state space increases,this begs the question of whether this transition might be linked to a loss of asymmetry. As far as we know, there have been no attempts to relate quantum theory to asymmetric metric spaces. But it would not be so surprising to find a link, since fermionic Fock space is a Grassmann algebra, the Fubini-Study distance arises naturally in the theory, and Θ_V,W has led to complex Pythagorean theorems <cit.> with unexpected links to quantum probabilities <cit.>.§ GRASSMANN EXTERIOR ALGEBRAGrassmann algebra <cit.> is a natural formalism to use with subspaces. For X∈ G_r^n, it is a graded algebra ⋀ X = ⊕_p∈⋀^p X with ⋀^0 X =, ⋀^1 X = X and ⋀^p X = {0} if p ∉[0,r].For A∈⋀^p X and B∈⋀^q X, its bilinear associative exterior product ∧satisfies A∧ B = (-1)^pq B∧ A ∈⋀^p+q X. For u,v∈ X, u ∧ v = -v∧ u, and so v∧ v=0. For p>0, elements of ⋀^p Xare linear combinations of p-blades B=v_1∧⋯∧ v_p for v_1,…,v_p∈ X.If B≠ 0, [B] = {v∈ X: v∧ B=0} = {v_1,…,v_p} is a p-subspace. Any λ∈ is a 0-blade with [λ] = {0}. Only represents {0}, and has V=(λ), ⋀^0 V={λ}, if λ≠ 0For nonzero blades A,B∈⋀ X, if [A] and [B] are disjoint then [A∧ B] = [A]⊕ [B], otherwise A∧ B = 0. In the following known result, A and B need not be blades. Let A∈⋀ V and B∈⋀ W for disjoint V,W ∈ G^n. Then A∧ B = 0 ⇔ A=0 or B=0. (v_i),(w_j) bases V,W ⇒ (v_i,w_j) basis V⊕ W. (v_)_∈∪_i^p basis ⋀ V, (w_)_∈∪_j^q basis ⋀ W, (v_∧ w_) basis ⋀(V⊕ W). A=∑ a_ v_, B=∑ b_ w_,A∧ B=∑ a_ b_ v_∧ w_. A∧ B=0⇒ a_ b_=0∀, Some a_≠ 0 ⇒ b_=0 ∀, e vice versa. The inner product A,B = (v_i,w_j) of A=v_1∧⋯∧ v_p and B=w_1∧⋯∧ w_p and κ,λ=κ̅λ forκ,λ∈. is extended linearly (sesquilinearly, if =), with distinct ⋀^p X's beingorthogonal. If =, A=√(A,A) is the p-dimensional volume of the parallelotope spanned by v_1,…,v_p. If =, A^2 is the 2p-dimensional volume of that spanned by v_1, v_1,…,v_p,v_p. If [A] ⊥ [B], A∧ B = AB. For u,v ∈ X, u∧ v = uvsinγ_u,v. 
If V ∈ G_p(X) has orthonormal basis {v_1,…,v_p}, ⋀^k V has {v_i_1∧⋯∧ v_i_k:1≤ i_1 < ⋯<i_k≤ p}, and ⋀^p V = {v_1 ∧⋯∧ v_p} is a line in ⋀ X. The orthogonal projection P_V:X→ V extends to another P_V:⋀ X→⋀ V, with P_V(A∧ B) = P_V A ∧ P_V B for A,B∈⋀ X. and P_V 1 = 1 The contractionA B ∈⋀^q-p X of A∈⋀^p X and B∈⋀^q X is defined byC,AB = A∧ C,B for all C∈⋀^q-p X.It is asymmetric, with A B = 0 if p>q, and A B = A,B if p=q. AB = ∑_ϵ_' A,B_B_' for A∈⋀^p X and B= w_1∧⋯∧ w_q ∈⋀^q X with 0 < p < q, where the sum is over all = (i_1,…,i_p) with 1≤ i_1 < ⋯<i_p≤ q, B_ = w_i_1∧⋯∧ w_i_p, likewise for B_' with ' = (1,…,q) - (indices not in ), and [ϵ_' is the sign of the permutation that orders the concatenation ', so B = ϵ_'B_∧ B_'.] ϵ_' = (-1)^p(p+1)/2 + i_1 + ⋯ + i_p.§ METRICS IN GRASSMANNIANS G_P^NThe main metrics used in G_p^n <cit.> derive from distances between lines. P_{w} v= λ w for λ≥ 0 If K={u} and L={v} {u} ao invs de [u] para no confundir quandousar com lines spanned by blades for aligned [See footnote <ref>.] unit u and v, their angular, chordal and gap distances are θ_K,L = θ_u,v,c_K,L = u-v = 2sinθ_u,v/2 andg_K,L = u-P_Lu = sinθ_u,v, respectively. θ more fundamental <cit.> as c, g are concave functions of it; triangle ineq attains equal for c, g only trivially (if 2 lines coincide), (X) is convex for θ. Geodesics in (^n) are quotients of great circles of S^n-1 by antipodals; in (^n) great circles in the (^2)≅ S^2 given by 2 distinct complex lines. The metrics are organized in <Ref> and described below for V,W∈ G_p^n with principal angles θ_1≤⋯≤θ_p and principal bases (e_1,…,e_p) and (f_1,…,f_p), whose vectors are used as columns of n× p matrices 𝐄 and 𝐅. The l^2 metrics use the l^2 norm of the vector formed by the distances of lines K_i = {e_i} and L_i={f_i} for 1≤ i≤ p: * Geodesic <cit.>: Dhillon2008, Zhang2018, Zuccon2009, Lerman2011; Deza2016, Ye2016 chamam de Grassmann d_g = √(∑_i=1^p θ_i^2), geodesic distance of the unique[Up to scaling, and with an exception for G_2(^4) <cit.>.] Nonunique only in real G_2(^4); d_g still optimal (<cit.> but uses oriented case) Riemannian metric invariant by unitary maps, given by the Hilbert-Schmidt product S,T = (S^*T) Bendokat2020 com fator 1/2 in the tangent space (V,V^⊥) at V. * Chordal Frobenius or Procrustes <cit.>: ρ_b in Stewart1990 p.95,99. Paige1984, Chikuse2012, Turaga2008 embedding G_p^n in the set of n× p matrices with Frobenius norm ·_F, decomp in orthon basis d_cF= 𝐄-𝐅_F = √(∑_i=1^p e_i - f_i^2) = 2√(∑_i=1^p sin^2 θ_i/2). = √(2p-2∑_i=1^p cosθ_i) * Projection Frobenius or chordal <cit.>: chordal due to another embedding in a sphere embedding G_p^n in the set of projection matrices with ·_F, we have Dhillon2008; Deza2016: Frobenius distance is P_V-P_W_F d_pF = 1/√(2)P_V-P_W_F = √(∑_i=1^p e_i - P_W e_i^2) = √(∑_i=1^p sin^2 θ_i). Tem 1/√(2) porque repete o e_i - P_W e_i = P_V f_i - f_i The ∧ metrics use distances in ⋀^n of lines K_∧ = ⋀^p V = {A} and L_∧ = ⋀^p W = {B}, for A=e_1∧⋯∧ e_p and B=f_1∧⋯∧ f_p: * Fubini-Study <cit.>: Dhillon2008 ρ_θ in Stewart1990 p.96,99. Parece surgir em Lu1963 d_FS =θ_A,B = cos^-1(∏_i=1^p cosθ_i), geodesic distance through (⋀^p ^n)(see <Ref>). * Chordal ∧: d_c∧ = A-B = 2sinθ_A,B/2 = √(2-2∏_i=1^p cosθ_i). * Binet-Cauchy <cit.>: given by d_BC = A-P_W A = sinθ_A,B = √(1-∏_i=1^p cos^2θ_i). = Ψ_A - P_Ψ_BΨ_A, Wolf2003 Grassmann vectorΨ_A formed bycoord in orthonormal basis ρ_s in Stewart1990 p.96,99. 
Parece surgir em Lu1963 The max metrics use maximum distances of K_i and L_i (so, for i=p), being inadequate for applications where many small differences between subspaces can be more relevant than a single large one: * Asimov <cit.>: d_A = θ_p is the geodesic distance for a Finsler metric dada por norma no espao tangente, simtrica ou no given by the operator norm · in (V,V^⊥). Weinstein2000 Appendix largest angular dist S(V) to S(W); cos is largest semi-axis of P_W(S(V)) * Chordal 2-norm <cit.>: Deza2016,Ye2016 chamam de spectral d_c2 = 𝐄-𝐅 = e_p-f_p = 2sinθ_p/2. = max_v∈ S(V)min_w ∈ S(W)v-w; largest dist S(V) to S(W) (Hausdorff dist.); `2-norm' is norm in ^n * Projection 2-norm or gap <cit.>: Barg2002,Ye2016 d_p2 = P_V-P_W = e_p - P_W e_p = sinθ_p = max_v∈ V, v=1v-P_W v. largest dist. from S(V) to W; in normed spaces gap is not metric Kato1995 min-correlation in Hamm2008; containment gap or projection distance in Deza2016 All these metrics are topologically equivalent, as the following inequalities show (Fig. <ref>). Note that distances decrease as one moves right (if V ≠ W) or down (if (V∩ W) < p-1) in <Ref>. For distinct V,W∈ G_p^n: * π/2 d_pF≥ d_g > d_cF > d_pF. * π/2 d_BC≥ d_FS > d_c∧ > d_BC. * π/2 d_p2≥ d_A > d_c2 > d_p2. Follows from the formulas in <Ref>, as for distinct lines K and L we have π/2 g_K,L≥θ_K,L > c_K,L > g_K,L. Let V,W∈ G_p^n. The following inequalities hold if (V∩ W) < p-1, and otherwise [<cit.> incorrectly (let θ_1=⋯=θ_p-1=0, θ_p≠ 0) gives d_g>d_FS, etc. for V≠ W.] the strict >'s become ='s. no lugar de √(p) pode ser √(p-r), com r= V∩ W * √(p)d_A≥ d_g > d_FS > d_A. * √(p)d_c2≥ d_cF > d_c∧ > d_c2. * √(p)d_p2≥ d_pF > d_BC > d_p2. If (V∩ W)≥ p-1 then θ_i=0 for i ≠ p, and the distance formulas give the equalities. For (V∩ W) < p-1 we prove only the second inequality in each item, as the others are simple. (<ref>) We show cos^-1(cosθ_1 ⋯cosθ_p) ≤√(θ_1^2 + ⋯ + θ_p^2) for θ_i ∈ [0,π/2], with < if θ_p-1,θ_p≠ 0. For p=2, f(x,y)=cos^-1(cos x cos y), g(x,y) = √(x^2+y^2) and x,y∈ (0,π/2), it follows as ∂ g/∂ x = x/√(x^2+y^2) is increasing in x ∂^2 g/∂ x^2>0 and so ∂ g/∂ x > sin x/√(sin^2 x + tan^2 y) = ∂ f/∂ x. f(0,y) = g(0,y) y=π/2 direto nas frmulas Assuming it for some p≥ 2, let x = cos^-1(cosθ_1 ⋯cosθ_p) ≤√(θ_1^2 + ⋯ + θ_p^2), so cos^-1(cosθ_1 ⋯cosθ_p+1) = cos^-1(cos x cosθ_p+1) ≤√(x^2 + θ_p+1^2)≤√(θ_1^2 + ⋯ + θ_p+1^2), x ≤√(θ_1^2 + ⋯ + θ_p^2) as only θ_p≠ 0 with the first inequality being < if θ_p,θ_p+1≠ 0. so x≠ 0 (<ref>) d_c∧ = √(2-2∏_i=1^p cosθ_i) and d_cF = √(2p-2∑_i=1^p cosθ_i), so we show 1-∏_i=1^p x_i ≤ p-∑_i=1^p x_i for x_i∈ [0,1], with < if x_p-1,x_p ≠ 1. x_i=cosθ_i For p=2 we have 1-x_1x_2 ≤ 1 - x_1x_2 + (1-x_1)(1-x_2) = 2-x_1-x_2, with < if x_1,x_2 ≠ 1. Assuming it for some p≥ 2, let x = ∏_i=1^p x_i ≥ 1-p + ∑_i=1^p x_i. Then 1-∏_i=1^p+1 x_i = 1 - x x_p+1≤ 2 - x - x_p+1≤ p+1-∑_i=1^p+1 x_i, and the first inequality is < if x_p,x_p+1≠ 1. (<ref>) As in <ref>, with d_pF = √(p-∑_i=1^p cos^2θ_i) and x_i=cos^2θ_i.Lastly, we note that the following are not metrics in G_p^n: * The max-correlation or spectral distance <cit.> d = sinθ_1 smallest dist from S(V) to W does not satisfy a triangle inequality, and d = 0 ⇔ V∩ W ≠{0}. * The Martin metric for ARMA Auto-Regressive Moving Average models <cit.> is given, for certain subspaces associated to the models <cit.>,by d = √(-log∏_i=1^pcos^2θ_i). 
spanned by vector of the form (1,α,α^2,…) with |α|<1 It is presented in <cit.> Deza2016 as a metric for general subspaces, but does not satisfy a triangle inequality (take lines in ^2). And d_M=∞ when V W.§.§ Fubini-Study metrics in (^n) and G_p^n TheFubini-Study metric originates in the projective space (^n) = {lines of ^n}, where it is just the angular distance, d_FS(K,L) = θ_K,L. real: Reid2005 p.38 complex: Goldman1999 p.16, em termos de Fubini We prove its triangle inequality to obtain equality conditions. θ_K,L≤θ_K,J + θ_J,L for J,K,L ∈(^n), with equality if, and only if, J={u}, K={v}, L={w} for aligned u,v,w∈^n with u=κ v + λ w for κ,λ≥ 0. Assume distinct lines and θ_K,J≠π/2. For a unit v∈ K, let u = P_J v/P_J v. If θ_K,L≠π/2 let w = P_L v/P_L v, otherwise take a unit w∈ L. Then θ_K,J=θ_u,v and θ_K,L=θ_v,w. v,w≥ 0 and u,v > 0 Let u^⊥ = u-P_K u/u-P_K u and w^⊥ = w-P_K w/w-P_K w. ∈ K^⊥ As P_K w=v cosθ_v,w and w-P_K w=sinθ_v,w, w-P_K w,w-P_K w = 1+P_K w^2-2Rew,P_K w =1-P_K w^2 = 1-cos^2θ we find w = vcosθ_v,w + w^⊥sinθ_v,w. Likewise, u = vcosθ_u,v + u^⊥sinθ_u,v. So cosθ_J,L = |u,w| = |cosθ_u,vcosθ_v,w + u^⊥,w^⊥sinθ_u,vsinθ_v,w| ≤cos(θ_v,w-θ_u,v), and thus θ_J,L≥θ_v,w-θ_u,v. θ_J,L∈ [0,π/2] θ_v,w-θ_u,v∈ [-π/2,π/2] Equality holds when θ_v,w > θ_u,v and u^⊥,w^⊥ = 1, so u^⊥ = w^⊥ and u = v cosθ_u,v + w-P_K w/w-P_K wsinθ_u,v = v sin(θ_v,w-θ_u,v)/sinθ_v,w +w sinθ_u,v/sinθ_v,w = κ v+λ w with κ,λ > 0. Conversely, if u=κ v + λ w with κ,λ≥ 0 and v,w≥ 0 then θ_v,w = θ_u,v + θ_u,w D trabalho provar a partir da definio, maspadro and, as u,v = κv^2 + λw,v≥ 0 and likewise u,w≥ 0, we have θ_K,L=θ_v,w, θ_K,J=θ_u,v and θ_J,L=θ_u,w.The embedding G^n↪(⋀^n) in the projective space of the Grassmann algebra maps V ∈ G_p^n to its line ⋀^p V, and G_p^n inherits the Fubini-Study metric d_FS(V,W) = θ_⋀^p V, ⋀^p W = θ_e_1∧⋯∧ e_p,f_1∧⋯∧ f_p= cos^-1 (∏_i=1^p cosθ_i), where the e_i's, f_i's and θ_i's are principal vectors and angles. As the ⋀^p ^n's are orthogonal, the G_p^n's are separated by a distance of π/2, and G^n has its usual disjoint union topology. Let A,B,C ∈⋀^p ^n be nonzero blades with [A], [B], [C] all distinct. If A=κ B+λ C for κ,λ∈, there are u,v,w∈^n and a unit blade D∈⋀^p-1^n with u=κ v+λ w, A=u∧ D, B=v∧ D, C=w∧ D and [D] = [A]∩ [B]∩ [C]. We can choose u, v, w in any complement of [D], and if they are in [D]^⊥ then v,w=B,C. If x∈ [B]∩ [C] then x∧ A=x∧(κ B+λ C)=0, so x∈ [A]. As the spaces are distinct, κ,λ≠ 0, so B and C are also linear combinations of the other blades, and [A]∩ [B]=[A]∩ [C]=[B]∩ [C] = [A]∩ [B]∩ [C] = [D] for a unit q-blade D. Given a complement X of [D], we have A = A' ∧ D, B = B' ∧ D and C = C' ∧ D for blades A', B', C' ∈⋀^p-q X with [A'], [B'] and [C'] all disjoint. As (A'-κ B'-λ C')∧ D=0 A-κ B-λ C=0 and A'-κ B'-λ C'∈⋀ X, <Ref> Precisa pois no sabe se B'-κ A'-λ C'blade. gives A'=κ B'+λ C'. For any nonzero v'∈ [B'] and w'∈ [C'] we have A'∧ v'∧ w' = 0, and as A'∧ v'≠ 0 (by <Ref>) [A'] and [B'] are disjoint this means w'∈ [A'∧ v'] = [A']⊕ [v']. Since w'∉[A'] and v' was chosen at will in [B'], this implies [B']=1. If w=b_1+λ_1 a_1=b_2+λ_2a_2 for a_1,a_2∈ [A'], b_1,b_2∈ [B'] and λ_1,λ_2≠ 0, then λ_1a_1-λ_2a_2=b_2-b_1∈ [A']∩ [B'], so λ_1a_1=λ_2a_2. Thus A',B',C' are vectors u,v,w. 
If X=[D]^⊥ then B,C = v∧ D,w∧ D = v,w·D^2.Jiang1996: 1<p<n, real For distinct U,V,W∈ G_p^n, V ≠ U or W is enough, as the equality implies U ≠ W we have d_FS(V,W) = d_FS(V,U) + d_FS(U,W) ⇔ U = [u]⊕ R, V = [v]⊕ R and W = [w]⊕ R for R ∈ G_p-1^n and aligned u,v,w∈ R^⊥ with u=κ v+λ w for κ, λ > 0.For d_cF, d_pF, d_c2,d_p2, triangle ineq strict unless 2 subspaces coincide (results in <cit.>) (⇒) <Ref>, in (⋀^p ^n), gives U=[A], V=[B] and W=[C] for aligned blades with A=κ B + λ C for κ,λ > 0 (strict as U≠ V or W). <Ref> gives the result, for R=[D]. (⇐) Immediate. For distinct V, W ∈ G_p^n, the following are equivalent: * (V∩ W)=p-1. * There is a segment from V to W for d_FS. * There is a distinct point between V and W for d_FS. (<ref>⇔<ref>) By <Ref>. (<ref>⇔<ref>) By <Ref>. G_p^n is convex for d_FS⇔ p∈{0,1,n - 1, n}. This corrects <cit.>.10 Kobayashi1996 S. Kobayashi and K. Nomizu. Foundations of Differential Geometry, volume 2. Wiley, 1996. Kozlov2000I S. E. Kozlov. Geometry of real Grassmann manifolds. Parts I, II. J. Math. Sci., 100(3):2239–2253, 2000. Kozlov2000III S. E. Kozlov. Geometry of real Grassmann manifolds. Part III. J. Math. Sci., 100(3):2254–2268, 2000. Wong1967 Y. Wong. Differential geometry of Grassmann manifolds. Proc. Natl. Acad. Sci. USA, 57(3):589–594, 1967. Qiu2005 L. Qiu, Y. Zhang, and C. Li. Unitarily invariant metrics on the Grassmann space. SIAM J. Matrix Anal. Appl., 27(2):507–531, 2005. Edelman1999 A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 20(2):303–353, 1999. Stewart1990 G. Stewart and J. Sun. Matrix Perturbation Theory. Academic Press, 1990. Kato1995 T. Kato. Perturbation Theory for Linear Operators. Springer, 1995. Zheng2002 L. Zheng and D. N. C. Tse. Communication on the Grassmann manifold: a geometric approach to the noncoherent multiple-antenna channel. IEEE Trans. Inform. Theory, 48(2):359–383, 2002. Love2003 D. J. Love, R. W. Heath, and T. Strohmer. Grassmannian beamforming for multiple-input multiple-output wireless systems. IEEE Trans. Inform. Theory, 49(10):2735–2747, 2003. Love2005 D. J. Love and R. W. Heath. Limited feedback unitary precoding for orthogonal space-time block codes. IEEE Trans. Signal Process., 53(1):64–73, 2005. Dhillon2008 I. S. Dhillon, R. W. Heath Jr., T. Strohmer, and J. A. Tropp. Constructing packings in Grassmannian manifolds via alternating projection. Exp. Math., 17(1):9–35, 2008. Conway1996 J. H. Conway, R. H. Hardin, and N. J. A. Sloane. Packing lines, planes, etc.: Packings in Grassmannian spaces. Exp. Math., 5(2):139–159, 1996. Barg2002 A. Barg and D. Y. Nogin. Bounds on packings of spheres in the Grassmann manifold. IEEE Trans. Inform. Theory, 48(9):2450–2454, 2002. Hamm2008 J. Hamm and D. Lee. Grassmann discriminant analysis: A unifying view on subspace-based learning. In Proc. Int. Conf. Mach. Learn., pages 376–383. ACM, 2008. Huang2018 Z. Huang, J. Wu, and L. Van Gool. Building deep networks on Grassmann manifolds. In Proc. Conf. AAAI Artif. Intell., volume 32, 2018. Lerman2011 G. Lerman and T. Zhang. Robust recovery of multiple subspaces by geometric l_p minimization. Ann. Stat., 39(5):2686–2715, 2011. Lui2012 Y. M. Lui. Advances in matrix manifolds for computer vision. Image Vis. Comput., 30(6-7):380–388, 2012. Turaga2008 P. Turaga, A. Veeraraghavan, and R. Chellappa. Statistical analysis on Stiefel and Grassmann manifolds with applications in computer vision. In 2008 IEEE Conf. Comput. Vis. Pattern Recog., 2008. Vishwanathan2006 S. V. 
N. Vishwanathan, A. J. Smola, and R. Vidal. Binet-Cauchy kernels on dynamical systems and its application to the analysis of dynamic scenes. Int. J. Comput. Vis., 73(1):95–119, 2006. Ye2016 K. Ye and L. H. Lim. Schubert varieties and distances between subspaces of different dimensions. SIAM J. Matrix Anal. Appl., 37(3):1176–1197, 2016. Basri2011 R. Basri, T. Hassner, and L. Zelnik-Manor. Approximate nearest subspace search. IEEE Trans. Pattern Anal. Mach. Intell., 33(2):266–278, 2011. Draper2014 B. Draper, M. Kirby, J. Marks, T. Marrinan, and C. Peterson. A flag representation for finite collections of subspaces of mixed dimensions. Linear Algebra Appl., 451:15–32, 2014. Pereira2021 R. Pereira, X. Mestre, and D. Gregoratti. Subspace based hierarchical channel clustering in massive MIMO. In 2021 IEEE Globecom Workshops, pages 1–6, 2021. Gruber2009 P. Gruber, H. W. Gutch, and F. J. Theis. Hierarchical extraction of independent subspaces of unknown dimensions. In Int. Conf. Ind. Compon. Anal. Signal Separation, pages 259–266. Springer, 2009. Renard2018 E. Renard, K. A. Gallivan, and P. A. Absil. A Grassmannian minimum enclosing ball approach for common subspace extraction. In Int. Conf. Latent Variable Anal. Signal Separation, pages 69–78, 2018. Beattie2005 C. A. Beattie, M. Embree, and D. C. Sorensen. Convergence of polynomial restart Krylov methods for eigenvalue computations. SIAM Review, 47(3):492–515, 2005. Sorensen2002 D. C. Sorensen. Numerical methods for large eigenvalue problems. Acta Numer., 11:519–584, 2002. Wang2006 L. Wang, X. Wang, and J. Feng. Subspace distance analysis with application to adaptive Bayesian algorithm for face recognition. Pattern Recognit., 39(3):456–464, 2006. Sun_2007 X. Sun, L. Wang, and J. Feng. Further results on the subspace distance. Pattern Recognit., 40(1):328–329, 2007. Wilson1931 W. A. Wilson. On quasi-metric spaces. Amer. J. Math., 53(3):675, 1931. Busemann1944 H. Busemann. Local metric geometry. Trans. Amer. Math. Soc., 56:200–274, 1944. Zaustinsky1959 E. M. Zaustinsky. Spaces with non-symmetric distance, volume 34. American Mathematical Soc., 1959. Albert1941 G. E. Albert. A note on quasi-metric spaces. Bull. Amer. Math. Soc., 47(6):479–482, 1941. Mennucci2013 A. C. G. Mennucci. On asymmetric distances. Anal. Geom. Metr. Spaces, 1(1):200–231, 2013. GoubaultLarrecq2013 J. Goubault-Larrecq. Non-Hausdorff Topology and Domain Theory. Cambridge University Press, 2013. Kunzi2001 H. P. Künzi. Nonsymmetric distances and their associated topologies: about the origins of basic ideas in the area of asymmetric topology. In C. E. Aull and R. Lowen, editors, Handbook of the history of general topology, volume 3, pages 853–968. Springer, 2001. Kunzi2009 H. P. Künzi. An introduction to quasi-uniform spaces. Contemp. Math., 486:239–304, 2009. Bao2012 D. Bao, S. S. Chern, and Z. Shen. Introduction to Riemann-Finsler Geometry. Springer, 2012. Gutierres2012 G. Gutierres and D. Hofmann. Approaching metric domains. Appl. Categ. Structures, 21(6):617–650, 2012. Lawvere1973 F. W. Lawvere. Metric spaces, generalized logic, and closed categories. Rend. Sem. Mat. Fis. Milano, 43(1):135–166, 1973. Reissued in Reprints in Theory and Applications of Categories 1 (2002), 1–37. Mayor2010 G. Mayor and O. Valero. Aggregation of asymmetric distances in computer science. Inform. Sci., 180(6):803–812, 2010. Romaguera1999 S. Romaguera and M. Schellekens. Quasi-metric properties of complexity spaces. Topol. Appl., 98(1-3):311–322, 1999. Seda2008 A. K. Seda and P. Hitzler. 
Generalized distance functions in the theory of computation. Comput. J., 53(4):443–464, 2008. Fang2022 Y. Fang. Asymmetrically weighted graphs, asymmetric metrics and large scale geometry. Geom. Dedicata, 217(2), 2022. Stojmirovic2004 A. Stojmirović. Quasi-metric spaces with measure. Topol. Proc., 28(2):655–671, 2004. Kopperman1995 R. Kopperman. Asymmetry and duality in topology. Topol. Appl., 66(1):1–39, 1995. Kelly1963 J. C. Kelly. Bitopological spaces. Proc. Lond. Math. Soc., s3-13(1):71–89, 1963. Chenchiah2009 I. V. Chenchiah, M. O. Rieger, and J. Zimmer. Gradient flows in asymmetric metric spaces. Nonlinear Anal. Theory Methods Appl., 71(11):5820–5834, 2009. Collins2007 J. Collins and J. Zimmer. An asymmetric Arzelà–Ascoli theorem. Topol. Appl., 154(11):2312–2322, 2007. Cobzas2012 S. Cobzas. Functional Analysis in Asymmetric Normed Spaces. Springer, 2012. GarciaRaffi2003 L. M. García-Raffi, S. Romaguera, and E. A. Sanchez-Pérez. The dual space of an asymmetric normed linear space. Quaest. Math., 26(1):83–96, 2003. Romaguera2015 S. Romaguera and P. Tirado. A characterization of Smyth complete quasi-metric spaces via Caristi's fixed point theorem. Fixed Point Theory Appl., 2015(1):1–13, 2015. Mennucci2014 A. C. G. Mennucci. Geodesics in asymmetric metric spaces. Anal. Geom. Metr. Spaces, 2(1):115–153, 2014. Bengtsson2017 I. Bengtsson and K. Życzkowski. Geometry of quantum states: an introduction to quantum entanglement. Cambridge University Press, 2017. Mandolesi_Grassmann A. L. G. Mandolesi. Grassmann angles between real or complex subspaces. arXiv:1910.00147, 2019. Mandolesi_Products A. L. G. Mandolesi. Blade products and angles between subspaces. Adv. Appl. Clifford Algebras, 31(69), 2021. Rosen2019 A. Rosén. Geometric Multivector Analysis. Springer-Verlag, 2019. Mandolesi_Trigonometry A. L. G. Mandolesi. Asymmetric trigonometry of subspaces. Manuscript in preparation. Mandolesi_Pythagorean A. L. G. Mandolesi. Projection factors and generalized real and complex Pythagorean theorems. Adv. Appl. Clifford Algebras, 30(43), 2020. Mandolesi_Born A. L. G. Mandolesi. Quantum fractionalism: the Born rule as a consequence of the complex Pythagorean theorem. Phys. Lett. A, 384(28):126725, 2020. Bjorck1973 A. Bjorck and G. Golub. Numerical methods for computing angles between linear subspaces. Math. Comp., 27(123):579, 1973. Bajnok2020 B. Bajnok. An Invitation to Abstract Mathematics. Springer, 2nd edition, 2020. Wolf2011 J. A. Wolf. Spaces of constant curvature. AMS Chelsea Pub., 2011. Jiang1996 S. Jiang. Angles between Euclidean subspaces. Geom. Dedicata, 63:113–121, 1996. Schlosshauer2007 M. A. Schlosshauer. Decoherence and the quantum-to-classical transition. Springer, 2007. Mandolesi_Contractions A. L. G. Mandolesi. A review of multivector contractions, part I. arXiv:2205.07608 [math.GM], 2022. Asimov1985 D. Asimov. The grand tour: A tool for viewing multidimensional data. SIAM J. Sci. Stat. Comp., 6(1):128–143, 1985. Weinstein2000 A. Weinstein. Almost invariant submanifolds for compact group actions. J. Eur. Math. Soc., 2(1):53–86, 2000. Martin_2000 R. J. Martin. A metric for ARMA processes. IEEE Trans. Signal Process., 48(4):1164–1170, 2000. De_Cock_2002 K. De Cock and B. De Moor. Subspace angles between ARMA models. Syst. Control Lett., 46(4):265–270, 2002. | http://arxiv.org/abs/2310.17865v2 | {
"authors": [
"André L. G. Mandolesi"
],
"categories": [
"math.MG",
"14M15 (Primary) 15A75, 51K99 (Secondary)"
],
"primary_category": "math.MG",
"published": "20231027030219",
"title": "Asymmetric Geometry of Total Grassmannians"
} |
Exit Dynamics of a Square Cylinder
I. Ashraf et al.

The financial support of the Belgian Fund for Scientific Research under research project WOLFLOW (F.R.S.-FNRS, PDR T.0021.18) is gratefully acknowledged. Part of the experimental setup was financed by Fonds Spéciaux from ULiège. SD is F.R.S–FNRS Senior Research Associate.

Intesaaf Ashraf (ORCID 0000-0002-1405-3803, corresponding author, [email protected]) and Stephane Dorbolo
CESAM–GRASP, Physics Department B5, University of Liege, Liege 4000, Belgium

In this paper, we experimentally investigate the exit dynamics of a square cylinder that is initially fully immersed in a water tank and that crosses the interface perpendicularly to its symmetry axis. The cylinder moves upwards at a constant velocity in the vertical direction until it exits the water into the air. The experiments were performed at different traveling speeds. Images of the cylinder crossing the interface were taken using a high-speed camera and were used to track the interface deformation as the cylinder approaches and crosses the interface. On top of these measurements, the force required to move the cylinder was simultaneously measured in order to estimate the drag force during the travel in the tank, the force of entrainment, and the force of crossing over the interface. Particle image velocimetry was performed to visualize the flow. Correlations between the different measurements are inspected.

Keywords: Exit dynamics, Drainage, Entrainment

January 14, 2024
====================
§ INTRODUCTION
The interaction of a rigid body with the free water surface, when it enters or exits a fluid, is important in numerous domains, including ocean and coastal engineering, aviation industries, naval architecture, and dip-coating research. The complexity of the situation makes physical interpretation and mathematical modeling a difficult task that involves fluid-structure interaction, interfacial physics, and fluid mechanics. Many studies have addressed the physics of the entry of an object into a fluid <cit.> and of the exit of an object from a fluid <cit.>. Compared to the extensive investigation of the water entry problem, the water exit has been far less studied. To expand our understanding of the water exit phenomena, validate numerical simulations, and contribute to the development of new analytical models, new experimental research is required. <cit.> provided an analytical solution of the vertical motion of a cylinder placed in a uniform stream. <cit.> took an experimental approach to the exit dynamics of a neutrally buoyant cylinder originally positioned on the bottom of a water tank and driven upward while exerting a constant force. According to their findings, the upward movement induced an elevation of the free water surface, resulting in a bump-like formation. This bumpy pattern eventually breaks down in a chaotic way, which is referred to as “waterfall breaking”. <cit.> investigated the exit of a circular cylinder through a computational method using potential flow theory. It was found that when approaching a free surface at low speed, the fluid behaves as if the cylinder is approaching a wall. At high speeds, however, the cylinder acts as if it were traveling through an infinite fluid.
The surface wave motion was observed at intermediate speeds. Among the other research on the exit of an object from a fluid, <cit.> employed a two-dimensional numerical simulation to study the forced motion of totally and partially submerged horizontal circular cylinders. <cit.> addressed the upward and downward movements of various axisymmetric cylindrical bodies at different Reynolds numbers and studied the surface surging effect. <cit.> used the Volume of Fluid (VOF) model to simulate in 2D and 3D the water exit of a circular cylinder. They reported the formation of waves, wave motion in the horizontal direction, and air entrapment in the oblique exit. In the literature, we can also find <cit.> and <cit.>, whose work implemented an improved VOF method to study the exit of a cylinder from water. <cit.> studied the water exit of a cylinder in which the movement was performed in the longitudinal direction of the cylinder. In this work, they observed cavities at both ends of the cylinder, and slapping of the water due to the collapse of those cavities. They performed experiments for both the accelerating and decelerating motion of the cylinder. <cit.> simulated the exit dynamics of a sphere moving at a constant velocity using the lattice Boltzmann method (LBM). They reported that the elevation of the water surface is strongly dependent on the Froude number below 4.12. For Froude numbers (Fr=U^2/ga, where U is the vertical speed of the object, a is the characteristic length of the object and g is the acceleration due to gravity) between 4.12 and 8.24, however, the free surface elevation height is only slightly affected. The waterfall breaking becomes more intense as the Froude number increases. The Reynolds number (Re=Ua/ν, where U is the speed of the object, a is the characteristic length of the object and ν is the kinematic viscosity of the fluid) dominates the flow when the sphere moves beneath the water's surface. Through the simulation of the water exit of a fully submerged spheroid, <cit.> reported that the moment at which the free surface breaks up from the body can be delayed by making the object blunter. <cit.> conducted experiments on the behavior of a buoyant sphere that rises to the surface under the influence of buoyancy and eventually pops out of the water. They came to the conclusion that the Reynolds number has a significant impact on the creation and shedding of vortices. The sphere's trajectory can be either a straight line or an oscillating path, depending on the release depth. <cit.> and <cit.> investigated the free surface deformation and its dependence on the velocity (or, equivalently, the Froude number) for both fixed and free spheres ascending towards the free water surface. They discovered that the height of the water elevation increases with the body velocity for a fully immersed sphere. In the case of a partially submerged sphere, the water detached from the sphere's surface in the form of a water column at the end of the exit stage. When we look at all of the studies in the literature, we can see that they are mostly concerned with the waterfall breaking, pop-up heights and trajectories. Less is known about the forces during the crossing and about the drainage, both of which are important in engineering applications.
The shape is certainly a crucial parameter that influences the exit dynamics of an object, yet almost all the objects in these investigations were spherical or cylindrical. In the present study, we aim to address this problem by investigating the exit dynamics of a square cylinder pulled out of the water at a constant speed. As the square cylinder was pulled out of the water, the interface deformation was measured in order to inspect the evolution of the thickness of the water layer on top of the cylinder. The force measurements were synchronized with the images taken by two high-speed cameras. Two dimensionless numbers are used to characterize the experiments: the Froude number and the Reynolds number. The Froude number is defined as Fr=U^2/ga, where U is the vertical speed of the cylinder and a is the side of the cylinder. The Reynolds number is defined as Re=Ua/ν, where ν is the kinematic viscosity of the water. We focus on the crossing of the interface. Indeed, several phenomena have to be taken into account to model the quantity of liquid entrained out of the bath, such as the speed and the shape of the object on one side, and the surface tension and the viscosity of the fluid on the other side. Here we focus on the shape of the object.
§ EXPERIMENTAL SETUP
The experimental setup consists of a lifting system that pulls the test object, a glass water tank (length = 78.5 cm x width = 27.5 cm x depth = 72.5 cm), a force sensor, and a test object. The lifting system uses a rack and pinion (Fig. <ref>); the moving part of the rack was attached to a frame made of carbon fiber tubes, which held the cylinder horizontally, parallel to the water tank's side and front walls. A right square cylinder having square bases of side a = 40 mm and rectangular faces of height h = 220 mm was 3D printed and used for the experiments. The cylinder was placed horizontally so that its square bases were always perpendicular to the water surface. It was attached to the mobile frame with a rod bolted to a point of its vertical symmetry plane. The coordinate system (xyz) is shown in Fig. <ref>. The x-axis is oriented along the length of the tank, and the z-axis is aligned along its width. The origin is set at the free water surface, which is the horizontal reference plane (xz). The pulling axis, aligned with the vertical direction, is defined as the y-axis. The coordinates are normalized by dividing them by a, the side of the cylinder's square bases. The experiments were performed at a constant terminal velocity ranging from 0.1 to 1 m/s. The acceleration of the motion was set to 4 m/s^2. This means that the test object must travel a certain distance from rest before reaching its constant velocity. For example, to reach a constant speed of 1 m/s, the test object must travel 120 mm. The same applies to the deceleration phase. Two high-speed M-310 Phantom cameras were used to acquire images at a rate of 2000 Hz. Their fields of view were perpendicular to each other to allow complete visualization of the crossing. The cylinder position was tracked using Matlab code developed in-house. To eliminate parallax and allow tracking of the interface deformation, the air-water interface was placed in the middle of the image. A strain gauge SCAIME K25 (20 N) connected between the frame and the rack measures the forces exerted on the cylinder during its vertical movement at the same time as the imaging. A datalogger (Picolog) was used to record the data at a frequency of 1 kHz.
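The parameters listed above fix the ranges of the dimensionless numbers defined in the introduction. The short sketch below is illustrative only (it is not part of the original text): it assumes water at room temperature, with a kinematic viscosity of about 1.0x10^-6 m^2/s, and uses the cylinder side, speed range and acceleration quoted above; the last column is a simple kinematic estimate of the distance needed to reach the constant speed.

import numpy as np

g, nu = 9.81, 1.0e-6            # gravity (m/s^2); kinematic viscosity of water (m^2/s), assumed
a = 0.040                       # side of the square base (m)
acc = 4.0                       # imposed acceleration (m/s^2)
U = np.array([0.1, 0.5, 1.0])   # pulling speeds (m/s)

Fr = U**2 / (g * a)             # Froude number, Fr = U^2 / (g a)
Re = U * a / nu                 # Reynolds number, Re = U a / nu
s_acc = U**2 / (2 * acc)        # distance travelled while accelerating to U

for u, fr, re, s in zip(U, Fr, Re, s_acc):
    print(f"U = {u:.1f} m/s : Fr = {fr:.2f}, Re = {re:.0f}, acceleration distance = {1e3 * s:.0f} mm")

At U = 1 m/s this gives a Froude number of about 2.55, a Reynolds number of about 4x10^4 and an acceleration distance of about 125 mm, of the same order as the 120 mm quoted above and consistent with the Froude number range reported in the results.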
The Picolog also recorded the camera trigger signal, allowing the force measurements to be synchronized with the images. This synchronization was used to determine the precise moment at which the object crosses the interface and to calibrate the timeline of the force measurements. The Particle Image Velocimetry (PIV) technique was adopted for 2D planar flow quantification. An illumination laser with a wavelength of 532 nm and a peak power of 4 W was used together with tracer particles having a size of ≃ 20 μm. The PIV images were post-processed using the open-source software PIVLab <cit.>. § RESULT AND DISCUSSION The cylinder was first placed in the tank at a depth d = 8.15a below the free water surface, and the system was left to rest for 20 minutes. After ensuring that all surface waves were damped, the cylinder was lifted upward at a constant speed. As the cylinder moves upward, it deforms the free surface of the water. The maximum deformation of the surface is achieved when the cylinder reaches the water surface. The maximum deformation h^* (non-dimensionalized by a) can be measured to characterize the water entrainment. It is defined as the initial film thickness, or the elevation of the liquid level at the moment when the cylinder starts to cross the interface, namely when y=-1/2. It is determined by measuring the height of the water-air interface from the reference plane, as shown in Fig. <ref>a. The maximum deformation h^* is related to the entrained volume of water above the free surface since it represents the height of the liquid above the cylinder when the top of the cylinder reaches the initial level of the free surface. In the first set of experiments, we investigated the effect of the cylinder's velocity or Froude number on h^*. Fig. <ref>b shows images of the maximum deformation at the moment when the position of the cylinder is y_c = - 1/2, for different Froude numbers. In addition, Figure 2(c) shows the values of h^* calculated from eight experiments for eight different Froude numbers. Fig. <ref>b and Fig. <ref>c show that the entrainment of water increases as the Froude number increases. At low velocities, h^* is very low. Then, it increases with the Froude number and reaches a value of about 0.6 at Froude number Fr = 2.54, i.e., 0.6 times the side of the cylinder. Fig. <ref>b shows the evolution of the surface elevation h, defined as the distance from the center of the square cylinder to the top of the free water surface (Fig. <ref>a), as a function of y_c, the position of the square cylinder. All values are non-dimensionalized by a. The surface elevation h decreases exponentially with y_c for small lengths at low Froude numbers. However, at a high Froude number, the exponential decay is faster. This means that the drainage is exponential over a certain travel distance of the square cylinder. The curve has a slope of -0.34 for Froude numbers greater than 0.92. Further, the wake behind the square cylinder, when the top of the cylinder reaches the interface, is investigated using PIV. The wake consists of vortex structures attached to the sides of the cylinder. No vortex shedding is observed in the case of a square cylinder. Also, the flow separation begins at the leading edge. The forces acting on the square cylinder during its upward movement and interaction with the free water surface were studied using a strain gauge sensor.
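The exponential decay of h with y_c can be quantified by a log-linear fit, which is presumably how the slope of -0.34 quoted above was extracted (the exact fitting procedure is not described in the text). The short sketch below illustrates one way to do it; the y_c and h arrays are hypothetical placeholders for the measured, non-dimensionalized data.

```python
import numpy as np

# Hypothetical placeholder data: cylinder positions y_c and surface elevations h,
# both non-dimensionalized by the side a (to be replaced by the measured values).
y_c = np.array([-0.4, 0.0, 0.4, 0.8, 1.2, 1.6])
h   = np.array([0.60, 0.52, 0.46, 0.40, 0.35, 0.30])

# If the drainage is exponential, log(h) is linear in y_c and the fitted slope
# is the decay rate (about -0.34 at the higher Froude numbers in the text).
rate, log_h0 = np.polyfit(y_c, np.log(h), 1)
print(f"decay rate = {rate:.3f}, i.e. h ~ {np.exp(log_h0):.2f} * exp({rate:.3f} * y_c)")
```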
As it has been carefully checked that the speed is constant during the cylinder motion, we can find the position of the cylinder by multiplying the time by the speed. According to this procedure, what happens before the acceleration period and after the deceleration period cannot be properly represented. To understand the registered force signal and examine the distinct phases of the cylinder's motion, the experiment is first conducted at low speed, for Fr=0.10. Fig. <ref> shows the registered force variation as a function of depth. The cylinder motion is categorized into five zones or stages, as shown in the graph: (a) The acceleration phase, highlighted in orange. The cylinder moves from rest to acquire the constant velocity previously set for each experiment. At the end of this stage, the square cylinder starts moving upward at a constant speed, after traveling a short distance relative to the initial depth. (b) The constant drag phase, highlighted in blue. The cylinder moves vertically in water at a constant speed and experiences the drag force in this stage. The drag force F_d acting on the square cylinder during this upward movement is measured during this phase; the net drag force is obtained by taking the average force acting on the cylinder during this stage. (c) The crossing phase, highlighted in yellow. The cylinder speed is still constant at this stage. The cylinder starts crossing the interface when its top first contacts the free water surface, at y_c=-0.5, and the crossing ends when the bottom of the cylinder leaves the water surface, at y_c=0.5. The crossing-over force F_c is measured when the square cylinder comes out of the water and its bottom is leaving the water surface; it is obtained by taking the difference of the force at y_c = -0.5 and 0.5, respectively. (d) The cylinder is completely out of the water and moving in the air at a constant speed. The entrainment force F_e is calculated during this stage. It is equal to the force acting on the cylinder when it is completely out of the water minus the weight of the cylinder when it stops moving. (e) The deceleration phase: the cylinder comes to rest after a brief deceleration period. The evolution of the force acting on the cylinder as a function of y_c at different Froude numbers is shown in Fig. <ref>. The gray horizontal line shows the region in which the drag force F_d is measured. It should be noted that at low Froude numbers, the force sensor readings are almost as flat as the horizontal line, whereas at high Froude numbers they fluctuate around this horizontal line. The variation of the net drag force F_d, the crossing-over force F_c, and the entrainment force F_e as a function of the Froude number is presented in Fig. <ref>. The drag force F_d is found to increase linearly with the Froude number (the slope is 2.65). This result is expected, as the drag force is proportional to the speed of the cylinder in a laminar flow regime, which is the case here. The net entrained force F_e is minimum at a low Froude number and increases with the Froude number. The plot of F_e versus the Froude number presents two linear regions: the slope is equal to 2.195 for Fr < 0.91 and to 0.7357 for Fr > 0.91. As the cylinder exits at a higher Froude number, it carries more momentum. This increased momentum creates a stronger suction effect, which, in turn, draws more surrounding fluid into the wake of the cylinder.
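The three force quantities defined above can be extracted from the synchronized force trace with a few lines of post-processing. The sketch below illustrates the procedure on placeholder arrays; the phase boundaries, the weight value and the sign convention for F_c are hypothetical choices, and the actual analysis may differ in detail.

```python
import numpy as np

# Placeholder synchronized signals: cylinder position y_c (non-dimensionalized by a)
# and force F from the strain gauge, resampled on the same time base (hypothetical).
y_c = np.linspace(-8.15, 3.0, 2000)
F   = np.zeros_like(y_c)
weight = 1.0          # weight of the dry cylinder [N] (placeholder value)

# (b) Net drag force: average force over the constant-speed phase fully under water.
drag_phase = (y_c > -7.0) & (y_c < -1.0)      # hypothetical window inside stage (b)
F_d = F[drag_phase].mean()

# (c) Crossing-over force: difference of the force between y_c = -0.5 (top reaches
#     the surface) and y_c = 0.5 (bottom leaves the surface).
F_c = F[np.argmin(np.abs(y_c + 0.5))] - F[np.argmin(np.abs(y_c - 0.5))]

# (d) Entrainment force: force once the cylinder is fully out of the water,
#     minus the weight of the cylinder measured at rest.
out_phase = y_c > 1.0
F_e = F[out_phase].mean() - weight

print(f"F_d = {F_d:.3f} N, F_c = {F_c:.3f} N, F_e = {F_e:.3f} N")
```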
Also, a higher Froude number indicates a regime where inertial forces dominate over gravity. As a result, at a higher Froude number more liquid is sucked in and carried away by the cylinder. The behavior of the crossing force F_c is also illustrated in Fig. <ref>. Unlike F_d and F_e, F_c decreases with an increase in the Froude number. The crossing force is the net force acting on the cylinder resulting from buoyancy, drag, weight, and the vertical component of surface tension. All the forces except the drag force are independent of the Froude number. The drag force increases with an increase in the Froude number. Therefore, the net crossing-over force decreases with an increase in the Froude number. § CONCLUSION As soon as the cylinder moves upward at a constant speed, the surface starts getting elevated, and the maximum elevation is achieved when the top of the cylinder touches the water surface. However, this maximum surface elevation is a function of the cylinder Froude number. The maximum surface elevation h^* has a logarithmic relation with the Froude number. In another stage of our experimentation, the PIV measurements show that as soon as the cylinder moves upward, clockwise and anti-clockwise vortices form and remain attached to its surface. However, no vortex shedding, such as a von Kármán street, is observed in the wake. The flow separation starts from the leading edge of the cylinder. We finally concluded our experimental study by carrying out force measurements synchronized with the upward movement of the cylinder. We characterized different force regimes in the exit dynamics of a square cylinder. From the force measurement, the net drag force, entrained force, and cross-over force are estimated. The net drag force and entrained force increase with the increase in cylinder upward velocity. However, the net cross-over force is found to decrease with an increase in velocity. The present study only investigated the exit dynamics of a square cylinder. A detailed study of different shapes could be an interesting avenue for future research. | http://arxiv.org/abs/2310.18267v1 | {
"authors": [
"Intesaaf Ashraf",
"Stephane Dorbolo"
],
"categories": [
"physics.flu-dyn"
],
"primary_category": "physics.flu-dyn",
"published": "20231027165606",
"title": "Exit Dynamics of a Square Cylinder"
} |
We prove that the lowest free energy of a classical interacting system at temperature T with a prescribed density profile ρ(x) can be approximated by the local free energy ∫ f_T(ρ(x))x, provided that ρ varies slowly over sufficiently large length scales. A quantitative error on the difference is provided in terms of the gradient of the density. Here f_T is the free energy per unit volume of an infinite homogeneous gas of the corresponding uniform density. The proof uses quantitative Ruelle bounds (estimates on the local number of particles in a large system), which are derived in an appendix. 2023, the authors. Fast and simple unrooted dynamic forests Benjamin Aram BerendsohnFreie Universität Berlin, Germany. Email: . Supported by DFG Grant . ================================================================================================ Classical Density Functional Theory is used to describe finite and infinite classical particle systems, which have a non-uniform density profile. A typical practical situation is the interface between liquid-gas, liquid-liquid (in fluid mixtures), crystal-liquid and crystal-gas phases at bulk coexistence <cit.>. The density is then essentially constant within each phase and varies continuously in a neighborhood of the interface.The quantum theory of DFT was developed since the 1960s with the famous works by Hohenberg-Kohn-Sham <cit.>. In this theory one has a ladder of approximate models of increasing precision <cit.>. The situation is somewhat similar in classical DFT, with the additional difficulty that the interaction between the particles is only known empirically.The simplest nonlinear functional of the density is obtained by assuming that the gas is locally infinite and uniform, which is called the Local Density Approximation (LDA). In this theory, the lowest free energy G_T[ρ] at density ρ(x) and temperature T is approximated by the local functionalG_T[ρ]≈∫_^df_T(ρ(x)) ,where f_T(ρ_0) is by definition the free energy per unit volume of an infinite homogeneous gas of density ρ_0. A common strategy to gain in precision is to add gradient corrections to the LDA (<ref>), in order to account for the non-uniformity of ρ(x). Constructing efficient functionals of this type turned out to be easier for the liquid-gas transition <cit.> than for the solid-liquid transition <cit.>.In a previous paper <cit.>, we have investigated the representability of any density ρ(x) for a given interaction and proved local upper bounds on G_T[ρ]. The most difficult situation is when the interaction potential is very repulsive (non integrable) at the origin, in which case some correlations have to be inserted in the trial state to ensure that the particles never get too close to each other. Using tools from functional analysis (in particular Besicovitch covering techniques), we could construct such states and evaluate their free energy, providing thereby upper bounds on the free energy G_T[ρ] at fixed density.In this paper we continue our investigation of the fixed density problem and justify the Local Density Approximation (<ref>), in a regime where the density varies very slowly over long length scales. To be more precise, our goal is to prove a quantitative estimate in the form| G_T[ρ]-∫_^df_T(ρ(x)) |≤Err[ρ]valid for all densities, with the error Err[ρ] involving gradients of the density so that it becomes negligible compared to the LDA whenever ρ varies slowly. 
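As a concrete illustration of the local functional appearing in the approximation above, the following sketch evaluates the integral of f_T(ρ(x)) on a one-dimensional grid. Since f_T is only defined through a thermodynamic limit and has no closed form for a general interaction, the ideal-gas expression f_T(ρ) = Tρ(log ρ - 1) is used here as a stand-in; this choice, as well as the Gaussian density profile, is purely illustrative.

```python
import numpy as np

T = 1.0

def f_T(rho):
    # Stand-in for the free energy per unit volume of the homogeneous gas.
    # Here: ideal (non-interacting) gas, f_T(rho) = T * rho * (log(rho) - 1).
    # For an interacting system f_T is only defined through a thermodynamic limit.
    rho = np.asarray(rho, dtype=float)
    out = np.zeros_like(rho)
    mask = rho > 0
    out[mask] = T * rho[mask] * (np.log(rho[mask]) - 1.0)
    return out

# Slowly varying illustrative density profile on a one-dimensional grid (d = 1).
x = np.linspace(-50.0, 50.0, 4001)
dx = x[1] - x[0]
rho = 0.5 * np.exp(-(x / 20.0) ** 2)

# Local Density Approximation of the free energy at fixed density rho.
G_lda = np.sum(f_T(rho)) * dx
print("LDA free energy:", G_lda)
```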
Although we expect the exact same result for the canonical free energy (denoted by F_T[ρ] in this article), our results will mainly concern the easier grand-canonical free energy G_T[ρ]. An estimate in the form (<ref>) has recently been proved in the classical and quantum Coulomb cases in <cit.>. The Coulomb potential is long range and the interaction between subsystems in a large assembly of particles is very hard to control, unless good screening properties have been shown. The works <cit.> relied on the Graf-Schenker inequality <cit.>, following ideas from <cit.>. This inequality allows one to remove such interactions in a lower bound, when the system is split using a tiling of space made of simplices. This is rather particular to the Coulomb case. Similar tools exist for general Riesz potentials |x|^-s <cit.> as well as other positive-definite interactions <cit.>, but not all interactions of physical interest can be handled by such a method. In this work we only consider short-range interaction potentials w and do not have to face the problem of screening. Our goal is, however, to deal with arbitrary (stable) potentials decaying fast enough at infinity, without any assumption on the sign of the Fourier transform of w, nor on that of w itself. The interaction can be partially attractive at medium or long distances. To control the interaction between subsystems we rely on Ruelle bounds <cit.>, which provide estimates on the local moments of the number of particles in a large system. In a slowly-varying external potential, such a strategy has already been used in <cit.>. The main novelties of the present work are that we work at fixed density and provide quantitative bounds, that is, an explicit error functional Err[ρ] in (<ref>). This requires making Ruelle bounds quantitative (as is discussed in Appendix <ref>) and proving convergence rates for the thermodynamic limit. The paper is organized as follows. In the next section we properly define the canonical and grand-canonical free energies F_T[ρ] and G_T[ρ] for a system at a given density profile ρ(x). We also recall the definition of the free energy f_T(ρ_0) of an infinite gas of uniform density ρ_0 and state our main result about the LDA, Theorem <ref>. The rest of the paper is devoted to the proof of this theorem. In Section <ref> we gather useful a priori estimates from <cit.>, and others whose proofs are given much later in Section <ref> for convenience. In Section <ref> we prove that the thermodynamic limit at fixed uniform density is the same as the usual thermodynamic limit without any pointwise constraint on ρ. In Section <ref> we study the speed of convergence for the grand-canonical problem, which depends on the decay of the interaction potential at infinity. In Section <ref> we are finally able to provide the full proof of Theorem <ref>. Appendix <ref> contains the proof of quantitative Ruelle bounds in the spirit of <cit.>. We mainly follow the arguments of <cit.> but provide all the details for the convenience of the reader and give more explicit estimates than in <cit.>. Acknowledgement. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement MDFT No 725528 of ML). MJ also received financial support from the Ministry of Education, Youth and Sport of the Czech Republic under the Grant No. RVO 14000.
§ MAIN RESULT §.§ Free energies at given density In order to state our main result, we need to first give the definition of the free energy at fixed density. Although most of our results will concern the grand-canonical case, we also partially look at the canonical case.§.§.§ Interaction potential wFor convenience, we work in ^d with a general dimension d≥1. The physical cases of interest are of course d∈{1,2,3} but the proofs are the same for all d. We consider systems of indistinguishable classical particlesinteracting through a short-range pair potential w. Throughout the paper, we work with an interaction satisfying the following properties.[on the short-range potential w] Let w :^d→∪{+} be an even lower semi-continuous function satisfying the following properties, for some constant κ>0:(1) w is superstable, that is, can be written in the formw=κ^-1(|x|≤κ^-1)+w_2with w_2 stable: ∑_1 ≤ j < k ≤ N w_2 x_j - x_k≥ - κ Nfor all N ∈ and x_1, …, x_N ∈^d;(2) w is upper and lower regular, that is, (|x|<r_0)/κ(r_0/|x|)^α - κ/1 + x^s≤ w(x) ≤κ(|x|<r_0)(r_0/|x|)^α+ κ/1 + x^s for some r_0 ≥ 0, 0 ≤α < ∞ and s>d. In words, we assume that w is repulsive close to the origin and behaves like |x|^-α, for some 0≤α<. Even if some of our results are also valid when α=+ (our convention is then that (r_0/|x|)^α=(+)(|x|<r_0)), we assume α< for simplicity throughout the paper and only make some remarks about the hard core case α=+. At infinity, we assume that w decays at least polynomially and is integrable. The definition differs sligthly from the one we used in <cit.>, where no lower bound was required since we only studied upper bounds on the free energy. Assumption <ref> covers most cases of physical interest such as the Lennard-Jones potential w(x)=a|x|^-12-b|x|^-6 for a>0 or the Yukawa potential w(x)=e^-m|x|/|x|, for instance.§.§.§ Canonical free energy In this subsection we define the canonical free energy F_T[ρ] at given density ρ. Suppose that we have N particles in ^d, distributed according to some Borel probability measureon ^dN. Since the particles are indistinguishable, we demand that the measureis symmetric, that is,(A_σ1×⋯× A_σN) = (A_1×⋯× A_N)for any permutation σ of 1, …, N, and any Borel sets A_1,...,A_N⊂^d. The one-body density of such a symmetric probabilityequals N times the first marginal of , that is,ρ_ = N∫_^d N-1𝕀,x_2, …, x_N,where the integration is over x_2,…,x_N. Equivalently, ρ_(A)=N(A×(^d)^N-1) for every Borel set A. Note the normalization convention ρ_(^d) = N. For a non-symmetric probabilitywe define ρ_ as the sum of the N marginals.The pairwise average interaction energy of the particles is given by_N = ∫_^dN∑_1 ≤ j < k ≤ N w x_j - x_k𝕀x_1, …, x_N.It could in principle be equal to +, but it always satisfies _N≥ -κ N due to the stability condition on w in Assumption <ref>. When considering systems at positive temperature T>0, it is necessary to also include the entropy of the system,_N:= - ∫_^dNxlog(N!x) 𝕀 x.Ifis not absolutely continuous with respect to the Lebesgue measure on ^dN, we use the convention that _N =-. The total free energy of the system in the stateat temperature T ≥ 0 equalsℱ_T :=_N - T _N = ∫_^dN∑_j < k w x_j - x_k𝕀x + T ∫_^dNlogN!. Throughout the paper, we will mostly consider systems with a given one-body density ρ, which is absolutely continuous with respect to the Lebesgue measure. At T>0 we also assume that ∫_^dρ|logρ|<. 
This allows us to consider the minimal energy of N-particle classical systems with density ρ, given byF_T [ρ] := inf_ρ_ = ρℱ_Twhere the infimum is taken over N-particle stateson ^dN with one-particle density ρ_ equal to ρ. The number F_T[ρ] turns out to be finite for all ρ∈ L^1(^d,_+) of integer mass such that T∫_^dρ|logρ|<, as we have proved in <cit.> using results from optimal transport theory in <cit.>.§.§.§ Grand-canonical free energyIn the grand-canonical ensemble, the exact particle number of the system is not fixed. A stateis a family of symmetric n-particle positive measures _n on (^d)^n, so that∑_n ≥ 0_n((^d)^n)=1.Here _0 is just a number, interpreted as the probability that there is no particle at all in the system. After replacing _n by _n/_n(^dn), we can equivalently think thatis a convex combination of canonical states. The entropy ofis defined by:= ∑_n ≥ 0_n _n = -_0log(_0)- ∑_n ≥ 1∫_^dn_n logn!_n ,and the single particle density of the stateisρ_ = ∑_n ≥ 1ρ__n=∑_n≥1n∫_(^d)^n_n(,x_2,…,x_n).Its integral gives the average number of particles in the system:∫_^dρ_(x)x=∑_n ≥ 1 n _n(^dn)=:().The grand-canonical free energy of the stateat temperature T ≥ 0is𝒢_T :=- T ,wheredenotes the interaction energy in the state ,:= ∑_n ≥ 2_n_n = ∑_n ≥ 2∫_^dn∑_j<k^n w x_j - x_k𝕀_n x_1,...,x_N.From the stability of w we have_n_n≥ -κ n _n(^dn)so that, after summing over n,≥ -κ∫_^dρ_(x) .When keeping the one-particle density ρ = ρ_∈ L^1 ^d fixed, we denote the minimal grand-canonical free energy byG_T [ρ] := inf_ρ_ = ρ𝒢_T .When ∫_^dρ is an integer, it is clear that F_T[ρ]≥ G_T[ρ]. The functional ρ↦ G_T[ρ] was studied in <cit.>. It is a kind of weak–∗ convex envelope of ρ↦ F_T[ρ]. §.§ The thermodynamic limitHere we introduce the free energy per unit volume f_T, which is obtained when the system is placed at equilibrium in a large domain Ω_N⊂^d which then grows so as to cover the whole space, without any pointwise constraint on the density ρ(x).First we discuss how the domain is allowed to grow. Let Ω_N ⊆^d be a sequence of bounded, connected domains with Ω_N→∞. The sequence Ω_N is said to have a uniformly regular boundary <cit.> if there exists a t_0 > 0 such that [][]x ∈^d |x,∂Ω_N≤Ω_N^1/d t≤Ω_Nt/t_0, for all t ∈0,t_0.This assumption can be weakened in many ways but it already covers any rescaled sequence Ω_N=N^1/dΩ with ∂Ω sufficiently smooth (by parts), such as cubes or balls, or in fact any convex set <cit.>. For any sequence (Ω_N) with a uniformly regular boundary and satisfying N / Ω_N→ρ_0 ≥0, it is well known <cit.> that the thermodynamic limit exists, f_T ρ_0 := lim_N →∞ N/|Ω_N|→ρ_0 |Ω_N|^-1min_∈_s(Ω_N^N)_T()=lim_N →∞ N/|Ω_N|→ρ_0 |Ω_N|^-1min_()=N_T(), and is independent on the sequence of domains Ω_N. The number f_T(ρ_0) is interpreted as the free energy per unit volume at temperature T and density ρ_0 of an infinite gas in equilibrium. The first minimum in (<ref>) is over all the symmetric probability measureson (Ω_N)^N. Recall that the free energy _T() was defined in (<ref>). The second minimum is over all grand-canonical probabilities =(_n)_n≥0 supported on Ω_N, which have the average number of particles () equal to the given N. In other words, we get the same limit by fixing the number of particles exactly, or by only fixing its average value. Let us insist on the fact that the limit (<ref>) holds without any pointwise constraint on the density function ρ(x) of the system. 
One can in fact rewrite it as f_T ρ_0 = lim_N →∞ N/|Ω_N|→ρ_0 |Ω_N|^-1min_ρ∈ L^1(Ω_N) ∫_Ω_Nρ=NF_T[ρ] =lim_N →∞ N/|Ω_N|→ρ_0 |Ω_N|^-1min_ρ∈ L^1(Ω_N) ∫_Ω_Nρ=NG_T[ρ]since minimizing F_T[ρ] (resp. G_T[ρ]) over ρ is the same as minimizing _T[] (resp. _T[]) over all possible .The thermodynamic limit function ρ_0↦f_T(ρ_0) is known to be convex (and C^1 if T>0) <cit.>. Its derivative is the chemical potential μ=f'_T(ρ_0) and its Legendre transform is the grand-canonical free energy in this chemical potential, defined through the following limit g_T μ := lim_n →∞|Ω_n|^-1min_{_T()-μ ()}=lim_n →∞|Ω_n|^-1min_ρ∈ L^1(Ω_n){G_T[ρ]-μ∫_Ω_nρ}, again independent on the sequence of domains Ω_n. The thermodynamic limit function g_T is concave (strictly concave when T > 0), non-positive, and continuous in the chemical potential μ. It is the Legendre transform of f_T, that is, f_T ρ_0 = sup_μ∈μρ_0 + g_T μ, g_T μ = inf_ρ_0 ≥ 0 f_T ρ_0-μρ_0.§.§ The local density approximation The free energies F_T[ρ] and G_T[ρ] are highly nonlinear and nonlocal functionals of the density function ρ. The Local Density Approximation (LDA) consists in replacing them by the simpler local functionalρ↦∫_^df_T(ρ(x))xwhere f_T is the free energy per unit volume of an infinite gas defined previously in (<ref>) and (<ref>). Our goal is to justify this approximation when ρ varies sufficiently slowly over some large length scale. To measure the amplitude of the variations of ρ, we introduce the functionδρ_ℓ(z):=sup_x,y∈ z+C_ℓ|ρ(x)-ρ(y)|/ℓwhere C_ℓ:=[-ℓ/2,ℓ/2]^d is the cube of side length ℓ centered at the origin and `sup' means the essential supremum (that is, up to sets of zero Lebesgue measure). The function (<ref>) measures the variations of ρ at distance of order ℓ of a point z∈^d and it is a kind of smeared derivative. When ρ is constant over the whole space we just get δρ_ℓ≡0. When ρ is constant over a large domain Ω and vanishes outside, we get δρ_ℓ≡0 except at distance ℓ from the boundary ∂Ω. The following is the main result of this paper. Let M > 0, p ≥ 1, T≥0 and b > 2 - 1/2p ifp ≥ 2,3/2+1/2p if1 ≤ p < 2. Let w be a short-range interaction satisfying <ref> with s>d+1. There exists a constant C > 0 depending on M,T,w,d,p,b, such that []G_T ρ - ∫_^d f_T ρx𝕀 x≤C/√(ℓ)[]∫_^d√(ρ) + ℓ^bp∫_^dδρ_ℓ(z)^pz, for any ℓ > 0, and any density ρ≥ 0 such that √(ρ)∈ (L^1∩ L^)(^d) with ρ∞≤ M. The proof of Theorem <ref> is provided in Section <ref>. Under the stated assumptions on ρ, we also have ρlogρ∈ L^1 ^d, since∫_^dρlogρ≤√(ρ)logρ∞∫_^d√(ρ).In particular, by the a priori bounds on G_T and f_T proved in <cit.> and recalled in <ref> below, the quantities appearing on the left hand side of (<ref>) are all finite.Theorem <ref> states that the grand-canonical free energy functional can be approximated by the LDA functional ρ↦∫ f_T(ρ(x))x, whenever the variations of ρ are much smaller than the values of ρ, in the average sense∫_^dδρ_ℓ(z)^pz≪∫_^d√(ρ),for some global large length ℓ≫1. We emphasize that our estimate (<ref>) depends on the L^ norm of ρ through the parameter M. From the universal bounds proved in <cit.> and recalled in Section <ref>, it seems natural to expect a similar estimate without any constraint on ρ_L^ and involving∫_^dρ+ρ^max(2,1+α/d)+Tρ(logρ)_-in place of ∫_^d√(ρ) on the right side. Here we used the much stronger L^1 and L^ norms of √(ρ) to control errors.We crucially use the boundedness of ρ and, unfortunately, our proof provides a constant C which diverges exponentially fast in the parameter M. 
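As an aside, the smeared derivative δρ_ℓ introduced in the theorem is straightforward to evaluate numerically for a given profile, which makes it easy to check whether the variations of ρ are indeed small compared to its values for a chosen length ℓ. The sketch below does this in one dimension; the density profile is an arbitrary illustration and the essential supremum is approximated by a maximum over grid points.

```python
import numpy as np

def delta_rho_ell(x, rho, ell):
    # Discrete version of  delta_rho_ell(z) = sup_{x,y in z + C_ell} |rho(x) - rho(y)| / ell,
    # evaluated at every grid point z (dimension d = 1, uniform grid).
    dx = x[1] - x[0]
    half = int(round(0.5 * ell / dx))           # half-width of the cube C_ell in grid points
    out = np.empty_like(rho)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        window = rho[lo:hi]
        out[i] = (window.max() - window.min()) / ell
    return out

# Illustrative slowly varying density and a choice of the length scale ell.
x = np.linspace(-100.0, 100.0, 8001)
dx = x[1] - x[0]
rho = 0.5 * np.exp(-(x / 40.0) ** 2)
ell, p = 5.0, 2

d = delta_rho_ell(x, rho, ell)
print("integral of delta_rho_ell^p :", np.sum(d ** p) * dx)
print("integral of sqrt(rho)       :", np.sum(np.sqrt(rho)) * dx)
```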
Nevertheless, we believe that our assumption √(ρ)∈ (L^1∩ L^)(^d) is reasonable in most situations of physical interest.Let us emphasize the condition s>d+1 on the decay at infinity of w, which we have only used to simplify the statement. The powers of ℓ and the condition on b are slightly different for d<s≤ d+1. This is all explained later in Propositions <ref> and <ref>.If we rescale a function ρ in the manner ρ'(x)=ρ(hx) then we have the scaling relation δρ'_ℓ(z)=hδρ_hℓ(hz). Hence, applying (<ref>) to ρ_N(x):=ρ(N^-1/dx) with a smooth ρ, we obtain after taking ℓ=N^1/bd and p=2G_T [ρ(N^-1/d·)]=N∫_^d f_T ρx𝕀 x+O(N^1-1/2bd),∀ b>7/4.To be more precise, we need here that lim sup_h→0∫_^dδρ_h(z)^2z<, which is the case if for instance ρ is C^1 with compact support.Next we mention two easy corollaries of Theorem <ref>. The first is when we measure the variations of ρ using derivatives instead of the function δρ_ℓ. In the Coulomb case, this point of view goes back to <cit.>. Suppose, in addition to the assumptions in <ref>, that p > d and ∇ρ∈ L^p ^d. Then we have []G_T ρ - ∫_^d f_T ρx𝕀 x≤ C (∫_^d√(ρ) + 1/^2bp∫_^d∇ρ^p )for any > 0. Morrey's inequality <cit.> in a cube Q states that ρ(x)-ρ(y)≤ K^1/px-y^1-d/p[]∫_Q∇ρ^p^1/p, x,y ∈ Q,for p>d with K independent of the location of the cubes. After scaling, this implies the pointwise bound δρ_ℓ(z)^p≤K/ℓ^d∫_z+C_ℓ∇ρ^p,∀ z∈^d.Hence we obtain after integration ∫_^dδρ_ℓ(z)^pz≤K/ℓ^d∫_^d(∫_z+C_ℓ∇ρ^p) z=K∫_^d∇ρ^p.The bound (<ref>) thus follows from (<ref>) with =ℓ^-1/2. In some practical situations, it is useful to have the variations of ρ expressed using δρ_ℓ instead of derivatives as in Corollary <ref>. An example is when ρ is constant over a large domain. Let w be a short-range interaction satisfying <ref>. For any ρ_0,>0, T≥0, and r>1 we have |G_T [ρ_0_Ω] - |Ω| f_T ρ_0|≤ C|Ω|^1-1/4dr for any large-enough domain Ω with a regular boundary as in (<ref>). The constant C > 0 depends on ρ_0,T, r, w as well as on the parameter t_0 in (<ref>).The function δρ_ℓ vanishes everywhere except at a distance √(d)ℓ/2 of ∂Ω, where it is bounded above by ρ_0/ℓ. Thus, we have from the regularity (<ref>) of the boundary∫_^dδρ_ℓ(z)^pz≤ρ_0^p/ℓ^p|{x ∈^d |x,∂Ω≤√(d)ℓ/2}|≤ C|Ω|^1-1/dℓ^1-p,provided that ℓ|Ω|^-1/d≤ 2t_0/√(d). Optimizing over ℓ leads us to the choiceℓ^bp+1-p=|Ω|^1/dwith b satisfying (<ref>), which is allowed provided that |Ω| is large enough compared with t_0/√(d). Since we can choose any p≥ 1, this can be stated as in (<ref>) by minimizing the power bp+1-p. Corollary <ref> means that we get the same thermodynamic limit as in (<ref>) when we enforce the constraint that the density is constant everywhere instead of just fixing the average total number of particles per unit volume. This is of course a consequence of the translation-invariance of the problem. For Coulomb and Riesz gases, the similar property is not at all obvious and was recently provedin <cit.>. Although we have stated (<ref>) as a corollary, our proof of Theorem <ref> in fact goes by considering the case of constant densities first. In <ref> we give a direct proof in the case of cubes, which even provides a better estimate than (<ref>), with the right hand side instead behaving like Ω^1-1/2d for large Ω. A result similar to Corollary <ref> holds in the case of several phases of different densities (e.g. over two half spaces). In the recent paper <cit.>, the dual potential V (whose equilibrium Gibbs state has density ρ) is expressed as a convergent series in ρ, under the assumption that ρ_L^ is small enough. 
One can then express G_T[ρ] as a convergent series in ρ, and thereby probably obtain bounds better than (<ref>) and (<ref>) for M small enough. We insist that our result is valid for all possible values of ρ_L^. The latter only appears implicitly in the constant C.We can deduce from (<ref>) a lower bound on the canonical free energy F_T[ρ] using the fact that F_T[ρ]≥ G_T[ρ] whenever ∫_^dρ∈. We expect an upper bound similar to (<ref>) but it is always harder to construct good trial states in the canonical case. Our paper will nevertheless contain several intermediate results valid for F_T[ρ]. In particular, we are able to prove the existence of the thermodynamic limit at constant density in the canonical case. Let w be a short-range interaction satisfying <ref>. Let ρ_0>0. Suppose that Ω_N ⊆^d is a sequence of bounded connected domains with uniformly regular boundaries as in (<ref>) for some t_0, and such that Ω_N→∞ and ρ_0 Ω_N∈ℕ for all N. Then we have lim_N→F_T [ρ_0_Ω_N]/|Ω_N|=f_T ρ_0for any T ≥ 0. The proof of Theorem <ref> is provided in Section <ref>.§ A PRIORI ESTIMATES In this short section, we briefly recall and discuss few a priori estimates on the free energy functionals F_T and G_T, as well as the minimal energies per unit volume for cubes. §.§ Universal boundsBy <cit.> and <cit.>, we have the universal lower energy bound, which holds for any 0 ≤ρ∈ L^1 ^d satisfying ∫_^dρlogρ < ∞,G_T [ρ] ≥ - []κ+ T∫_^dρ + T ∫_^dρlogρ,where κ is the stability constant of w in Assumption <ref>. From <cit.>, we also have for α≠ d the upper boundG_T ρ≤C ∫_^dρ^γ + C1+T∫_^dρ + T ∫_^dρlogρ,whereγ := 1 + max1, α/d,and the constant C depends only on the dimension d and the interaction w. When α = d, the bound instead takes the formG_T ρ≤C ∫_^dρ^2 logρ_+ + C1+T∫_^dρ + T ∫_^dρlogρ.In particular, when the density ρ is uniformly bounded, ρ∞≤ M, we have the simple boundG_T ρ≤C_M ∫_^dρ + T ∫_^dρlogρ,where the constant C_M now also depends on M and the temperature T. By <cit.>, this bound also holds in the hard-core case (α = ∞), provided that ρ∞≤1-^d r_0^-dρ_c d, where ρ_c d is the sphere packing density in d dimensions, and 0 << 1. The constant C_M in this case then also depends on , and behaves like log.In the canonical case, we have by <cit.> a bound depending on the local radius R x of ρ, which is defined to be the largest number satisfying∫_B x, R xρy𝕀 y = 1.For α≠ d, the bound takes the formF_T ρ≤C ∫_^dρ^γ+ C ∫_^dρ + T ∫_^dρlogρ+T ∫_^dρlog R^d,and similarly for α = d, where the ρ^γ term is replaced by ρ^2 logρ_+.Finally, we also have the following useful (but non-optimal) sub-additivity bound for the grand-canonical energy. Suppose that w satisfies Assumption <ref>. For all 0 < ≤ 1/2, there is a constant C > 0 such that for any pair of densities 0 ≤ρ_1, ρ_2 ∈ L^1 ^d∩ L^γ^d, we have for α≠ d, G_T ρ_1 + ρ_2≤G_T ρ_1 + C ∫_^dρ_1^γ + ρ_1 + C/^γ-1∫_^dρ_2^γ + C log_- ∫_^dρ_2 + T ∫_^dρ_2 logρ_2 , where γ = 1 + max1, α / d. When α = d, we get instead G_T ρ_1 + ρ_2≤G_T ρ_1 + C ∫_^dρ_1^2 logρ_1_+ + ρ_1 + C/∫_^dρ_2^2 logρ_2_+ + C log_-/∫_^dρ_2^2 + C log_- ∫_^dρ_2 + T ∫_^dρ_2 logρ_2. We use the same approach as in <cit.>. Write ρ_1 + ρ_2 = 1-ρ_1 + ρ_2/ + ρ_1, and let _1 and _2 be any grand-canonical states with densities ρ_1 and ρ_2/ + ρ_1, respectively. 
Then := 1-_1 + _2 is a trial state with density ρ_1 + ρ_2, so by minimizing over _1, _2, we obtain by convexity, G_T ρ_1+ρ_2≤1- G_T ρ_1 +G_T []ρ_2/ + ρ_1.The universal upper bound (<ref>) on G_T ρ provides G_T []ρ_2/ + ρ_1≤C ∫_^d[]ρ_2/ + ρ_1^γ + []ρ_2/ + ρ_1+T ∫_^d[]ρ_2/ + ρ_1log[]ρ_2/ + ρ_1 ≤C ∫_^dρ_1^γ + ρ_1 + C/^γ - 1∫_^dρ_2^γ + C ∫_^dρ_2 +T ∫_^d[]ρ_2/ + ρ_1log[]ρ_2/ + ρ_1.Writing ρ_2/ + ρ_1 = ρ_2/^2 + 1-ρ_1/1- and using the concavity of the entropy, along with ≤ 1/2, we get [4] ∫_^d[]ρ_2/ + ρ_1log[]ρ_2/ + ρ_1≤∫_^dρ_2 log[]ρ_2/^2 + ∫_^dρ_1 log[]ρ_1/1- ≤2 log_- ∫_^dρ_2 + ∫_^dρ_2 logρ_2 + log 2 ∫_^dρ_1 + ∫_^dρ_1 logρ_1. Applying the universal lower bound (<ref>), we conclude that (<ref>) holds. In the α = d case, (<ref>) is obtained in the same way, using instead the upper bound (<ref>), G_T []ρ_2/ + ρ_1≤C ∫_^d[]ρ_2/ + ρ_1^2 []log[]ρ_2/ + ρ_1_+ + []ρ_2/ + ρ_1+ T ∫_^d[]ρ_2/ + ρ_1log[]ρ_2/ + ρ_1.The first term can be estimated by [6] ∫_^d[]ρ_2/ + ρ_1^2 []log[]ρ_2/ + ρ_1_+≤ ∫_^d 4 max[]ρ_2/ , ρ_1^2 []log[]2 max[]ρ_2/ , ρ_1_+≤ ∫_^d 4 []ρ_2/^2 []log[]2 ρ_2/_+ + ∫_^d 4 ρ_1^2 log2 ρ_1_+≤ 4/^2∫_^dρ_2^2 []logρ_2_+ + log2/ + 4 ∫_^dρ_1^2 []logρ_1_+ + log 2. The bound (<ref>) now follows in the same way as before. §.§ Energy per unit volumeWe denote byG_T(μ,Ω):=min_{_T()-μ()}=min_ρ∈ L^1(Ω){G_T[ρ]-μ∫_Ωρ}the minimum free energy with a chemical potential μ in an arbitrary bounded domain Ω. The first minimum is above all possible grand-canonical statesin the domain Ω. It is well known <cit.> that the first minimum is attained at a unique , called the Gibbs state and given by _T, μ, Ω := 1/Z_T,μ,Ω∑_n ≥ 0e^-1/TH_n - μ n/n! , Z_T, μ, Ω := ∑_n ≥ 0 1/n!∫_Ω^n e^-1/TH_n x - μ n𝕀 x, where H_n x := ∑_i<j^n w x_i-x_j. The minimal free energy is then given byG_T(μ,Ω)=- T log Z_T, μ, Ω. Our goal here is to provide bounds in the case of a large cube Ω=C_L=(-L/2,L/2)^d, which are uniform with respect to the side length L. For later purposes it is convenient to divide by the volume and introduceg_T μ,L := G_T μ, C_L/L^d.We will also study its Legendre transform which is defined asf_T ρ,L := sup_μ∈μρ + g_T μ,L.Note that f_T ρ,L is different from the minimal canonical energy per unit volume in C_L. The equivalence of ensembles as in (<ref>) is only true after taking the limit L→. More precisely, one can see that f_T(ρ,L) is rather the free energy per unit volume of the grand-canonical Gibbs state in C_L which has average density ρ. Indeed, at positive temperature T > 0, the supremum (<ref>) is attained at the unique chemical potential μ_L = μ_L ρ satisfyingρ =- ∂/∂μ g_T μ_L,L =T/L^d∂/∂μlog Z_T,μ_L,C_L = T/L^d1/Z_T,μ_L,C_L∂/∂μ Z_T,μ_L,C_L =𝒩(_T,μ_L,C_L)/L^d,so the Gibbs state _T,μ_L,C_L corresponding to this μ_L has average density ρ in C_L. It now immediately follows thatf_T ρ,L =μ_L ρ + G_T μ_L, L/L^d =𝒢_T _T,μ_L,C_L/L^d,as claimed. In fact, f_T ρ,L can also be obtained by minimizing over all states in C_L with average particle number ρ L^d,f_T ρ,L = inf_⊆ C_L𝒩() = ρ L^d 𝒢_T /L^d.Indeed, ifis a minimizer for the right hand side, then𝒢_T -μ_L() =𝒢_T- μ_L ρ L^d=L^d(f_T(ρ,L)-μ_Lρ)=G_T(μ_L,L)so by uniqueness of minimizers of G_T μ_L, L we must have = _T, μ_L, C_L.Using the equivalence of ensembles (<ref>), one can see that f_T ρ,L has the infinite volume limitlim_L →∞ f_T ρ,L = sup_μ∈[]μρ + lim_L →∞ g_T μ,L = f_T ρas we have already stated in (<ref>). 
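As a small aside, the Legendre relation between f_T and g_T can be checked numerically: given g_T(μ) on a grid of chemical potentials, f_T(ρ) is recovered by maximizing μρ + g_T(μ) over μ. In the sketch below, g_T is replaced by the explicitly computable grand-canonical energy of a non-interacting gas, g_T(μ) = -T e^{μ/T} (in units where the thermal wavelength is 1); this stand-in is only meant to illustrate the transform itself, not the interacting problem studied here.

```python
import numpy as np

T = 1.0
mu_grid = np.linspace(-10.0, 5.0, 20001)

def g_T(mu):
    # Stand-in: grand-canonical energy per unit volume of an ideal gas,
    # g_T(mu) = -T * exp(mu / T)  (thermal wavelength set to 1).
    return -T * np.exp(mu / T)

def f_T(rho):
    # Legendre transform f_T(rho) = sup_mu [ mu * rho + g_T(mu) ], by brute force.
    return np.max(mu_grid * rho + g_T(mu_grid))

for rho in (0.1, 0.5, 1.0, 2.0):
    exact = T * rho * (np.log(rho) - 1.0)     # closed form for the ideal gas
    print(f"rho = {rho:3.1f}:  Legendre = {f_T(rho):+.6f}   exact = {exact:+.6f}")
```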
This is well known <cit.> and follows for instance from Corollary <ref> below.Next we state some bounds which are uniform in L and thus carry over to the infinite volume limit. Loosely speaking, those estimates state that the free energy f_T(ρ) contains a positive term ρ^γ which dominates at large densities, due to the repulsive nature of the potential w close to the origin (recall that γ=max(2,1+α/d)). At low density, the energy rather behaves like Tρlogρ+Cρ. Although we think that these bounds must be well known, we have not found them stated explicitly in the literature. The estimates are enclosed in the following three statements. Let w satisfy Assumption <ref>. There are constants C,c > 0 depending only on s and the dimension d, such that for any L > 0 and ρ≥ 0, we have f_T ρ,L≤ C κ r_0^d γ-1ρ^γ + C κρ^2 + C T ρ + T ρlogρ, α≠ d, C κ r_0^d ρ^2 log r_0^d ρ_+ + C κ1+r_0^dρ^2 + C T ρ + T ρlogρ, α = d, and f_T ρ,L≥c/κ r_0^d γ-1ρ^γ - []κ + c/κ +Tρ + T ρlogρ, α≠ d,c/κ r_0^d ρ^2 []logr_0^d ρ/2 √(d)_+ - κ+Tρ + T ρlogρ, α = d. Let w satisfy Assumption <ref>. Denote by μ_L ρ any chemical potential maximizing (<ref>) (which is unique if T>0). There are constants C,c > 0 depending only on s and the dimension d, such that for all L > 0 and ρ≥ 0, μ_L ρ≤ C κ r_0^d γ-1ρ^γ-1 + C κρ + C κ + 1/κ + T + T logρ, α≠ d, C κ r_0^d ρlog r_0^d ρ_+ + C κ1+ r_0^dρ +κ + C T + T logρ, α = d, and μ_L ρ≥ c/κ r_0^d γ-1ρ^γ-1 - κ - c/κ -T + T logρ, α≠ d,c/κ r_0^d ρ[]logr_0^d ρ/2 √(d)_+ - κ-T + T logρ, α = d. Let w satisfy Assumption <ref>. There are constants C, K > 0, depending only on d and w, such that for any μ∈ and L > 0, g_T μ,L≥ - K μ+C_+^γ/γ-1 - T e^- 1/Tμ+C_-, α≠ d, - K μ+C_+^2/log(2+μ+C_+)- T e^- 1/Tμ+C_-, α = d, and g_T μ,L≤ - 1/K(1+T)μ-C_+^γ/γ-1 - T e^- 1/Tμ-C_-, α≠ d, - 1/K(1+T)μ-C_+^2/log(2+μ-C_+)- T e^- 1/Tμ-C_- α = d, where γ = 1+max1, α / d. We defer the proofs to Section <ref>. We will prove Proposition <ref> by first noting that the upper bounds on f_T follow from (<ref>) and the universal bounds (<ref>) and (<ref>) (as they are stated in <cit.>). The lower bounds on f_T we provide by hand. The bounds on the dual variable μ_L follow as an easy corollary, using that f_T is convex with f_T 0,L=0. We include the bounds on g_T here for completeness, though we do not need them for our purposes. They can be obtained using the bounds on f_T and the fact that g_T are f_T are connected through the Legendre transform. We have gathered all the details in <ref> below.§ PROOF OF THEOREM <REF> ON THE THERMODYNAMIC LIMIT AT FIXED DENSITYIn this section we provide the proof of Theorem <ref> on the thermodynamic limit at fixed constant density (uniform gas), in the canonical case. We will also quickly explain how the grand-canonical case follows from this result, even if we prove more on this case later. To simplify the notation we denote by ρ:=ρ_0 the constant density from the statement.Note first that it suffices to provide an upper bound, since the energy at fixed density ρ_Ω_N is bounded from below by the minimal energy obtained when not putting restrictions on the density, that is, lim inf_N →∞F_T ρ_Ω_N/Ω_N≥ f_T ρ. A partition of unity. In order to build a suitable trial state with density ρ_Ω_N and compare its energy to f_T ρ, we will need to construct a specific partition of unity on ^d. We split ^d into a lattice of cubes C_L + kL, k ∈^d, where C_L = [-L/2, L/2)^d, and L is chosen such that n:= ρ L^d ∈. 
To handle the core of the interaction, we place corridors between the cubes (by shrinking the cubes a little) to keep particles from getting too close to each other. That is, we fix a tiny > 0, and define ℓ, λ∈0,L by L := []ρ + /ρ^1/dℓ, and λ := L - ℓ = [][]ρ + /ρ^1/d - 1ℓ. Then λ and ℓ satisfy L = ℓ + λ and n = ρ L^d = ρ + ℓ^d. Now, let _n be the minimizer of the unrestricted n-body problem in the slightly smaller cube C_ℓ, and let ρ_n := ρ__n be its density. From the lower bound on w in Assumption <ref>, we know that _n and ρ_n are bounded functions, although _n_L^ and ρ_n_L^ could in principle depend on n. By considering ρ_n as a function in the larger cube C_L, the family of translated densities ρ_n- kL, k ∈^d, yields a partition of unity at density ρ when averaged over translations in C_L, i.e., 1/C_L∫_C_L∑_k ∈^dρ_n- kL - τ𝕀τ = ρ + ℓ^d/L^d = ρ, a.e. in ^d. Note that the distance between any pair of the smaller cubes C_ℓ + kL, k ∈^d, is at least λ, and that λ is of order L. Constructing the trial state. A simple way of constructing a constant density in Ω_N would be to multiply (<ref>) by the characteristic function _Ω_N and pull it inside the integral. The difficulty here is that _Ω_N∑_k ∈^dρ_n- kL - τ might not have a constant mass for all τ, which is needed to build a canonical trial state. Our main observation is that the mass is in fact independent of τ in the particular situation that the domain is a union of cubes of side length L (the mass is in fact constant in each of the initial small cubes of side length L). The main idea is thus to first approximate Ω_N from inside by a union of cubes Λ_N as in Figure <ref>. In the complementary region close to the boundary we just take any reasonable trial state with constant density and use our universal bound (<ref>). Inside we take independent copies of a given trial state in each cube. When we translate the tiling, some cubes will leave Λ_N but the same amount will penetrate on the other side by periodicity. We simply merge the incomplete cubes with the boundary part.As announced, we decompose the density ρ_Ω_N into a bulk term consisting of copies of ρ_n placed on a cubic lattice, and a boundary term giving rise to lower order energy. We denote byΛ_N := ⋃_k ∈^d C_L + k_L ⊆Ω_NC_L + kLour interior approximation of Ω_N by cubes, and for τ∈ C_L,Λ_N,τ := ⋃_k ∈𝒜_τC_L + kL + τ, 𝒜_τ := k ∈^dC_L + kL +τ⊆Λ_N,the union of cubes contained in Λ_N after shifting the lattice of cubes by τ. Multiplying (<ref>) by _Λ_N allows us to write ρ_Ω_N = 1/C_L∫_C_L∑_k ∈𝒜_τρ_n -kL-τ𝕀τ+ 1/C_L∫_C_Lρ_Ω_N ∖Λ_N + _Λ_N∑_k ∈^d ∖𝒜_τρ_n -kL-τ_ =: β_τ𝕀τ,see Figure <ref>. Note that because of the corridors between the cubes C_ℓ + kL + τ, the distance between the supports of the bulk term ∑_k ∈𝒜_τρ_n- kL - τ and the boundary term β_τ is at least λ/2. Furthermore, we have by the regularity (<ref>) of Ω_N, β_τ≤Ω_N ∖Λ_N,τ≤x ∈^d x, ∂Ω_N≤32√(d) L≤ CΩ_N^1-1/dL, in particular, Λ_N,τ = Ω_N + o Ω_N. Furthermore, by definition of β_τ we have for any of the translated cubes C_L + kL + τ, ∫_C_L + kL + τβ_τx𝕀 x ≤∫ρ_n x + ρ_C_Lx𝕀 x ≤ 2 n. Now, in each subcube C_L + kL + τ of Λ_N,τ, we place a copy _n,kL+τ of the n-particle minimizer _n with density ρ_n, and in the remaining boundary region Ω_N ∖Λ_N,τ we simply place any state _τ with density β_τ. By our regularity assumptions on Ω_N, it follows from <cit.> that the diameter of Ω_N is of order |Ω_N|^1/d, hence the radius R(x) for β_τ, defined in (<ref>), satisfies R(x)≤ C|Ω_N|^1/d. 
The free energy of the boundary state _τ can thus be estimated using our upper bound (<ref>) by ℱ_T _τ≤ C ρ LΩ_N^1-1/dlog|Ω_N|. The final trial state _N on Ω_N is then defined by _N = 1/C_L∫_C_L_τ⊗_τ𝕀τ := 1/C_L∫_C_L[]⊗_k ∈𝒜_τ_n, kL+τ⊗_τ𝕀τ, where _τ has density ρ__τx = ∑_k ∈𝒜_τρ_n x- kL - τ. Energy bounds. The energy of the state _N splits into a bulk term, a boundary term, and an interaction term between the two, ℱ_T _N = 1/C_L∫_C_Lℱ_T _τ + ℱ_T _τ + 2 D_w ρ__τ, β_τ𝕀τ, where D_w ν, μ := 1/2∬νxμy w x-y𝕀 x 𝕀 y. Denoting ρ_C_k = ρ_n- kL, k ∈^d, the energy of the bulk term is ℱ_T _τ = Λ_N,τ/L^dℱ_T _n + ∑_k,k' ∈𝒜_τk ≠ k' 2 D_w ρ_n- kL - τ, ρ_n-k'L - τ = Λ_N,τ/ℓ^dρ/ρ +F_T n, C_ℓ + ∑_k,k' ∈𝒜_τk ≠ k'∬ρ_C_kxρ_C_k'y w x-y𝕀 x 𝕀 y, where F_T n, C_ℓ is the minimal energy for n particles in C_ℓ, without restrictions on the density. To estimate the interaction between the cubes, we introduce the “tail" of the interaction, ϕt := κ/1+ λ^s_(-∞,λ]t + κ/1 + t^s_λ,∞t t ∈, and the minimal distance between particles in two different cubes δ_kk' := inf_x ∈ C_ℓ + kL y ∈ C_ℓ + k'Lx-y. Note that because of the corridors between the cubes, we have δ_0k≥λ + L k-1≥λ + x - L 1+ √(d) for all k ≠ 0 and x ∈ C_L + kL. In particular, since we always have δ_0k≥λ, we can estimate [6] ∑_k,k' ∈𝒜_τk ≠ k'∬ρ_C_kxρ_C_k'y w x-y𝕀 x 𝕀 y≤ ∑_k ∈𝒜_τ∬ρ_C_kx∑_k' ∈^d k ≠ k'ρ_C_k'yϕδ_kk'𝕀 x 𝕀 y = #𝒜_τ n^2 ∑_k ∈^d k ≠ 0ϕδ_0k≤ρ^2 Λ_N,τ L^d ∑_k ∈^d k ≠ 0ϕδ_0k ≤ ρ^2 Λ_N,τ∫_^dϕ[]x - L 1+ √(d)𝕀 x, where we used (<ref>) in the last inequality. We have ∫_^dϕ[]x - L 1+ √(d)𝕀 x = 𝕊^d-1/dϕ0 L^d 1+√(d)^d + 𝕊^d-1∫_0^∞ϕr[]r + L 1+√(d)^d-1𝕀 r. Splitting the integral at r = L 1+ √(d) gives [4] 𝕊^d-1∫_0^∞ϕr[]r + L 1+√(d)^d-1𝕀 r≤ 𝕊^d-1/dϕ02^d -1 L^d 1+√(d)^d + 2^d-1∫_x≥ L 1+ √(d)ϕx𝕀 x, where for instance we can bound ∫_x≥ L 1+ √(d)ϕ≤κ𝕊^d-1∫_L 1+√(d)^∞ r^d-s-1𝕀 r = κ𝕊^d-1/s-dL 1+√(d)^s-d. Collecting the bounds, we have shown [4] ∫_^dϕ[]x - L 1+ √(d)𝕀 x≤2^d-1κ𝕊^d-1[]2/dL 1+√(d)^d/1 + λ^s + 1/s-dL 1+√(d)^s-d, and hence the interaction energy between the cubes in the bulk of Ω_N is bounded by [4] 1/Ω_N∑_k,k' ∈𝒜_τk ≠ k'∬ρ_C_kxρ_C_k'y w x-y𝕀 x 𝕀 y≤ ρ^2 2^d-1κ𝕊^d-1[]2/dL 1+√(d)^d/1 + λ^s + 1/s-dL 1+√(d)^s-d, where the right hand side tends to zero when L →∞, since λ is of order L, and s > d. What remains now is to estimate the interaction D_w ρ__τ, β_τ between the bulk and the boundary in (<ref>). Denote ℬ_N,τ := k ∈^d C_L + kL + τ∩Ω_N ∖Λ_N≠∅. Then 𝒜_N,τ∩ℬ_N,τ = ∅, and since the cubes C_L + kL + τ_k ∈𝒜_N,τ∪ℬ_N,τ by construction constitute a cover of Ω_N for any τ∈ C_L, we have Ω_N ∖Λ_N,τ⊆⋃_k ∈ℬ_N,τC_L + kL + τ, where by regularity of Ω_N, []⋃_k ∈ℬ_N,τC_L + kL≤ x ∈^d x, ∂Ω_N≤32√(d) L ≤ CΩ_N^1-1/dL. To estimate D_w ρ__τ, β_τ, we will use the slightly smaller distance δ_kk' :=inf_x ∈ C_ℓ + kL + τy ∈ C_L + k'L + τx-y≤inf_x ∈ C_ℓ + kL + τy ∈β_τ∩C_L + k'L + τx-y, for k ∈𝒜_N,τ and k' ∈ℬ_N,τ, because of the presence of the term ρ_Ω_N ∖Λ_N in the definition of β_τ. Note that in spite of this, the distance between the supports of the bulk density ρ__τ and β_τ is still at least λ/2, and we have the bound δ_0k≥x - L 1+ √(d). Hence we can estimate the interaction term in (<ref>), 2D_w ρ__τ, β_τ = ∑_k' ∈ℬ_N,τk ∈𝒜_N,τ∫_C_L + k'L + τβ_τy∫ρ_n x - kL - τ w x-y𝕀 x 𝕀 y≤ ∑_k' ∈ℬ_N,τ∫_C_L + k'L + τβ_τy𝕀 y ∑_k ∈^d k ≠ k' n ϕδ_kk' ≤ #ℬ_N,τ 2 n^2 ∑_k ∈^d k ≠ 0ϕδ_0k ≤C ρ^2 Ω_N^1-1/d L ∫_^dϕ[]x - L 1+ √(d)𝕀 x, where we also used (<ref>). The integral on the right hand side is bounded by (<ref>), so this interaction term is of lower order than Ω_N. 
Finally, combining (<ref>) with (<ref>), (<ref>), (<ref>), and (<ref>), we conclude that lim_L →∞lim sup_N →∞ℱ_T _N/Ω_N≤ lim_L →∞ρ/ρ + F_T n, C_ℓ/ℓ^d + o1_L →∞ = ρ/ρ +f_T ρ + . Sinceis arbitrary, and f_T is continuous, this concludes the proof of Theorem <ref>. The limit in the canonical case implies the same in the grand-canonical case. We state it here, although we will prove more later. Under the same assumptions as <ref>, the corresponding grand-canonical thermodynamic limit exists, and is the same as in the canonical case, lim_N →∞G_T ρ_Ω_N/Ω_N = f_T ρ. Here, we do not need to assume that ρΩ_N is an integer. For the upper bound, we can use the same proof as in <ref> with only minor modifications. Because the particle number is now not fixed, there is no need to put any restrictions on the side length L of the cubic lattice covering ^d. Instead of using ρ_n to construct the partition of unity with corridors in (<ref>), we take the density of the grand-canonical Gibbs state _T, μ_ℓρ+,ℓ in C_ℓ, and also use this Gibbs state in the construction of the trial state (<ref>). Here, the chemical potential μ_ℓρ+ is chosen to maximize (<ref>) at density ρ+ and side length ℓ, in particular, 𝒢_T _T, μ_ℓρ+,ℓ = ℓ^d f_T ρ+,ℓ, as explained in (<ref>)-(<ref>). Going through the proof exactly as before now yields lim sup_N →∞G_T ρ_Ω_N/Ω_N≤lim_ℓ→∞ρ/ρ+f_T ρ+,ℓ/ℓ^d + o1_ℓ→∞ = ρ/ρ+ f_T ρ+ for any sufficiently small > 0. For the corresponding lower bound on G_T ρ_Ω_N, we introduce the chemical potential μ=f'_T(ρ) and note that for any grand-canonical statewith ρ_ = ρ_Ω_N, we have 𝒢_T = 𝒢_T-μ() + μρΩ_N. It follows that lim inf_N →∞G_T ρ_Ω_N/Ω_N≥lim inf_N →∞G_T μ, Ω_N/Ω_N + μρ = g_T μ + μρ=f_T(ρ), where we recall g_T is the usual grand-canonical thermodynamic limit (<ref>). The last equality is due to our choice μ=f'_T(ρ).Corollary <ref> is also valid in the hard-core case α=+, under the additional assumption that ρ_0 <ρ_c(d) r_0^-d, where ρ_c(d) is the packing density. Due to the corridors we slightly increase the density in the small boxes and therefore need the strict inequality. In the grand-canonical case we do not need to separate the boundary part as we did in Figure <ref> and the proof goes as in <cit.>. To make the proof of <ref> work for hard cores in the canonical case, we would need an upper bound on the free energy of the boundary part, but it is not clear if the corresponding density is representable (see the discussion in <cit.>. § GRAND-CANONICAL CONVERGENCE RATES The aim of this section is to provide a priori bounds for the convergence rate of the usual grand-canonical free energy for cubes in the thermodynamic limit. That is, for cubes C_L = [ -L/2 , L/2 )^d, temperature T ≥ 0, and any chemical potential μ∈, we estimateG_T μ, C_L/L^d-g_T μ. A central tool for controlling error terms is a bound due to Ruelle <cit.> which allows one to uniformly control the local average (square) number of particles []n_Q^2_T,μ,Ω in a cube Q, for a Gibbs state in the set Ω⊆^d. One version of the bound takes the formn_Q_T, μ, Ω≤[]n_Q^2_T, μ, Ω^1/2≤Qξ_T,μ,for all cubes Q of side length at least L_0 (which is independent of μ and T), and whereξ_T,μ= C_Te^μ/2T[]1+e^μ d/2T for T>0,C_0(C_0+μ)_+^1+d/ for T=0.We recall that _T,μ,Ω denotes the expectation against the Gibbs state at temperature T, chemical potential μ in a domain Ω. 
At T=0 we just take any minimizer {x̅_1,...,x̅_N} of the free energymin_n≥0min_x_1,...,x_n∈Ω[]∑_1≤ j<k≤ nw(x_j-x_k)-μ nand []n_Q^2_0, μ, Ω:=#{x̅_j∈ Q}^2 is simply the square of the number of points in Q. In (<ref>), =min(1,s-d)/2 and the constant C_T depends only on T and the interaction w between the particles, and not on the cube Q, or the larger domain Ω. The bound (<ref>) is well known at low activity z=e^μ/T≪1 and can be found in <cit.>. The bound for larger activities is given in Appendix <ref>, in Corollary <ref> for T>0 and Corollary <ref> for T=0.We express the convergence rate in two different ways. The following proposition provides an additive error term. Later in Corollary <ref> we instead express the rate using a shift of the chemical potential μ, which turns out to be convenient for our purposes. Let T≥0 and μ∈, and suppose that the interaction w satisfies the conditions in <ref>. Then there exists a constant c > 0 depending only on d, T, and the interaction w, such that |G_T μ,C_ℓ/ℓ^d-g_T μ| ≤c ξ_T,μ^2_ℓ for any ℓ > 0 sufficiently large (independently of μ and T), where_ℓ:= ℓ^-1 if s>d+1, ℓ^-1logℓ if s=d+1, ℓ^d-s if d<s<d+1.The inequality (<ref>) provides the expected error bound on G_T(μ,C_ℓ) in terms of the surface area, when w decays fast enough, that is, s>d+1. When d<s≤ d+1 the bound gets worse. The proof will be based on the following simple lemma, which will be used to estimate the interaction between subsystems. For any s > d, there is a constant c > 0, depending only on s and d, such that for any ℓ≥2 and x_0 ∈ C_ℓ, ∫_C_ℓ^c𝕀 y/1 + x_0-y^s≤c/1+x_0, ∂ C_ℓ^s-d, and ∫_C_ℓ∫_C_ℓ^c𝕀 y/1 + x-y^s𝕀 x ≤ cℓ^d_ℓ, with _ℓ as in (<ref>). We denote η = x_0, ∂ C_ℓ and assume first that η≤1. Then we simply bound∫_C_ℓ^c 𝕀 y/1 + x_0-y^s≤∫_^d𝕀 y/1 + y^s≤2/1+η^s-d∫_^d𝕀 y/1 + y^s.When η≥1, we note that C_ℓ^c + x_0 ⊆C_η^c ⊆y≥η/2, so ∫_C_ℓ^c 𝕀 y/1 + x_0-y^s = ∫_C_ℓ^c + x_0 𝕀 y/1 + y^s≤∫_y≥η/2𝕀 y/y^s = 𝕊^d-1∫_η/2^∞𝕀 r/r^s-d+1 =c_1/η^s-d≤2c_1/1+η^s-d, where the last bound is because η≥1.To prove the second inequality, we write the boundary as a union of faces ∂ C_ℓ=∪_j=1^2d F_j and we use the previous bound to get∫_C_ℓ∫_C_ℓ^c𝕀 y/1 + x-y^s𝕀 x≤ c∑_j=1^2d∫_C_ℓ𝕀 x/1+ (x,F_j)^s-d=2cdℓ^d-1∫_0^ℓ𝕀 x_1/1+x_1^s-d.The last equality is because the integral involving F_j is independent of the face, by symmetry. The last integral is bounded for s>d+1, behaves as logℓ for s=d+1 and as ℓ^d+1-s for d<s<d+1, hence the result follows. We are now ready to provide the following. Lower bound on G_T. Consider a large cube C_L = [ -L/2, L/2 )^d of side length L > 0. Write C_L as a union of smaller cubes C_ℓ, C_L = ⋃_k ∈𝒜 C_ℓ + ℓ k, for some appropriate subset 𝒜⊆^d. We insert corridors of size δ > 0 between the smaller cubes by shrinking them a little and instead considering the union of the cubes C_ℓ^k := C_ℓ + ℓ k, where ℓ = ℓ - δ. In each of the smaller cubes C_ℓ^k we place the usual Gibbs state _T,μ,C_ℓ^k and consider in the large cube C_L the (grand-canonical) tensor product _L := ⊗_k ∈𝒜_T,μ,C_ℓ^k. We use _L as a trial state for the grand-canonical problem in C_L at the same chemical potential μ. The free energy then satisfies 𝒢_T^μ_L = ∑_k ∈𝒜 G_T μ, C_ℓ + ∑_k,m∈𝒜 k ≠ m 2 D_w ρ_ℓ,k , ρ_ℓ,m ≤ L^d/ℓ+δ^d G_T μ, C_ℓ + L^d/ℓ+δ^d∑_k ∈^d k ≠ 0 2D_wρ_ℓ,0, ρ_ℓ,k, where ρ_ℓ,k = ρ_ℓ· - ℓ k denotes the one-body density of the Gibbs state _T,μ,C_ℓ placed in the cube C_ℓ^k. We will proceed to estimate the interaction term above, and then take L to infinity at fixed ℓ to obtain a lower bound on G_T μ, C_ℓ. 
By taking δ≥ r_0, none of the error terms see the core of w, so by dividing each of the small cubes into even smaller cubes of side length r ∼ L_0, i.e., C_ℓ^k = ⋃_m ∈𝒜 C_r^k,m := ⋃_m ∈𝒜C_r + r m + ℓ k, for some appropriate subset 𝒜⊆^d, we obtain 2D_wρ_ℓ,0, ρ_ℓ,k≤ ∬κ/1+ x-y^sρ_ℓ,0xρ_ℓ, ky𝕀 x 𝕀 y≤ ∑_j,m ∈𝒜κ/1+ []C_r^0,j, C_r^k,m^s ∫_C_r^0,jρ_ℓ,0∫_C_r^k,mρ_ℓ,k ≤C r^d ξ_T,μ^2 ∑_j,m ∈𝒜1/1+ []C_r^0,j, C_r^k,m^s , from the Ruelle bound (<ref>). Note that we can choose a constant c such that for all t ≥ 0, 1+t + 2 √(d) r^s ≤c̃(1+t^s), and denote by C_1, C_2 any two open, disjoint cubes of side length r. Then, since max_x ∈ C_1, y ∈ C_2x-y≤C_1, C_2 + 2 √(d) r, it follows that (by taking t = C_1, C_2), 1/1 + C_1, C_2^s≤c/1+ max_x ∈ C_1 ,y ∈ C_2x-y^s≤c̃/r^2d∫_C_1∫_C_2𝕀 x 𝕀 y/1 + x-y^s.Continuing (<ref>) and summing over k, we now obtain∑_j,m ∈𝒜1/1+ []C_r^0,j, C_r^k,m^s ≤c̃/r^2d∫_C^0_ℓ∫_C^k_ℓ𝕀 x 𝕀 y/1 + x-y^s.Summing finally then over k, we find for the interaction between the cube C^0_ℓ and the rest of the system∑_k ∈^d k ≠ 0 2D_wρ_ℓ,0, ρ_ℓ,k≤ C ξ_T,μ^2∫_C^0_ℓ∫_(C^0_ℓ)^c𝕀 x 𝕀 y/1 + x-y^s≤ C ξ_T,μ^2ℓ^d_ℓ,by <ref>. After taking L→ in (<ref>) we end up with (1+δ/ℓ)^d g_T μ≤G_T μ,C_ℓ/ℓ^d+Cξ_T,μ^2_ℓ.The stability of w and the free energy of the non-interacting gas imply the simple lower bound0≥ g_T μ≥ -Te^1/T(μ+κ)≥ -C_Tξ_μ,T^2.from which we deduce that the correction δ |g_T(μ)|/ℓ can be absorbed into Cξ_T,μ^2_ℓ. At T=0 we also have g_0(μ)≥ -Cξ_0,μ^2. Hence we obtaing_T μ-Cξ_T,μ^2_ℓ≤G_T μ,C_ℓ/ℓ^das we wanted. Upper bound on G_T. We consider again a large cube C_L = [ -L/2; L/2 )^d and divide it into a union of smaller cubes C_ℓ, C_L = ⋃_k ∈𝒜 C_ℓ^k = ⋃_k∈𝒜 C_ℓ + ℓ k . For a stateon C_L we denote by _k := __C_ℓ^k its geometric localization to the cube C_ℓ^k(see <cit.> and <cit.>). Note that the expected number of particles behaves additively, 𝒩() = ∫ρ_ = ∑_k ∈𝒜∫_C_ℓ^kρ__k = ∑_k ∈𝒜𝒩_k, and the interaction energy ofsplits into local terms and cross terms, 𝒰 = 1/2∬ w x-yρ_^2x,y𝕀 x 𝕀 y = ∑_k,m ∈𝒜1/2∬ w x-y_C_ℓ^kx_C_ℓ^myρ_^2𝕀 x 𝕀 y = ∑_k ∈𝒜𝒰_k + ∑_k,m ∈𝒜k ≠ m1/2∬_C_ℓ^k × C_ℓ^m w x-yρ_^2x,y𝕀 x 𝕀 y. By the sub-additivity property of the entropy for T>0, and takingto be the Gibbs state in C_L, we conclude that G_T μ, C_L≥ L^d G_T μ, C_ℓ/ℓ^d + ∑_k,m ∈𝒜k ≠ mI_k,m_, with I_k,m_ :=1/2∬ w x-y_C_ℓ^kx_C_ℓ^myρ_^2x,y𝕀 x 𝕀 y. Hence, it only remains to provide a suitable lower bound on the sum of cross terms I_k,m_ in the thermodynamic limit. For this, we again divide the cubes C_ℓ^k into smaller cubes of fixed side length r, C_ℓ^k = ⋃_γ∈𝒜 C_r^k,γ = ⋃_γ∈𝒜 C_r + r γ + ℓ k , and recall that we can bound the interaction from below by w x≥ - κ/1 + x^s, so that - 2 I_k,m_≤ ∑_γ, γ' ∈𝒜∑_i < jκ/1+x_i-x_j^s_C_r^k,γx_i_C_r^m, γ'x_j_T,μ,C_L ≤ ∑_γ, γ' ∈𝒜κ/1+ []C_r^k,γ, C_r^m,γ'^s [] n_C_r^k,γn_C_r^m,γ'_T,μ,C_L ≤ ∑_γ, γ' ∈𝒜κ/1+ []C_r^k,γ, C_r^m,γ'^s []n_C_r^k,γ^2_T, μ, C_L^1/2[]n_C_r^m,γ'^2_T, μ, C_L^1/2 ≤C ξ_T,μ^2 r^2d∑_γ, γ' ∈𝒜1/1+ []C_r^k,γ, C_r^m,γ'^s , where we have again used the Ruelle bound (<ref>). By (<ref>) and Lemma <ref>, we obtain∑_m ∈𝒜∖{k}I_k,m_≥ - C ξ_T,μ^2ℓ^d_ℓwhere C now contains the factor r^2d. Finally, returning to (<ref>), we can now divide by L^d and let L →∞ to conclude that G_T μ, C_ℓ/ℓ^d≤ g_T μ +C ξ_T,μ^2 _ℓ. which concludes the proof of Proposition <ref>. In our setting, a slightly different way of stating the convergence rate will be useful. Instead of shifting the total free energy, we modify the chemical potential μ. 
This works well at T>0 but we get a slightly worse lower bound for T=0, which is probably an artefact of our proof. Under the same assumptions as in Proposition <ref>, we haveg_T []μ+C_T(1+ξ^2_T,μ)_ℓ≤G_T μ, C_ℓ/ℓ^d≤ g_T []μ-C_T(1+ξ^2_T,μ)_ℓfor T>0, andg_0(μ+Cξ_0,μ/ℓ^s-d/s-d+1)≤G_0 μ, C_ℓ/ℓ^d≤ g_0(μ-Cξ_0,μ_ℓ).Let ρ:=-∂/∂μ_+ g_T(μ) be the (right) density corresponding to the chemical potential μ. If ρ≥1 we simply estimate ξ_T,μ^2≤ξ_T,μ^2ρ. For ρ≤1 we use Corollary <ref> (in the limit L→), which providesξ_T,μ^2≤ C_Tρ,for ρ≤1 and T>0where C_T only depends on T. All in all, we can thus bound when T>0g_T(μ)-Cξ_T,μ^2_ℓ≥ g_T(μ)+C(1+ξ_T,μ^2)_ℓ∂/∂μ_+g_T(μ)≥ g_T(μ+C_T(1+ξ_T,μ^2)_ℓ)andg_T(μ)+Cξ_T,μ^2_ℓ≤ g_T(μ)-C(1+ξ_T,μ^2)_ℓ∂/∂μ_+g_T(μ)≤ g_T(μ-C_T(1+ξ_T,μ^2)_ℓ),by concavity of μ↦ g_T(μ). This is the claimed estimate (<ref>).At T=0 the situation is more complicated, since we have no good control on ξ_0,μ at low density. Recall that μ↦ g_0(μ) vanishes over some interval (-,μ_c) for some unknown μ_c=f_0'(0). We need to revisit the proof of Proposition <ref> and we will get a slightly worse lower bound.Lower bound on G_0. We argue as before, except that we use _L as a trial state for the grand-canonical problem in C_L at a modified chemical potential μ + ν_ℓ, with ν_ℓ to be chosen later. We will also have to take the length of the corridors δ large. The free energy then satisfies𝒢_0^μ+ ν_ℓ_L≤L^d/ℓ+δ^d G_0 μ, C_ℓ + L^d/ℓ+δ^d∑_k ∈^d k ≠ 0 2D_wρ_ℓ,0, ρ_ℓ,k - ν_ℓ_L.For simplicity, in the interaction of one cube to the rest of the system we have added all the cubes in ^d. The main idea is to use the last term to control the interaction, but this requires an estimate in terms of _L. We thus use the Ruelle bound for the cubes outside of C_ℓ^0 but not for C^0_ℓ itself, and obtain by (<ref>) and Lemma <ref>D_wρ_ℓ,0, ρ_ℓ,k ≤ C r^d ξ_0,μ∑_j ∈𝒜[]n_C_r^0,j_0,μ,C_ℓ^0∑_m ∈𝒜1/1+ []C_r^0,j, C_r^k,m^s ≤ Cξ_0,μ∑_j ∈𝒜[]n_C_r^0,j_0,μ,C_ℓ^0∫_(C_ℓ+δ)^c y/1+ |X_j-y|^s ≤ Cξ_0,μ∑_j ∈𝒜[]n_C_r^0,j_0,μ,C_ℓ^0/(X_j,∂ C_ℓ+δ)^s-dwhere X_j denotes the center of the cube C_r^0,j. As a sub-optimal but simple bound we can use that (X_j,∂ C_ℓ+δ)≥δ and thus get after summing𝒢_0^μ+ ν_ℓ_L≤L^d/ℓ+δ^d G_0 μ, C_ℓ + [] Cξ_0,μ/δ^s-d- ν_ℓ_L.Choosing ν_ℓ=Cξ_0,μ/δ^s-d and passing to the limit L→, we arrive at(1+δ/ℓ)^d g_0 []μ+Cξ_0,μ/δ^s-d≤ℓ^-d G_0 μ, C_ℓ.On the other hand, using the stability of w, we have(1+t)g_0(μ)≥ g_0(μ+tμ+tκ),∀μ∈, t>0,and we thus obtainℓ^-d G_0 μ, C_ℓ≥ g_0(μ+Cξ_0,μ/δ^s-d +(μ+Cξ_0,μ/δ^s-d+κ)δ/ℓ).The optimal choice here is δ = ℓ^1/s-d+1, which leads to the lower bound in (<ref>), after changing the constant C_0 in the definition (<ref>) of ξ_0,μ.Upper bound on G_0. It turns out that we can get the optimal rate. We argue exactly as in the proof of Proposition <ref>, taking this timea minimizer for the chemical potential μ-ν_ℓ. We also split ^d=∪ C_ℓ^k using cubes a priori unrelated to the large cube C_L (the latter is not necessarily an exact union of the smaller cubes). In the estimate (<ref>) we use the important fact that[] n_C_r^k,γn_C_r^m,γ'_0,μ,C_L=[] n_C_r^k,γ_0,μ,C_L[]n_C_r^m,γ'_0,μ,C_Lsinceis here a delta measure as a minimizer, hence everything is deterministic. In the following we simply suppress the expectations. 
The same arguments as for the lower bound then provide[2] G_0 μ-ν_ℓ, C_L ≥L^d G_0 μ, C_ℓ/ℓ^d - Cξ_0,μ∑_k∑_j ∈𝒜n_C_r^k,j/1+(X_j,∂ C_ℓ^k)^s-d+ν_ℓ() =L^d G_0 μ, C_ℓ/ℓ^d +∫_C_L[]ν_ℓ- Cξ_0,μ∑_k∑_j ∈𝒜_C_r^k,jx/1+(X_j,∂ C_ℓ^k)^s-dρ_x𝕀 x ≥L^d G_0 μ, C_ℓ/ℓ^d +∫_C_L(ν_ℓ- C'ξ_0,μ/1+(x, ∪_k ∂ C_ℓ^k)^s-d)ρ_(x) 𝕀 x.Now we use that the right side simplifies if we average over the position of the tiling of size ℓ, at fixed . Under this procedure, the periodic functionx↦1/1+(x, ∪_k ∂ C_ℓ^k)^s-dis replaced by a constant over ^d, which equals its average over one cube. This can be bounded by1/ℓ^d∫_C^0_ℓ/1+(x, ∂ C_ℓ^0)^s-d≤ C_ℓ,due to the proof of Lemma <ref>. Thus we have proved thatG_0 μ-ν_ℓ, C_L≥ L^d G_0 μ, C_ℓ/ℓ^d +(ν_ℓ- C'ξ_0,μ_ℓ)()and we can conclude using ν_ℓ=C'ξ_0,μ_ℓ and taking L→. We will need the following tool. Let Ω = Ω_1 ∪Ω_2 ⊆^d be any disjoint union of Borel subsets, and consider a tiling ^d=⋃_k∈^d L_0(k+Q) of cubes of side length L_0, with Q=(0,1]^d. Letbe a grand-canonical state on Ω.For any v ∈^d satisfying Ω_1 ∩Ω_2 + v= ∅, we define a map T: Ω→Ω_v := Ω_1 ∪Ω_2 + v by Tx := x _Ω_1x + x+v_Ω_2x, and a state _v supported on Ω_v by _v_n x := _n T^-1^⊗ nx. Then the k-particle densities of _v are given by ρ__v^kx := ∑_n ≥ k1/n-k!∫_Ω_v^n-k_v x,y𝕀 y = ρ_T^-1^⊗ kx. Furthermore, if Ω_1, Ω_2 + v≥ 2r_0+2√(d)L_0, then the free energy of the state _v satisfies 𝒢_T _v≤𝒢_T+ 2κ𝒩 + C∑_k⟨ n_k^2⟩_/Ω_1, Ω_2+v^s-d, where n_k denotes the number of particles in the cube L_0(k+Q). Recall that r_0 is the range of the core of the interaction w, as defined in Assumption <ref>. The fact that the k-particle densities satisfy (<ref>) follows immediately from T being measure preserving, which also implies 𝒮_v = 𝒮. Hence, to finish the proof, we only need to provide a bound on the interaction energy 𝒰_v. Applying (<ref>), we have 𝒰_v = 1/2∬_Ω_v^2 w x-yρ__v^2x,y𝕀 x 𝕀 y = 𝒰 - ∬_Ω_1 ×Ω_2w x - yρ_^2x, y𝕀 x 𝕀 y+ ∬_Ω_1 ×Ω_2 + v w x - yρ_^2x, y-v𝕀 x 𝕀 y.To estimate the second term, we use the stability of w in the form [4] ∬_Ω_1 ×Ω_2w x - yρ_^2x, y𝕀 x 𝕀 y =2 ∬_Ω_1×Ω_2∑_n ≥ 21/n!∫_Ω^n-2∑_j < k w x_j - x_k_n x𝕀 x≥2 ∑_n ≥ 21/n!∫_Ω^n∑_j < k w_2 x_j - x_k_n x𝕀 x ≥ - 2 κ𝒩.To estimate the third term, we argue similarly as in (<ref>), leading to [6] ∬_Ω_1 ×Ω_2 + v w x - yρ_^2x, y-v𝕀 x 𝕀 y ≤C∑_k,m∫_Q_k∩Ω_1∫_Q_m∩Ω_2 /1+|x-y-v|^s n_k n_m_ ≤C∑_k,m∫_Q_k∩Ω_1∫_Q_m∩Ω_2 /1+|x-y-v|^s( n_k^2_+n_m^2_) ≤ C/(Ω_1,Ω_2+v)^s-d∑_kn_k^2_,by Lemma <ref>. This concludes the proof of (<ref>). We now provide a bound on the convergence rate at constant density. Let T ≥ 0 and ρ > 0. If T=0 we also assume that s>d+1. There is a constant C > 1 depending only on L_0, T, and w, such that for ℓ sufficiently large, |G_T ρ_C_ℓ/ℓ^d-f_T ρ|≤ξ(ρ)η_ℓ, with ξ(ρ):=Cρ e^Cρ^γ-1 if T>0, √(ρ)(1+ρ^γ-1)^2+2d/ if T=0, andη_ℓ:=ℓ^-1/2 if s>d+1, ℓ^-1/2√(logℓ) if s=d+1, ℓ^-s-d/1+s-d if d<s<d+1 . When γ=2, ρ^γ-1 has to be replaced by ρlog(2+ρ) in the definition of ξ(ρ). Here, =min(1,s-d)/2 is the same as in (<ref>). First we quickly discuss the lower bound. We introduce μ=μ(ρ) (at T=0 we take the largest admissible μ) and writeℓ^-dG_T[ρ_C_ℓ] ≥ℓ^-dG_T(μ,C_ℓ)+μρ≥ g_T(μ)+μρ -Cξ_T,μ^2_ℓ=f_T(ρ)-Cξ_T,μ^2_ℓby Proposition <ref>. Recall that ξ_T,μ is defined in (<ref>). When T>0, we have ξ_T,μ^2≤ξ(ρ) for a large enough constant C, due to the bounds (<ref>) on the chemical potential. This provides an estimate better than the one stated, with _ℓ in place of η_ℓ. When T=0 we only getℓ^-dG_0[ρ_C_ℓ]≥ f_0(ρ)-C(1+ρ^γ-1)^2+2d/_ℓ.When γ=2, ρ^γ-1 is replaced by ρlog(2+ρ). 
This does not have the claimed behavior √(ρ) at low density. However, from the universal bounds in <cit.> we see that[]ℓ^-d G_0 ρ_C_ℓ - f_0 ρ≤ Cρ(1+ρ^γ-1)(with an additional logarithm for γ=2). We can therefore introduce a power of ρ at the expense of decreasing the power of ℓ:ℓ^-d G_0 ρ_C_ℓ - f_0 ρ≥ -C√(_ℓ)√(ρ)(1+ρ^γ-1)^3/2+d/,which can be bounded by ξ(ρ). Note that √(_ℓ)=η_ℓ since s>d+1 by assumption.Next we turn to the upper bound. This time, we split the cube of interest into smaller cubes. For consistency with the previous proofs we thus call the length of interest L and the smaller length ℓ. Let δ > 0 and the length ℓ + δ =: ℓ < L. We consider the tiling of space^d=⋃_k∈^dC_ℓ^k, C^k_ℓ:=C_ℓ + kℓ.Consider a function ρ_ℓ in C_ℓ̃ supported in the slightly smaller cube C_ℓ, and such that ∫_C_ℓρ_ℓ = ρℓ^d. Repeat this function in each cube C_ℓ^k to obtain an ℓ̃–periodic function over ^d. When we average this function over translations of the tiling, we obtain a constant ρ over the whole space: ρ = 1/ℓ^d∫_C_ℓ∑_k ∈^dρ_ℓx-k ℓ - τ𝕀τ,∀ x∈^d.The idea of the proof is to interpret this as a partition of unity and writeρ_C_Lx = 1/ℓ^d∫_C_ℓ∑_k ∈^dρ_ℓx-k ℓ - τ_C_Lx𝕀τ.We choose the size of the corridors, δ, to be large enough but fixed. Let then _ℓ be the Gibbs state (or a minimizer at T=0) for G_T(μ_ℓ,C_ℓ), where μ_ℓ is chosen so that(_ℓ)=ρℓ^d.In other words, _ℓ has the exact average density ρ in the bigger cube C_ℓ̃. We take ρ_ℓ:=ρ__ℓ. For every τ, we will use _ℓ to construct a trial state _τ of densityρ__τ(x):=∑_k ∈^dρ_ℓx-k ℓ - τ_C_Lx=:∑_k∈^dρ_τ^k(x).Then the state:=1/ℓ^d∫_C_ℓ_τ τhas the desired density ρ_C_L. We now explain how to construct _τ. For every fixed τ, we call _τ⊂^d the indices of the cubes which are completely inside C_L:_τ:={k∈^d : C_ℓ^k+τ⊂ C_L}.We call _τ the set of indices so that C_ℓ^k+τ intersects the boundary of C_L. We use the fact that the union of these boundary cubes⋃_k∈_τC_L∩(C^k_ℓ̃+τ)is made of a certain number of complete cubes which have been split into finitely many parts as displayed in Figure <ref>. In other words, any piece of cube at the boundary can be merged with some other pieces to build an entire cube. We can thus write⋃_k∈_τC_L∩(C^k_ℓ̃+τ)=⋃_k∈'_τT_k(C^k_ℓ̃+τ)where T_k is a translation map as in Lemma <ref> (or, rather, the composition of finitely many such maps depending on the number of pieces), and '_τ is a subset of _τ.Next we use as trial state for a given shift τ_τ=⊗_k∈_τ_ℓ^k⊗⊗_k∈'_τ_ℓ^kwith _ℓ^k:= _ℓ+kℓ̃+τ and _ℓ^k constructed from _ℓ using Lemma <ref>. This state has the desired density ρ_τ^k by construction. From the subadditivity of the entropy, we haveG_T[ρ_C_L]≤1/ℓ̃^d∫_C_ℓ̃_T(_τ) τand_T(_τ)=∑_k∈_τ_T(_ℓ)+∑_k∈'_τ_T(^k_ℓ)+2∑_k≠ k'∈_τ∪'_τD(ρ_τ^k,ρ_τ^k').Using Lemma <ref> and the fact that the pieces of a cube at the boundary are at a distance of order L from each other, we have for k∈'_τ_T(^k_ℓ)≤_T(_ℓ)+ ℓ̃^d2κρ + C/L^s-d∑_j∈^dn_j^2__ℓwhere n_j denotes the number of particles in the cube L_0(j+Q), with L_0 the side length for which the Ruelle bound (<ref>) holds. When T>0, the Ruelle bound provides∑_jn_j^2__ℓ≤ Cℓ^dξ_T,μ_ℓ^2≤ Cρ e^Cρ^γ-1ℓ̃^d,using the definition of ξ_T,μ and the bounds (<ref>) on the chemical potential. There is an additional logarithm when γ=2. 
When T=0 the state _ℓ can be taken to be a delta measure, hencen_j^2__ℓ=n_j^2__ℓ≤ξ_0,μ_ℓn_j__ℓso that∑_jn_j^2__ℓ≤ Cρ(1+ρ^γ-1)^1+d/ℓ̃^d.On the other hand, the interaction between any cube and the rest can be estimated exactly like in (<ref>) in the proof of Proposition <ref> by∑_k'∈_τ∪_τ'∖{k}D(ρ_τ^k,ρ_τ^k')≤ Cξ_T,μ_ℓ^2ℓ^d_ℓ.Note that this uses the presence of the corridors to estimate w≤κ(1+|x|^s)^-1. Thus we have proved that_T(_τ)/L^d≤_T(_ℓ)/ℓ̃^d+ζ(ρ)ρ(ℓ/L+_ℓ) if T>0, ρℓ/L+_ℓ if T=0.whereζ(ρ)= Ce^Cρ^γ-1 if T>0,C(1+ρ^γ-1)^2+2d/ if T=0,with an additional logarithm for γ=2. We have used here that the volume of the union of the cubes in '_τ can be controlled by L^d-1ℓ, since ℓ and ℓ̃ are comparable. Next we study the free energy of _ℓ and recall thatℓ̃^-d𝒢_T _ℓ = (1+δ/ℓ)^-d f_T (ρ(1+δ/ℓ)^d,ℓ).We apply <ref> and obtainf_T (ρ(1+δ/ℓ)^d,ℓ) = g_T(μ_ℓ,C_ℓ)+μ_ℓρ(1+δ/ℓ)^d ≤g_T(μ_ℓ-ζ(ρ)_ℓ)+μ_ℓρ(1+δ/ℓ)^d.so thatℓ̃^-d𝒢_T _ℓ≤ (1+δ/ℓ)^-dg_T(μ_ℓ-ζ(ρ)_ℓ)+μ_ℓρ.If T>0 we use that g_T(μ)≥ -Te^1/T(μ+κ) due to the stability of w and the free energy of the non-interacting gas. This gives(1+δ/ℓ)^-dg_T(μ_ℓ-ζ(ρ)_ℓ) ≤ (1-Cδ/ℓ)g_T(μ_ℓ-ζ(ρ)_ℓ)≤ g_T(μ_ℓ-ζ(ρ)_ℓ)+Cδ/ℓTe^μ_ℓ/T≤ g_T(μ_ℓ-ζ(ρ)_ℓ)+_ℓρζ(ρ),from the bounds on μ_ℓ in Proposition <ref> and the fact that 1/ℓ≤_ℓ, after increasing the constant in ζ(ρ). From the duality formula f_T(ρ)=sup_ν{g_T(ν)+νρ}, we then obtain from (<ref>)ℓ̃^-d𝒢_T _ℓ≤ f_T(ρ)+2ζ(ρ)_ℓρ.At T=0 we argue slightly differently and immediately use the duality formula in (<ref>) to getℓ̃^-d𝒢_0 _ℓ≤(1+δ/ℓ)^-dg_0(μ_ℓ-ζ(ρ)_ℓ)+μ_ℓρ ≤(1+δ/ℓ)^-df_0(ρ)+ζ(ρ)ρ_ℓ/(1+δ/ℓ)^d+μ_ℓρ[] 1-1/(1+δ/ℓ)^d ≤f_0(ρ)+Cδ/ℓf_0(ρ)_-+ζ(ρ)ρ_ℓ+Cδ/ℓ(μ_ℓ)_+ρ.Using that f_0(ρ)≥ -κρ by stability and the upper bounds on μ_ℓ in Proposition <ref>, we get the same as (<ref>) at T=0.As a conclusion, after increasing the constant in ζ(ρ), we have proved in all cases_T(_τ)/L^d≤ f_T(ρ)+Cζ(ρ)ρ(ℓ/L+_ℓ) if T>0, ρℓ/L+_ℓ if T=0.After optimizing we are led to choosingℓ=√(L) for s>d+1, √(Llog L) for s=d+1, L^1/1+s-d for d<s<d+1for T>0 or T=0 with ρ≥1, andℓ=√(L/ρ) for s>d+1, √(L/ρlog (L/ρ)) for s=d+1, (L/ρ)^1/1+s-d for d<s<d+1 for T=0 and ρ≤1. This is how we get η_L for the error term. When T=0 and s=d+1 we have an additional logρ which is why we have assumed for simplicity that s>d+1 at zero temperature, in the statement. This concludes the proof of Proposition <ref>. § PROOF OF THEOREM <REF> ON THE LOCAL DENSITY APPROXIMATION§.§ Lower boundIn this subsection, we prove the lower bound on G_T ρ in <ref>. Let M > 0, and p ≥ 1, and let w be a short-range interaction (satisfying the conditions of <ref>). Furthermore, take any b ≥ 0 satisfying b >1+min(1,s-d)( 1-1/2p). For any T ≥ 0, there exists a constant C > 0 depending on M, p, b, T, d, and w, such that G_T ρ - ∫_^d f_T ρx𝕀 x ≥ - C √(_ℓ)[]∫_^d√(ρ) + ℓ^bp∫_^dδρ_ℓ(z)^pz, for any ℓ > 0, and any density 0 ≤ρ∈ L^1 ^d satisfying ρ∞≤ M and √(ρ)∈ L^1 ^d. Here _ℓ is defined by (<ref>).The upper bound stated in Proposition <ref> below will require a stronger condition on b. In Theorem <ref> we have just taken the worse of the two.The main difficulty in proving Proposition <ref> is that we have little information on the optimal state minimizing G_T[ρ]. In particular we have no local Ruelle-type bounds on (except on the average local number which is given by ρ by definition). To circumvent this problem, we will argue by Legendre-Fenchel duality and just remove the constraint on the density at the expense of adding an external potential. 
More precisely, we use thatG_T ρ≥min_{_T()+∫_^dV(x)ρ_(x)x}-∫_^dV(x)ρ(x)xfor any external potential V. There is equality if we maximize over V but here we just pick a well-chosen V to obtain the lower bound (<ref>). To construct this external potential we tile ^d with cubes of side length ℓ. In each cube we take V to be an approximation of the expected opposite chemical potential V≈ -μ with μ=f'_T(ρ). More precisely, in the cubes C_k where ρ varies slowly enough (that is, δρ_ℓ is small enough compared to the other terms), we just take the potential to be constant, for instance -f_T'(max_C_kρ). In the cubes where ρ varies too much, we take V(x)=-f'_T(ρ(x)). However, we will need to modify this choice of V a bit to also handle the zero temperature case. The proof requires to estimate the interactions between the cubes, for which we need the Ruelle bounds of Appendix <ref>. Due to the universal bounds on G_T ρ proved in <cit.>, and the bounds on f_T in <ref>, the estimate (<ref>) holds when ℓ is finite, even without the term involving δρ_ℓ. We may thus assume in the whole proof that ℓ is large enough.We write the whole proof for T>0 and explain at the end how to adapt it for T=0. As announced, we split ^d into a union of cubes C_k = [ -ℓ/2, ℓ/2 )^d +ℓ k, k ∈^d of side length ℓ > 0. Denote by V the one-body potential V = ∑_k ∈^d v_k _C_k, v_k = -μ_k + K_ℓ where the functions μ_k will be chosen below. The constant K depends on M and we explain later how to choose it so that the shift appearing in the lower bound on the convergence rate in Corollary <ref> is bounded above by K_ℓ. We let = _n be a grand-canonical state minimizing the grand-canonical free energy G_T[ρ] at fixed density ρ. Then we can rewrite and bound from below G_T ρ =G_T ρ + ∫ V ρ - ∫ V ρ = 𝒰 + ∑_n ≥ 1∫_^dn∑_i=1^n V x_i𝕀_n x - T 𝒮 - ∫ V ρ ≥ inf_ = _n[]𝒰 +- T 𝒮 - ∫ V ρ, wheredenotes the grand-canonical energy of the potential V in the state . In the minimization problem above, we have removed the restriction that the one-body density of the statemust be equal to ρ, meaning that the minimizer is just a Gibbs state (with external potential V).Choosing the μ_k and localizing. Since the map ρ↦μ(ρ) is well defined for T>0, but its inverse might not exist in case of phase transitions, it is easier to think in terms of densities. For any k, we will thus choose some function ρ_k in C_k and then take μ_k(x):=f'_T(ρ_k(x))=μ(ρ_k(x)). We will either pick ρ_k = max_C_kρ constant (in which case μ_k is also constant), or just ρ_k=ρ_C_k. By definition, we have in all cases f_T ρ_k x = g_T μ_k x + μ_k xρ_k x,∀ x∈ C_k.Note that in either case μ_k is universally bounded from above in terms of M ≥ρ, by (<ref>). Hence by (<ref>) we must have ξ_T,μ_k≤ C_M for all k. We choose the constant K in (<ref>) so that the shift appearing in Corollary <ref> is controlled by K_ℓ:K≥ C_T(1+ξ^2_T,μ_k),∀ k. Let us now denote bythe Gibbs state in the external potential V, that is, minimizing (<ref>). By geometric localization and sub-additivity of the entropy, we obtain in the same way as (<ref>), G_T ρ≥ 𝒰 +- T 𝒮 - ∫_^d V ρ ≥ ∑_k ∈^d G_T^v_k(C_k) + ∫_C_kμ_k ρ + ∑_k,m ∈^d k ≠ mI_k,m_ - K_ℓ∫_^dρ, where G_T^vC_k denotes the minimal grand-canonical energy in C_k with external potential v, and I_k,m_ :=1/2∬_C_k × C_m w x-yρ_^2x,y𝕀 x 𝕀 y denotes the interaction between the cubes C_k and C_m. We will first choose the ρ_k and bound G_T^v_k(C_k) from below, depending on the behaviour of ρ in each cube, and then treat the interaction terms at the end. 
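As a consistency check for this strategy, note that the choice V=-f_T'(ρ) makes the duality bound above an equality in the absence of interaction. A minimal sketch, assuming w≡0, T>0 and ρ>0: taking V=-Tlogρ=-f_T'(ρ), the minimum over states is attained by the Poisson state of intensity e^-V/T=ρ and equals -T∫_^de^-V/T=-T∫_^dρ, so that the right side of the duality bound becomes -T∫_^dρ+T∫_^dρlogρ=∫_^df_T(ρ), with f_T(ρ)=Tρlogρ-Tρ the free energy of the ideal gas. There is thus no loss at all in the non-interacting case, and the potential V chosen above should be thought of as a cube-wise, approximately constant, version of -f_T'(ρ).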
For the remainder of the proof, we denote by r_k:=min_C_kρ, R_k:=max_C_kρ the (essential) minimum and maximum values of ρ in the cube C_k.Free energy in the simple cubes. Now, to decide whether to use ρ_k = ρ or ρ_k = R_k, we consider all the k's for which∫_C_kρx𝕀 x ≤√(_ℓ)(∫_C_k√(ρx)𝕀 x + ℓ^bp+dδρ_ℓℓ k^p). where p≥ 1 and b satisfying (<ref>) are fixed as in the statement. In other words we look at the cubes where ∫_C_kρ is already controlled by the error term we are aiming for. A similar argument was used in the quantum case in <cit.>. Note that the set of such “simple” cubes where (<ref>) holds contains all the cubes where the density is uniformly small. In fact, if R_k≤_ℓ, then ∫_C_kρx𝕀 x ≤√(_ℓ)∫_C_k√(ρx)𝕀 x, so (<ref>) holds, even with just the square root term.For all those simple cubes, we choose ρ_k=ρ_C_k and use the bound on ∫_C_kρ to control the whole free energy. Using G_T^v C_k≥ - C ∫_C_k e^-v/T along with <ref>, we get for T> 0, G_T^v_kC_k + ∫_C_kμ_k ρ - ∫_C_k f_T ρ≥- C ∫_C_k e^μ_k-K_ℓ/T+ ∫_C_kT logρ_k - Cρ- ∫_C_k C ρ + T ρlogρ ≥- C∫_C_kρ.We have used here that |μρ-Tlogρ|≤ C_T,M for ρ≤ M by (<ref>) and (<ref>), as well asf_T(ρ)≤ C_T(ρ^γ+ρ)+Tρlogρ≤ C_T,Mρ+Tρlogρby (<ref>). It is thus important that ρ_k(x)=ρ(x) in these cubes, so that the two logarithms cancel each other. Combining with (<ref>), we arrive at G_T^v_kC_k + ∫_C_kμ_k ρ - ∫_C_k f_T ρ≥ -C√(_ℓ)( ∫_C_k√(ρ) + ℓ^bp+dδρ_ℓℓ k^p) for the simple cubes. We obtain the desired error term for the local free energy, after summing over k.Free energy in the main cubes. For the remaining cubes we have ∫_C_kρx𝕀 x > √(_ℓ)( ∫_C_k√(ρx)𝕀 x + ℓ^bp+dδρ_ℓℓ k^p). We call these the “main cubes” since, as we will see, the bound (<ref>) implies that the density is slowly varying in C_k, hence this is where the Local Density Approximation is efficient. In these cubes we take ρ_k to be a constant: ρ_k=R_k=max_C_kρ. We recall that the corresponding μ_k satisfies (<ref>). Since ρ_k is constant, v_k = -μ_k+K _ℓ is constant as well, so we can directly use the convergence rate from <ref>, G_T^v_kC_k=G_T -v_k, C_k≥ ℓ^d g_T []μ_k-K_ℓ + C_T(1+ξ^2_T,μ_k-K_ℓ)_ℓ ≥ ℓ^d g_T []μ_k-K_ℓ + C_T(1+ξ^2_T,μ_k)_ℓ ≥ ℓ^d g_T μ_k= ℓ^d (f_T ρ_k - μ_k ρ_k). We used that ξ_T,μ and g_T are respectively increasing and non-increasing in μ. Using that μ_k ≤ C_T and ρ≤ρ_k=R_k, we obtain G_T^v_kC_k + ∫_C_kμ_k ρ≥ ℓ^d f_T ρ_k + ∫_C_kμ_k ρ-ρ_k ≥ ℓ^d f_T R_k -C ∫_C_k(R_k-ρ). By the argument just below (<ref>), necessarily R_k ≥_ℓ for all the cubes satisfying (<ref>). Using (<ref>), we then have R_k-r_k ≤ ℓ^1-b-d/p_ℓ^-1/2p[]∫_C_kρ^1/p≤ℓ^1-b_ℓ^-1/2p R_k^1/p ≤ ℓ^1-b_ℓ^-1/2p(_ℓ^-1)^p-1/pR_k =ℓ^1-b_ℓ^-1+1/2p R_k. We require that b is so large that the coefficient of the right side tends to zero ℓ^1-b_ℓ^-1+1/2p→ 0, which can be checked to be true under the condition (<ref>). Hence, we obtain for ℓ large enough R_k-r_k≤ R_k/2, which gives _ℓ≤ R_k ≤ 2r_k≤ 2ρ≤ 2R_k, that is, the (essential) maximum and minimum of ρ are comparable and not too small in the box C_k. With this we can go back to (<ref>). By convexity of f_T we have f_T(R_k) ≥f_T(ρ)+μ(ρ)(R_k-ρ) ≥f_T ρ + (Tlogρ-C)(R_k-ρ) ≥ f_T ρ-Clogℓ(R_k-ρ), due to the lower bound (<ref>) on μ(ρ) and (<ref>). We used here that _ℓ is a power of ℓ (with an additional logarithm when s=d+1).As usual, the constant depends on T and M≥ρ_. We have thus proved G_T^v_kC_k + ∫_C_kμ_k ρ≥∫_C_kf_T(ρ)-Clogℓ∫_C_k(R_k-ρ). 
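For orientation, consider the simplest case s>d+1 (so that _ℓ=ℓ^-1) with p=1. The condition b>1+min(1,s-d)(1-1/2p) from the statement then reads b>3/2, and the oscillation bound established above becomes R_k-r_k≤ℓ^3/2-bR_k, so that on every main cube the relative oscillation of ρ is at most ℓ^-(b-3/2), which tends to zero. This is the precise sense in which ρ is slowly varying on the main cubes, and it is what makes the replacement of ρ by the constant R_k accurate there.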
On the other hand, we have for p > 1, using (<ref>), logℓ∫_C_kR_k-ρ≤ ℓ^d+1δρ_ℓℓ klogℓ = ℓ^1-b_ℓ^1/2p-1/2logℓ[]√(_ℓ)ℓ^d^p-1/p(ℓ^bp+dδρ_ℓ(ℓ k)^p)^1/p ≤ √(2)ℓ^1-b_ℓ^1/2p-1/2logℓ[]∫_C_k√(ρ)^p-1/p[]ℓ^bp+dδρ_ℓ(ℓ k)^p ^1/p ≤Cℓ^1-b_ℓ^1/2p-1/2logℓ[]∫_C_k√(ρ)+ℓ^bp+dδρ_ℓ(ℓ k)^p . In the second inequality we used that √(_ℓ)ℓ^d≤∫_C_k√(R_k)≤√(2)∫_C_k√(ρ) by (<ref>). We require that ℓ^1-b_ℓ^1/2p-1/2logℓ≤ C√(_ℓ), which implies our other condition (<ref>), due to the additional logarithm. From the definition of _ℓ in (<ref>), the condition (<ref>) is true when b > 2-1/2p if s≥ d+1, 1+(s-d) []1-1/2p if d<s<d+1. The strict inequality is used to control the logarithm. This is exactly the condition (<ref>) from the statement. When p=1 we also get the desired bound, using the strict inequality in the condition on b. This means that for the “main” cubes satisfying (<ref>), we obtain from (<ref>) G_T^v_kC_k + ∫_C_kμ_k ρ≥∫_C_k f_T ρx𝕀 x - C√(_ℓ)[]∫_C_k√(ρ) +ℓ^bp+dδρ_ℓℓ k^p . Combining (<ref>) with (<ref>) and (<ref>), we have proven after summing over k [6] G_T ρ - ∫_^d f_T ρx𝕀 x≥ ∑_k, m ∈^d k ≠ m[]I_k,m_ - C√(_ℓ)[]∫_^d√(ρ) +ℓ^bp+dδρ_ℓℓ k^p . It remains to provide a bound on the interaction terms.Bound on the interactions at T>0. We claim that ∑_k,m ∈^d k ≠ mI_k,m_≥ - C√(_ℓ)[]∫_^dρ + ∫_^d√(ρ) + ℓ^bp + d∑_k ∈^dδρ_ℓℓ k^p .The proof will make use of a local version of the Ruelle bound (<ref>) in the presence of an external potential V, which is bounded from below by a constant -μ_0. Precisely, there exist C, L_0 > 0, such that for any cube Q of side length L_0, we have[]n_Q^2_T,V≤ C |Q|∫_Q e^-1/T V x𝕀 x(1+e^dμ_0/ T).The side length L_0 of the cube Q can be chosen arbitrarily large and the constant C depends on L_0 and the temperature T. The estimate (<ref>) is in Corollary <ref> in Appendix <ref>. When ∫_Q e^-1/T V x is small, it can be proved the same as in <cit.> and <cit.>.First, we recall that the potential V is given by (<ref>), and thatis the Gibbs state minimizing the problem in (<ref>). Since ℓ≥ L_0, we can choose an r∈L_0; 2L_0 such that each C_k is exactly a disjoint union of cubes of side length r, C_k = ⋃_γ∈𝒜 Q_k,γ. Since V = - μ_k + K_ℓ in each of the smaller cubes Q_k, γ, we get by throwing away the positive part of the interaction and repeating the calculation in (<ref>), ∑_k≠ mI_k,m_≥- ∑_k≠ m∑_γ, γ' ∈𝒜κ/1+ []Q_k,γ, Q_m,γ'^s []n_Q_k,γ^2_T, V^1/2[]n_Q_m,γ'^2_T, V^1/2 ≥- ∑_k≠ m∑_γ, γ' ∈𝒜κ/1+ []Q_k,γ, Q_m,γ'^s []n_Q_k,γ^2_T, V ≥- c_M ∑_k∑_γ∈𝒜∫_Q_k,γ∫_(C_k)^c /1 + x-y^s∫_Q_k,γρ_k.Here we have used that2[]n_Q_k,γ^2_T, V^1/2[]n_Q_m,γ'^2_T, V^1/2≤[]n_Q_k,γ^2_T, V+ []n_Q_m,γ'^2_T, Vand then[]n_Q_k,γ^2_T, V≤ C∫_Q_k,γe^μ_k/T≤ C∫_Q_k,γρ_kdue to the Ruelle bound (<ref>), and the estimates on μ_k in terms of ρ_k from <ref>. We also used (<ref>) to replace 1/(1+(Q_k,γ,Q_m,γ')^s) by an integral.Next we have two cases. In the “simple” cubes C_k we just estimate∫_(C_k)^c /1 + x-y^s≤∫_^d /1 + x-y^s≤ Cand obtain∑_γ∈𝒜∫_Q_k,γ∫_(C_k)^c /1 + x-y^s∫_Q_k,γρ_k≤ C∫_C_kρwhich can be bounded by the desired error terms due to the definition (<ref>) of the simple cubes. In the “main” cubes, we use that ρ_k≡ R_k is constant and thus get[4] ∑_γ∈𝒜∫_Q_k,γ∫_(C_k)^c /1 + x-y^s∫_Q_k,γρ_k=R_k r^d∫_C_k∫_(C_k)^c /1 + x-y^s≤ c r^d_ℓℓ^d R_k≤ c 2r^d_ℓ∫_C_kρ,by Lemma <ref> and the fact that ρ≥ R_k/2 on C_k. We recall that r is the (fixed) side length of the small cubes Q_k,γ. We have therefore proved (<ref>). Concluding the proof for T>0. 
Inserting the estimate (<ref>) into (<ref>), we finally conclude that G_T ρ - ∫_^d f_T ρx𝕀 x ≥ -C√(_ℓ)[]∫_^dρ + ∫_^d√(ρ) + ℓ^bp+d∑_k∈^dδρ_ℓℓ k^p . The terms in this inequality are all invariant under translations except for the last sum, due to our initial choice of the tiling of space. However, since (<ref>) holds for all densities ρ we can freely average the bound over translations ρ(· +τ) with τ∈ C_ℓ, which amounts to averaging over the position of the tiling. This is how we obtain the bound (<ref>) in Proposition <ref> for T>0. Zero temperature case. We conclude the proof by explaining how to treat the case T=0.Since ρ↦μ(ρ) is not necessarily single-valued, we can for instance instead take μ_k to be the largest value of μ such that (<ref>) holds. In addition, we do not insert any shift and instead define v_k=-μ_k for all k.We take the same definition of the “simple” and “main” cubes as in the T>0 case, i.e. the cubes satisfying (<ref>) and (<ref>), respectively. In the “main” cubes (those for which (<ref>) holds) we take μ_k to be maximal such that (<ref>) holds with ρ_k=R_k:=max_C_kρ. In the “simple” cubes (where we again have R_k≤_ℓ) we do not take ρ_k=ρ as we did for T>0, but rather choose μ_k = -C sufficiently negative, such thatG_0^v_kC_k= 0,n_C_k_=0.In other words, we enforce that there is no particle at all in the simple cubes. For the first condition it suffices that C≥κ, the stability constant. For the second condition (recall thatis the minimizer with the external potential V), we have to use the Ruelle bound (<ref>) which provides the existence of such a constant C, depending on M.For the simple cubes, we are thus left withG_0^v_kC_k + ∫_C_kμ_k ρ - ∫_C_k f_0 ρ= -C∫_C_kρ - ∫_C_k f_0 ρ ≥ -C_M∫_C_kρ≥-C_M√(_ℓ)( ∫_C_k√(ρx)𝕀 x + ℓ^bp+dδρ_ℓℓ k^p),since at T=0 we have |f_0(ρ)|≤ C_Mρ by <ref>. In the main cubes we again have R_k≥_ℓ and (<ref>), so (<ref>) still holds for ℓ sufficiently large. However, because f_0'ρ≥ -C, (<ref>) is replaced by∫_C_k(R_k-ρ)≤ Cℓ^1-b_ℓ^-1/2+1/2p(∫_C_k√(ρ)+ℓ^bp+dδρ_ℓ( ℓ k)^p),that is, the logarithm in (<ref>) disappears. The condition we need on b is therefore the same (<ref>) as in the T > 0 case (but without the logℓ), which is true under (<ref>). This allows us to argue as for T>0 to estimate the local energy for the main cubes. One difference is that we use instead the additive convergence rate in (<ref>), which givesG_0^v_kC_k + ∫_C_kμ_k ρ - ∫_C_k f_0 ρ ≥ -C∫_C_k (R_k-ρ) -Cℓ^d_ℓ≥ -C∫_C_k (R_k-ρ) -2C√(_ℓ)∫_C_k√(ρ),where in the last bound we have used that ρ≥ R_k/2≥_ℓ/2 in the box C_k. Inserting (<ref>) leads to the expected error term after summing over k.Finally, the interaction is estimated similarly as for T>0, but we use that there is just no “simple” cube to consider, since there is no particle at all there. Like for Corollary <ref>, the main difficulty is that ξ_0,μ cannot easily be controlled in terms of ρ at low density, as we used at T>0 in (<ref>). Therefore we use instead that ρ_k is uniformly bounded over the main cubes and get similarly as in the proof of Proposition <ref>∑_k,m ∈^d k ≠ mI_k,m_≥ -C∑_main cubesℓ^d_ℓ ≥ -2C√(_ℓ)∑_main cubes∫_C_k√(ρ)≥-2C√(_ℓ)∫_^d√(ρ)as desired. This concludes the proof of Proposition <ref>.§.§ Upper bound Let M > 0 and p ≥ 1, and let w be an interaction satisfying <ref>. Let T≥0 and assume furthermore that s > d+1 if T=0. Let b > 2 - 1/2p ifp ≥ 2,3/2+1/2p if1 ≤ p < 2. 
There exists a constant C > 0 depending on M,p,b,T,d, and w, such that G_T ρ - ∫_^d f_T ρx𝕀 x≤Cη_ℓ[]∫_^d√(ρ)+ ℓ^bp∫_^dδρ_ℓ(z)^pz, for any ℓ > 0, and any density 0 ≤ρ∈ L^1 ^d satisfying ρ∞≤ M and √(ρ)∈ L^1 ^d. Recall that η_ℓ is defined in <ref>. For s>d+1 we just have η_ℓ=√(_ℓ)=1/√(ℓ) so that (<ref>) takes the same form as the lower bound in Proposition <ref>. The constraint on b is slightly worse when 1≤ p<2, however. For T=0 we can also handle d<s≤ d+1 but we get an error worse than η_ℓ. For simplicity we do not give the details. The proof uses very similar arguments as for the lower bound, but with difficulties at different places. The interaction is much easier to treat but the local free energy is slightly trickier to estimate. Again, due to the universal bounds proved in <cit.>, the estimate (<ref>) holds when ℓ is finite, even without the term involving δρ_ℓ. We can thus assume that ℓ is large enough.The first step is to localize the problem into cubes of side length ℓ > 0, with corridors. Following <cit.>, we pick a χ supported on the cube Q_ℓ:=[-ℓ/2,ℓ/2)^d with ∫χ=ℓ^d and use the relation1/ℓ^d∫_Q_ℓ∑_k ∈^dχ(x-ℓ k-τ) 𝕀τ=1,∀ x∈^d,which we interpret as a continuous partition of unity. We automatically have corridors if χ is supported well inside the cube and thus takeχ:=_C_0/1-δ/ℓ^d, C_0:=[-ℓ-δ/2,ℓ-δ/2)^d.where δ≥ r_0 (the range of the core of w). Then we let χ_k:=χ(·-ℓ k) and deduce that ρx = 1/ℓ^d∫_Q_ℓ∑_k ∈^dχ_kx-τρx𝕀τ =: 1/ℓ^d∫_Q_ℓ∑_k ∈^dρ_k,τx𝕀τ. For each k ∈^d and τ∈ Q_ℓ, we let _k,τ be the state minimizing the energy at fixed density ρ_k,τ and take as a trial state := 1/ℓ^d∫_Q_ℓ⊗_k ∈^d_k,τ𝕀τ. Then, from the concavity of the entropy we have G_T ρ≤𝒢_T ≤1/ℓ^d∫_Q_ℓ{∑_k ∈^d𝒢_T _k,τ + ∑_k,m ∈^d k ≠ m 2 D_w ρ_k,τ, ρ_m,τ}τ.From (<ref>), we can also write∫_^d f_T ρx𝕀 x=1/ℓ^d∫_Q_ℓ∫_^d∑_k ∈^dχ_kx-τ f_T (ρx) 𝕀 x τ.Since everything will be done at fixed τ, for simplicity we suppress τ from the notation. We also denote by C_k=C_0+ℓ k the translated cube. With these notations we have to estimateG_T[ρ_k,τ]-∫χ_k· -τ f_T ρ=G_T[ρ_C_k/(1-δ/ℓ)^d]-1/(1-δ/ℓ)^d∫_C_kf_T ρ. As for the lower bound, we then split the cubes into two categories, depending whether ρ varies too much or not. We thus consider all the cubes satisfying the simple estimate∫_C_kρ≤η_ℓ( ∫_C_k√(ρ) + ℓ^bp+dδρ_ℓℓ k^p),where η_ℓ is from Proposition <ref>, which we call the “simple cubes”. In those cubes we use the upper bound (<ref>) on G_T and the lower bound (<ref>) on f_T to obtain the simple estimate [3] G_T[ρ_k]-1/(1-δ/ℓ)^d∫_C_kf_T ρ ≤C ∫_C_kρ_k + T ∫_C_kρ_k logρ_k + ∫_C_k1/1-δ / ℓ^d[] C ρ - T ρlogρ =2C ∫_C_kρ_k + T ∫_C_kρ_k []logρ_k - logρ = 2C-Tdlog(1-δ/ℓ)/(1-δ/ℓ)^d∫_C_kρ≤ C∫_C_kρ.From the definition (<ref>) of the simple cubes, we get the desired error term for those cubes.Next we look at the “main cubes” for which∫_C_kρ≥η_ℓ( ∫_C_k√(ρ) + ℓ^bp+dδρ_ℓℓ k^p),We denote again r_k:=min_C_kρ, R_k:=max_C_kρ.In these cubes, we must have √(R_k)≥η_ℓ. The difference R_k - r_k is bounded in terms of R_k exactly as in (<ref>), R_k - r_k ≤ ℓ^1-bη_ℓ^-2+1/p R_k, while on the other hand, the same argument also gives R_k - r_k ≤ℓ^1-bη_ℓ^-1+1/p√(R_k) ifp ≥ 2 M^2-p/2pℓ^1-bη_ℓ^-1/p√(R_k) if1 ≤ p < 2. If we require that max[]ℓ^1-bη_ℓ^-1+1/p, ℓ^1-bη_ℓ^-1/p≤C η_ℓ/logℓ, which is true for any value of s > d under our choice of b in (<ref>), then we obtain in either case that the right hand side of (<ref>) tends to zero for large ℓ, and R_k - r_k ≤C η_ℓ√(R_k)/logℓ. 
Note the additional η_ℓ/logℓ on the right of (<ref>) compared to the similar estimate (<ref>) in the proof of the lower bound. From (<ref>) we obtain again η_ℓ^2 ≤ R_k ≤ 2 r_k ≤ 2 ρ≤ 2R_k for ℓ large enough. Furthermore, (<ref>) implies the pointwise bound ρ_k - r_k =1/1- δ / ℓ^dρ - r_k ≤ρ - r_k + c δ/ℓρ≤ Cη_ℓ√(R_k)/logℓ, since logℓ/ℓ≤η_ℓ. In the “main” cubes satisfying (<ref>), we will replace ρ_k by the minimum r_k of ρ in the cube C_k, using the sub-additivity bound in <ref>. More precisely, choosing = η_ℓ/logℓ in (<ref>) yields in the case α≠ d, G_T ρ_k =G_T r_k + ρ_k - r_k_C_k ≤G_T r_k _C_k + C [] 1+log[]η_ℓ/logℓ_- η_ℓ/logℓ√(R_k)ℓ^d ≤G_T r_k _C_k + Cη_ℓ∫_C_k√(ρ) ≤C_k f_T r_k + Cη_ℓ∫_C_k√(ρ). In the last line we have used the upper bound on the convergence rate from Proposition <ref> (and hence also the assumption s > d+1 when T = 0). The same bound can be obtained in the α = d case by using (<ref>). Similarly, the lower bound (<ref>) on f_T and ρ≥η_ℓ^2 / 2 in C_k imply - ∫_C_k f_T ρ≤ Clogℓ∫_C_kρ if T>0, ∫_C_k√(ρ) if T=0. Using logℓ/ℓ≤η_ℓ, we obtain in either case G_T ρ_k - 1/1-δ / ℓ^d∫_C_k f_T ρ≤∫_C_k(f_T r_k - f_T ρ) +Cη_ℓ∫_C_k√(ρ). By (<ref>) and the bounds on f_T'(ρ), we have as in (<ref>), ∫_C_k f_T r_k - f_T ρ≤ C logℓ∫_C_kρ - r_k≤ C η_ℓ∫_C_kρ. We conclude that for any “main” cube C_k where (<ref>) holds, we have G_T ρ_k - 1/1-δ / ℓ^d∫_C_k f_T ρ≤ C η_ℓ∫_C_k√(ρ). It remains to deal with the interaction terms. Due to the corridors, we haveD_w ρ_k,τ, ρ_m,τ≤κ∬ρ_k,τ(x)ρ_m,τ(y)/1+|x-y|^s .Hence, using the fact that ρ≤ M, we have∑_m∈^d∖{k}D_w ρ_k, ρ_m≤ C∫_C_k∫_(C_k)^cρ(x)/1+|x-y|^s .In the simple cubes we just bound the previous integral by ∫_C_kρ and then use (<ref>) to obtain η_ℓ∫_C_k√(ρ). In the main cubes satisfying (<ref>), we rather use Lemma <ref> and obtain∫_C_k∫_(C_k)^cρ(x)/1+|x-y|^s ≤ C_ℓℓ^d R_k≤ 2C_ℓ∫_C_kρ.We arrive at the desired estimate after integrating over τ. § UNIFORM BOUNDS ON ENERGY PER UNIT VOLUMEThis section contains the proofs of Proposition <ref> and Corollaries <ref> and <ref>. We note first that because f_T ρ,L is given by the minimization problem (<ref>), the upper bounds (<ref>) follow easily from the universal bounds (<ref>) and (<ref>) by simply using the density ρ_C_L, f_T ρ,L =inf_⊆ C_L𝒩() = ρ L^d 𝒢_T /L^d≤min_ρ_ = ρ_C_L𝒢_T /L^d =G_T ρ_C_L/L^d. Thus we need only to prove the lower bound (<ref>). For this, we use (<ref>) and split the minimization problem f_T ρ,L = inf_⊆ C_L𝒩() = ρ L^d 𝒢_T /L^d≥inf_⊆ C_L𝒩() = ρ L^d /L^d - sup_⊆ C_L𝒩() = ρ L^d T/L^d, where 𝒮 under the stated conditions is maximized by the Poisson state <cit.> with density ρ_C_L, with sup_⊆ C_L𝒩() = ρ L^d ≤ L^d ρ - ρlogρ. Thus we proceed to derive a bound on the interaction energy 𝒰. We write the cube C_L as a union of smaller cubes of side length 0 < r < √(d)^-1 r_0, C_L = ⋃_k ∈𝒜 C_k. Given a configuration x = x_1, …, x_n of n particles in C_L, we denote by n_k = #x ∩ C_k the number of particles in the cube C_k. It follows from <ref> that for any n ≥ 1, ∑_i < j^n w x_i - x_j≥ ∑_i < j^n r_0^α/κx_i-x_j<r_0/x_i-x_j^α - κ n≥ ∑_k ∈𝒜∑_x_i, x_j ∈ C_k i ≠ j1/κ d^α/2r^α/x_i - x_j^α - κ n≥ ∑_k ∈𝒜c/κ n_k^γ - []κ + c/κ n. We have used here the well-known inequality min_x_i ∈ C_r∑_i < j^n r^α/x_i - x_j^α≥ c n^γ - c n which holds for α≠ d in any cube of side length r, see for instance <cit.> and <cit.> (when α=d there is even a lower bound involving n^2log(n)). 
It follows for any statein C_L with 𝒩 = ρ L^d that we can estimate, using Jensen's inequality, 𝒰≥ ∑_n ≥ 01/n!∫_C_L^n∑_k ∈𝒜c/κ n_k^γ - []κ + c/κ n 𝕀_n≥ ∑_n ≥ 01/n!∫_C_L^nc/κ𝒜^1-γ n^γ𝕀_n - []κ + c/κ𝒩 ≥ c/κ𝒜^1-γ𝒩^γ - []κ + c/κ𝒩 =c r^d γ-1/κ L^d ρ^γ - []κ + c/κ L^d ρ. Combining with (<ref>) and choosing for instance r ≥r_0/2 √(d), we finally conclude for α≠ d that f_T ρ,L≥c r_0^d γ-1/κρ^γ - []κ + c/κ +Tρ + T ρlogρ. In the case α = d, we get in the same way, 𝒰≥ ∑_n ≥ 01/n!∫_C_L^n∑_k ∈𝒜c/κ n_k^2 log n_k - κ n 𝕀_n ≥ c 𝒜/κ[]𝒩/𝒜^2 log[]𝒩/𝒜_+ - κ𝒩 =c r^d L^d/κρ^2 []log r^d ρ_+ - κ L^d ρ. Again choosing r ≥r_0/2 √(d), we conclude that (<ref>) holds. Since f_T ρ,L is a convex function, we have for all ρ, ρ≥ 0, f_T ρ,L≥ f_T ρ,L + f_T' ρ,Lρ- ρ. Taking ρ = 0 and using f_T 0,L= 0 immediately gives μ_L ρ=f_T' ρ,L≥f_T ρ,L/ρ, so combining with the lower bounds on f_T from <ref> gives the stated lower bounds (<ref>) on μ_L ρ. For the upper bounds, we simply take ρ = 2 ρ and obtain μ_L ρ=f_T' ρ,L≤f_T 2ρ,L-f_T ρ,L/ρ. Using both the upper and lower bounds on f_T in <ref> yields the upper bounds (<ref>). At T=0 the argument is the same, using any derivative in the interval []∂/∂ρ_- f_0(ρ,L), ∂/∂ρ_+ f_0(ρ,L). Dividing C_L into smaller cubes of side length r ≤√(d)^-1 r_0, we follow the approach of the proof of the lower bound in <ref> and obtain as in (<ref>) for α≠ d, ∑_i < j^n w x_i - x_j - μ n ≥ ∑_k ∈𝒜c/κ n_k^γ - []κ + c/κ+ μ n_k≥- L^d K []κ + c/κ + μ_+^γ/γ-1, where the constant K depends on κ, α, d, and r_0. We use the bound (<ref>) when C := κ + c/κ≥ - μ. If C + μ≤ 0, then we just remove the core of the interaction, so all in all we obtain ∑_i < j^n w x_i - x_j - μ n ≥ - L^d K C + μ_+^γ/γ-1 + C+μ_- n. Using this, it follows for any grand-canonical probability = _n supported in C_L, that _T -μ() ≥- L^d K C+ μ_+^γ/γ-1 + C+ μ_- 𝒩 - T 𝒮 ≥- L^d K C+ μ_+^γ/γ-1 - L^d T e^- 1/TC+μ_-, where we used that the energy of a non-interacting system in a bounded region Ω⊆^d is minimized by - μ𝒩 - T 𝒮≥ - T log[]∑_n ≥ 01/n!∫_Ω^n e^μ n/T𝕀_n = - T e^μ/TΩ. The bound (<ref>) then follows directly from (<ref>) by minimizing over . In the case α = d, the bound in (<ref>) is replaced by ∑_i < j^n w x_i - x_j - μ n ≥ ∑_k ∈𝒜c/κ n_k^2 log n_k - []κ + μ n_k.For λ>0, the function x∈_+↦ x^2log x-λ x is first decreasing and then increasing. It attains its minimum at x̅=e^W-1/2=λ/(2W) where W=W_0(λ√(e)/2)≥0, with W_0 the principal branch of the Lambert W-function, solving W_0 y e^W_0 y = y for y ≥ 0. The minimum equalsx̅^2logx̅-λx̅=-λ^2/8W^2(2W+1).We have x̅>1 if and only if W>1/2 and in this case we obtainx̅^2logx̅-λx̅≥ -λ^2/2W.If x̅≤ 1 then since the function is increasing on the right of x̅, we conclude that x^2log x-λx≥ -λ for all x≥1. Thus for integers we can conclude thatmin_n∈{n^2log n-λ n}≥ -λ -λ^2/2W_0(λ√(e)/2),∀λ>0.Using that W_0(y)∼_0 y and W_0(y)∼_+log (y), we obtainmin_n∈{n^2log n-λ n}≥ -Cλ(1+λ/log(2+λ)),∀λ>0,for some universal constant C. Inserting this bound in our problem and arguing as in the α≠ d case, we end up with a bound of the form g_T μ,L≥ - K κ + μ_+(1+κ + μ_+/log(2+κ + μ_+))- T e^- 1/Tκ+μ_-. Changing κ+μ into C+μ for some C>κ, we obtain the simpler bound in the statement. Since g_T is concave in μ, it is also the Legendre transform of f_T, that is, for α≠ d, g_T μ,L = inf_ρ≥ 0f_T ρ,L - μρ≤ C ρ^γ + C 1+Tρ + T ρlogρ - μρ, for any ρ≥ 0, where the inequality follows from (<ref>). When α = d, the same bound holds with ρ^γ replaced by ρ^2 logρ_+. 
When μ≤C := C 1+T + T + C, we choose ρ_μ = e^-1/Tμ - C_- ≤ 1 to obtain g_T μ,L≤ C ρ_μ^γ - C+Tρ_μ≤ -T ρ_μ = - T e^-1/Tμ - C_- . The same bound holds when α = d because ρ_μ≤ 1 implies ρ_μ^2 logρ_μ_+ = 0. In the case when μ≥C and α≠ d, we can for instance take ρ_μ to satisfy ρ_μ^γ-1 = 1/γ C + Tμ - C + 1, or equivalently, μ = γ C + Tρ_μ^γ-1 + C 1+T - γ -1C. Using that ρ_μ≥ 1, this leads to the bound g_T μ,L≤- γ-1 C + Tρ_μ^γ + γ - 1C ρ_μ + T ρ_μlogρ_μ ≤- γ-1C ρ_μρ_μ^γ-1 - 1 - T ρ_μ =- γ-1C/γ C + Tρ_μμ-C - T ρ_μ≤ - γ-1C/γ C + Tμ - C^γ/γ-1 - T. The argument is similar when α=d, using the Lambert W-function. For instance, taking ρ_μ = exp[]W_0 √(e)/2 C+Tμ - C 1+T - T≥ 1 immediately gives g_T μ,L≤ C+Tρ_μ^2 logρ_μ_+ - μ - C 1+T - Tρ_μ - T =- 2 √(e) - e/4 C+Tμ - C 1+T - T^2/W_0 √(e)/2 C+Tμ - C 1+T - T - T, which can then easily be put in the same form as in the statement. § RUELLE LOCAL BOUNDS FOR SUPERSTABLE INTERACTIONS §.§ Ruelle bounds for homogeneous systems Ruelle bounds <cit.> are local estimates on Gibbs point processes, which are uniform with respect to the size of the system. They have played a very important role in statistical mechanics and Ruelle's original article <cit.> is considered as a historical breakthrough. Unfortunately, the dependence in terms of the temperature T and the chemical potential μ (resp. density ρ) was not made explicit in Ruelle's paper. It is the goal of this appendix to state such quantitative bounds. The proof is essentially that of Ruelle in <cit.> with some small variations and we provide it for completeness. Assume that w=w_1+w_2 where∙ w_1≥0 with w_1(x)≥ A>0 for |x|≤δ;∙ w_2 is stable,∑_1≤ j<k≤ nw_2(x_j-x_k)≥ -κ n,∀ x_1,...,x_n⊂^d, ∀ n≥2and lower regularw_2(x)≥ -ϕ(|x|),where x↦ϕ(|x|) is a non-negative radial-decreasing function in L^1∩ L^(^d).For x={x_1,...,x_n}⊂^d, letH(x):=∑_1≤ j<k≤ nw(x_j-x_k), h(x):=∑_1≤ j<k≤ nw_1(x_j-x_k)be the total and repulsive part of the interaction. Denote by h_Q(x)=h(x∩ Q) its restriction to any set Q⊂^d. There exists a function ψ:↦_+ tending to + and a constant ζ (both depending only on w) such that[]e^β/2 h_Q_β,μ,Ω≤exp[]L^de^β(μ+ζ)[] e^ζβψ(L)L^d+∑_ℓ∈ ℓ> Le^-β(ψ(ℓ)/ζ-ζ e^βμ)ℓ^dfor any β>0, any μ∈, any bounded domain Ω and any cube Q of side length L≥ζ. Here _β,μ,Ω denotes the expectation against the grand-canonical Gibbs point process with energy H, chemical potential μ and inverse temperature β, in a domain Ω⊂^d. The estimate (<ref>) is not at all expected to be optimal and it is only displayed for concreteness. In the proof we explain in detail how to choose ψ and the large constant ζ (the latter is the maximum of several explicitconstants). The precise final estimate we obtain is in (<ref>) below. Note that (<ref>) behaves rather badly when the temperature tends to 0, that is, β→. In fact, the proof also works at T=0 and the corresponding bound is provided in Corollary <ref> below.The function ψ depends on the decay of ϕ at infinity and, in principle, it can diverge very slowly. If ϕ(x)=B(1+|x|)^-s for some s>d, we can just take ψ(ℓ)= ℓ^ for any 0<<min(1,s-d) or ψ(ℓ)=logℓ. The series on the right of (<ref>) is convergent because ψ(ℓ)→+, hence the power in the exponential is strictly negative for ℓ large enough.[In our proof we do not get the full sum over ℓ> L on the right side of (<ref>), but only over a subsequence {ℓ_j} of integers tending exponentially fast to infinity. The details are provided below.] Note, however, that the value of the sum can be quite large when the activity z=e^βμ is large. 
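For instance, with ψ(ℓ)=ℓ^ε and 0<ε<min(1,s-d), the individual terms of the series on the right of (<ref>) only start to decay once ψ(ℓ)/ζ>ζ e^βμ, that is, once ℓ≳(ζ^2z)^1/ε. At large activity one is therefore forced to work with cubes of side length L of order at least z^1/ε, for which ψ(L)L^d=L^d+ε≳ z^1+d/ε; roughly speaking, this is the origin of the factor (z/β)^1+d/ε in Corollary <ref> below and of the power z^d/ε in the resulting moment bound.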
We do not expect this to be optimal at all. Better estimates exist at low activity z=e^βμ≪1, using expansion methods <cit.>.Since ψ diverges, we may choose L large enough (depending on β and μ) so that the term in the parenthesis in (<ref>) is ≤ e^2ζβψ(L)L^d. We then end up with the simpler estimate[]e^β/2 h_Q_β,μ,Ω≤exp[] L^d(e^β(μ+ζ)+2ζβψ(L)) .As we will see in (<ref>) below, there exists a constant c>0 (depending on δ) such that h_Q≥ (A/2)(cL^-dn_Q^2-n_Q) where n_Q is the number of points in Q and we obtain[]n_Q^2k_β,μ,Ω≤ C_kL^2dk(1+ψ(L))on the local moments of the Gibbs state, with C_k depending on w,β,μ. Due to the divergence of ψ(L), this bound is definitely not optimal for large values of L. But one should not think of taking L too large here. Once we obtain (<ref>) for one cube of side length L_0, it immediately follows for a bigger cube by covering it with the smaller cubes. This way we obtain the expected estimate involving the volume of the large cube. The estimate (<ref>) implies that the correlation functions are bounded in L^1_ unif(^d), independently of the size of Ω. In <cit.>, Ruelle proves L^(^d) bounds on the correlation functions, which are discussed later in Remark <ref>. Averaged bounds of the kind of (<ref>) and (<ref>) are usually enough in applications.In the case that ϕ decays polynomially at infinity, we can make the bound more explicit after choosing ψ(ℓ)=ℓ^. Assume that w satisfies the assumptions of Theorem <ref> with ϕ(|x|)=B(1+|x|)^-s for some s>d. Let 0<<min(1,s-d). Then we have[]e^β/2 h_Q_β,μ,Ω≤ e^ζβ, n_Q^2_β,μ,Ω≤ L^d min{ze^ζβ(2+1) ; ζ}for any cube Q of side length L≥ζ, where z=e^βμ and:=zL^de^βζ/βζ+L^d++ []z/β^1+d/+(logβ)_-+1/β.Applying the above bound for L_0=ζ and covering any larger cube by cubes of side length L_0, we conclude thatn_Q^2_β,μ,Ω≤ C_β,|Q|^2 z(1+z^d/)for a constant C_β, depending on w and β>0 and for every large enough cube Q. The proof of Corollary <ref> is provided in Section <ref>. §.§ Proof of Theorem <ref>We write the whole proof assuming for simplicity δ=√(d). The general case follows by scaling, that is, after applying the inequality to w'(x):=w(δ x/√(d)) which has δ'=√(d), takingΩ'=(√(d)/δ)Ω and e^βμ'=(δ/√(d)) e^βμ. This changes the constant ζ. The function ψ also needs to be rescaled appropriately, since it depends on ϕ.Step 1: Definition of a splitting of space. First we split our space into the small cubes C_i:=i+[0,1)^d with i∈^d and denote by n_i(x):=#C_i∩ x the number of points in the small cube C_i. Those have diameter δ=√(d) and therefore we have w_1(x-y)≥ A for x,y in such a cube, henceh_C_i≥An_i(n_i-1)/2.This estimate is going to play an important role in the following.Since the bound in (<ref>) is invariant under translations, we can assume that the cube Q of interest is centered at the origin of space. We will also assume for simplicity that the side length of Q is an even integer, so that Q can be written as the union of the smaller cubes C_i.The proof will require to estimate the interaction of some points x_j∈ Q with an arbitrary number of points y_j∈^d∖ Q. For this we will need to understand how the y_j are distributed in space and, in particular, what is the local number of points in any domain. Big clusters will typically generate a large interaction with the x_j. To this end, we introduce a growing sequence of cubes {Q_j}_j≥0 with Q_0=Q (the cube in the statement of the theorem). The side length will increase exponentially fast, but not too fast. 
More precisely, we choose an increasing sequence of integers ℓ_0<ℓ_1<⋯ which satisfies ℓ_j→ as well as ℓ_j+1/ℓ_j≈ 1+2α, where α>0. We will later need to assume that ℓ_0 is large enough and α is small enough, depending on w. In fact we will not need to have an exact limit and only require that1+α≤ℓ_j+1/ℓ_j≤ 1+3α,∀ j≥0.To be more concrete, we can for instance take ℓ_j=⌊ℓ_0(1+2α)^j⌋ and choose ℓ_0∈ large enough to have (<ref>). We then callQ_j:=(-ℓ_j,ℓ_j)^d, V_j:=(2ℓ_j)^dthe cube which is the union of (2ℓ_j)^d smaller cubes C_i and has volume V_j. We will focus our attention on the annulus-type regions A_j:=Q_j∖ Q_j-1 which has the volume|A_j|=(2ℓ_j)^d-(2ℓ_j-1)^d=(2ℓ_j)^d []1-ℓ_j-1^d/ℓ_j^d,henceα d V_j/2≤ V_j(1-1/(1+α)^d)≤ |A_j|≤ V_j(1-1/(1+3α)^d)≤ 3α d V_jfor α small enough (depending only on the dimension d).Our goal will be to look at how many particles there are in each A_j or Q_j, and whether this number is of the order of the volume or not. The fact that the interaction is integrable permits a slight deviation from a volume term, which will be described by a function ψ depending on the radial function ϕ (the lower bound to the stable part w_2 of the interaction), and is the function in the statement (after an appropriate rescaling). We take ℓ∈↦ψ(ℓ)∈[1,) any increasing function so thatψ(ℓ+1)/ψ(ℓ)≤ℓ+1/ℓ,for all ℓ≥1, and∑_ℓ≥1ψ(ℓ)ϕ(ℓ)ℓ^d-1<,that is, ψ must diverge to infinity sufficiently slowly. Recall that since x ↦ϕ (| x|) is in L^1 (^d) and is radial decreasing, we have ∑_ℓ≥1ϕ(ℓ)ℓ^d-1<. A typical example is when ϕ(|x|)=(1+|x|)^-s with s>d, in which case we just take ψ(ℓ)=ℓ^ for <min(1,s-d). Denote finally ψ_j:=ψ(ℓ_j).Step 2: Pointwise lower bound on H. Next we explain the main Ruelle estimate, which is a pointwise bound on the total energy H. We consider an arbitraryconfiguration of points in ^d, which we split into the points x_j in Q_0 and those outside of Q_0. The idea of Ruelle in <cit.> is to distinguish between `good' configurations where the points in ^d∖ Q_0 are well distributed in space, with a reasonable number of particles in any bounded region, from the `bad' configurations where too many of these points concentrate in a small volume, which results in a stronger interaction with the x_j's. The method is then to merge the bad points with the x_j, use the superstability for the union to prove that their energy is very positive, and use this large energy to control the interaction with the remaining good points. The latter interaction is essentially of the order of the volume where the bad points live. Thus we need the energy of the bad points to be much larger than the volume they occupy, in order to control this interaction.Let now Y⊂^d∖ Q_0 be a finite set. The condition which characterizes good and bad configurations of Y is whether ∑_i∈^d∩ Q_jn_i(Y)^2 is of the order of V_jψ_j or not. In other words, we use the function ψ_j to quantify large but still controllable deviations of the number of particles (squared locally) compared to the volume of the large cube Q_j. To be more precise, we call q=q_Y≥0 the smallest integer such that∑_C_i⊂ Q_jn_i(Y)^2≤V_jψ_j∀ j> q_Y.Since we have∑_C_i⊂ Q_jn_i(Y)^2≤[]∑_C_i⊂ Q_jn_i(Y) ^2≤ (#Y)^2the inequality (<ref>) always holds for j large enough, hence q=q_Y<. When q=0, the inequality (<ref>) holds for all j≥0 (it is valid for j=0 since the left side vanishes in this case). When q>0, we have by definition∑_C_i⊂ Q_qn_i(Y)^2>V_qψ_qif q>0.Now we are going to split our configuration space in the two regions Q_q and ^d∖ Q_q. 
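To get a feeling for the scale of Q_q, consider as an illustration a configuration Y which places N points in a single small cube C_i contained in the first shell Q_1∖ Q_0 and no points elsewhere. Then ∑_{C_i⊂ Q_j}n_i(Y)^2=N^2 for every j≥1, so that q_Y is the largest j with N^2>V_jψ_j (and q_Y=0 when N^2≤ V_1ψ_1). With the typical choice ψ(ℓ)=ℓ^ε this means (2ℓ_{q_Y})^dℓ_{q_Y}^ε<N^2, that is, ℓ_{q_Y} is at most of order N^{2/(d+ε)}: the region Q_{q_Y} containing such a 'bad' cluster grows only polynomially in the size of the cluster, while its repulsive energy is at least of order AN^2/2 by the bound h_{C_i}≥ An_i(n_i-1)/2 recorded above. It is this mismatch between volume and energy which will allow us to control the interaction with the remaining, well-spread points.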
We write Y=y∪ z where y are the particles in Q_q (if q>0) and z those in ^d∖ Q_q. We write our full energy in the formH(x∪ y∪ z)=H(x∪ y)+H(z)+W(x∪ y,z)where the third term is the interaction:W(x,y):=∑_i=1^#x∑_k=1^#yw(x_i-y_k).Our goal is to find a lower bound on H(x∪ y∪ z) which only involves x, H(z) and q. To state the bound, we recall that h(x) is the repulsive part of the energy. One can choose α small enough and ℓ_0 large enough (depending only on d and w) so that for all x⊂ Q_0 and all Y⊂^d∖ Q_0, we have the pointwise estimateH(x∪ y∪ z)≥h(x)/2 +A/16ψ_q V_q+A/16V_q(#y)^2+ H(z)-(κ+A/2)#xwhenever q=q_Y>0, with the decomposition Y=y∪ z, where y=Y∩ Q_q and z=Y∩ (^d∖ Q_q). If q=q_Y=0, we haveH(x∪ Y)≥h(x)/2 -4^d-2A/αψ_0V_0+H(Y)-(κ+A/2)#x.Recall that A is the minimum of w_1(x) for |x|≤δ=√(d) and that κ is the stability constant for w_2. We calln=#x, k=#y and m=#z the number of particles in each group. We assume q=q_Y>0 in the whole proof and treat the simpler case q=0 at the end. First we use the stability of w_2 and the positivity of w_1 to getH(x∪ y∪ z) =H(x∪ y)+H(z)+W(x∪ y,z)≥ h(x)+h(y)-κ(n+k)+H(z)+W(x∪ y,z)≥h(x)/2 +A/4∑_i n_i(x)^2+A/2∑_i n_i(y)^2 +H(z)+W(x∪ y,z)-C(n+k),where C=κ+A/2. Our first step will be to control the easy term -Ck. We use the Cauchy-Schwarz (or Jensen) inequality which says that∑_i n_i(y)^2≥ V_q^-1[]∑_i n_i(y) ^2=V_q^-1k^2.Since A/16V_qk^2-Ck≥ -4C^2/AV_q we obtainA/2∑_i n_i(y)^2-Ck≥3A/8∑_i n_i(y)^2+(Aψ_q/16-4C^2/A)V_q,where we have used that ∑_i n_i(y)^2≥ V_qψ_q due to the definition of q=q_Y. Using that ψ tends to infinity, we can assume that ℓ_0 is large enough so thatA/16ψ(ℓ_0)≥4C^2/A=(2κ+A)^2/A.Then we arrive atH(x∪ y∪ z)≥h(x)/2 +A/4∑_i n_i(x)^2+3A/8∑_i n_i(y)^2+H(z) -Cn+W(x∪ y,z). Our next and main task is to bound the interaction W(x∪ y,z). To simplify our writing, we callδ_ij=min_x∈ C_iy∈ C_j|x-y|, ϕ_ij:=max_x∈ C_iy∈ C_jϕ(|x-y|)=ϕ(δ_ij)the distance between the cubes C_i and C_j and the corresponding interaction. Note that when the two cubes have a common boundary, we just get δ_ij=0 and ϕ_ij=ϕ(0). Splitting our space into the small cubes C_i and using the lower regularity of w_2, we obtainW(x∪ y,z) ≥ -∑_i,jn_i(x)n_j(z)ϕ_ij-∑_i,jn_i(y)n_j(z)ϕ_ij≥-τ∑_i,j n_i(x)^2ϕ_ij-τ∑_i,jn_i(y)^2ϕ_ij-1/2τ∑_i,jC_i⊂ Q_qn_j(z)^2ϕ_ij.In the first two sums we have for simplicity dropped the condition that C_j⊂^d∖ Q_q. The integrability of ϕ implies thatS:=∑_i≠0ϕ_0i<.In view of (<ref>) we take τ :=A/(8S) so that the first two error terms are controlled. We thus obtainH(x∪ y∪ z)≥h(x)/2 +H(z) -Cn +A/4∑_in_i(y)^2-4S/A∑_i,jC_i⊂ Q_qn_j(z)^2ϕ_ij,after dropping the positive term (A/8)∑_i n_i(x)^2. It remains to estimate the last sum by ψ_q V_q. We distinguish between the C_j belonging to the first shell A_q+1=Q_q+1∖ Q_q and the ones further away.First shell A_q+1. After adding all the missing terms for i, the sum over all the C_j⊂ A_q+1 can be estimated by∑_C_i⊂ Q_qC_j⊂ A_q+1n_j(z)^2ϕ_ij≤ S∑_C_j⊂ A_q+1n_j(z)^2.By definition of q in (<ref>) and (<ref>), we have∑_C_j⊂ A_q+1n_j(z)^2+∑_C_j⊂ Q_qn_j(y)^2≤ V_q+1ψ_q+1,∑_C_j⊂ Q_qn_j(y)^2≥ V_qψ_qsince q>0 and therefore we obtain∑_C_j⊂ A_q+1n_j(z)^2≤ V_q+1ψ_q+1-V_qψ_q.Here comes the importance of the properties of ℓ_j and ψ. 
By (<ref>) we haveV_q+1=(2ℓ_q+1)^d≤ (1+3α)^d(2ℓ_q)^d=(1+3α)^dV_q.Similarly, the estimate ψ(ℓ+1)≤ (ℓ+1)ψ(ℓ)/ℓ in (<ref>) implies by inductionψ_j+1=ψ(ℓ_j+1)≤ℓ_j+1/ℓ_jψ(ℓ_j)≤ (1+3α)ψ(ℓ_j)=(1+3α)ψ_j.Thus we obtain∑_C_j⊂ A_q+1n_j(z)^2≤[] (1+3α)^d+1-1V_qψ_q.This term can be absorbed into the positive term in (<ref>) provided we choose α so that4S^2/A[](1+3α)^d+1-1 ≤A/16.Then we obtainH(x∪ y∪ z)≥h(x)/2 +H(z) -Cn+3A/16∑_in_i(y)^2-4S/A∑_C_i⊂ Q_qC_j⊂^d∖ Q_q+1n_j(z)^2ϕ_ij. Other shells A_j for j≥ q+2. Any shell A_j with j≥ q+2 is at least at a distance ℓ_j-1-ℓ_q from Q_q. We thus group our small cubes according to these shells and estimate the interaction by ϕ(ℓ_j-1-ℓ_q). This gives∑_C_i⊂ Q_qC_j⊂^d∖ Q_q+1n_j(z)^2ϕ_ij≤ V_q∑_j≥ q+2∑_C_i⊂ A_jn_i(z)^2ϕ(ℓ_j-1-ℓ_q)since there are V_q cubes C_i in Q_q. We would like to use that ∑_C_i⊂ Q_jn_i(z)^2≤ V_jψ_j by definition of q (we remove here the part involving y) and hence we rewrite the sum as[4] ∑_j≥ q+2ϕ(ℓ_j-1-ℓ_q)∑_C_i⊂A_jn_i(z)^2 = ∑_j≥ q+2ϕ(ℓ_j-1-ℓ_q) []∑_C_i⊂ Q_jn_i(z)^2-∑_C_i⊂ Q_j-1n_i(z)^2 = ∑_j≥ q+2[]∑_C_i⊂ Q_jn_i(z)^2 (ϕ(ℓ_j-1-ℓ_q)-ϕ(ℓ_j-ℓ_q))-ϕ(ℓ_q+1-ℓ_q)∑_C_i⊂ Q_q+1n_i(z)^2 ≤ ∑_j≥ q+2V_jψ_j(ϕ(ℓ_j-1-ℓ_q)-ϕ(ℓ_j-ℓ_q))=2^d ∑_j≥ q+2ℓ_j^dψ(ℓ_j)(ϕ(ℓ_j-1-ℓ_q)-ϕ(ℓ_j-ℓ_q)).This is now independent of z, as required. It will be useful to replace ℓ_j by ℓ_j-1-ℓ_q in the factor ℓ_j^dψ(ℓ_j). For the volume we can simply use that(ℓ_j-1-ℓ_q)^d/ℓ_j^d=(ℓ_j-1/ℓ_j)^d(1-ℓ_q/ℓ_j-1)^d≥1/(1+3α)^d(1-1/1+3α)^d≥α^dfor (1+3α)^2≤ 3. Similarly, using (<ref>) we haveψ(ℓ_j)≤ℓ_j/ℓ_j-1-ℓ_qψ(ℓ_j-1-ℓ_q)≤ψ(ℓ_j-1-ℓ_q)/α.This gives∑_j≥ q+2ϕ(ℓ_j-1-ℓ_q)∑_C_i⊂A_jn_i(z)^2 ≤2^d/α^d+1∑_j≥ q+2(ℓ_j-1-ℓ_q)^dψ(ℓ_j-1-ℓ_q)(ϕ(ℓ_j-1-ℓ_q)-ϕ(ℓ_j-ℓ_q)).We now show that the sum on the right side is bounded above byI(ℓ_0):=∑_ℓ≥ℓ_0ℓ^dψ(ℓ)(ϕ(ℓ)-ϕ(ℓ+1)).We use the fact that for f a non-decreasing function, g a non-increasing function and k_j an increasing sequence of integers, we have[2] f(k_j-1)(g(k_j-1)-g(k_j))=f(k_j-1)(g(k_j-1)-g(k_j-1+1))+⋯+f(k_j-1)(g(k_j-1)-g(k_j)) ≤f(k_j-1)(g(k_j-1)-g(k_j-1+1))+⋯+f(k_j-1)(g(k_j-1)-g(k_j)).In the estimate we used that f is non-decreasing and that g(k_j-1+k)-g(k_j-1+k+1)≥0 since g is non-increasing. This way we obtain all the f(ℓ)(g(ℓ)-g(ℓ+1)) for ℓ between k_j-1 and k_j-1. After summing over j we deduce that∑_j f(k_j-1)(g(k_j-1)-g(k_j))≤∑_ℓ f(ℓ)(g(ℓ)-g(ℓ+1)).Applying this to k_j=ℓ_j-ℓ_q and using that ℓ_j-1-ℓ_q≥ℓ_q+1-ℓ_q≥ (1+α)ℓ_0≥ℓ_0, this proves as we wanted that∑_j≥ q+2ϕ(ℓ_j-1-ℓ_q)∑_C_i⊂A_jn_i(z)^2≤2^dI(ℓ_0)/α^d+1.Note that the series in (<ref>) is convergent. In fact, after changing indices this is the same as∑_ℓ≥ℓ_0[] (ℓ+1)^dψ(ℓ+1)-ℓ^dψ(ℓ) ϕ(ℓ+1)<,which follows from the fact that ψ(ℓ+1)≤(1+ℓ^-1)ψ(ℓ) and (1+ℓ)^d≤ℓ^d(1+c/ℓ), so that (ℓ+1)^dψ(ℓ+1)-ℓ^dψ(ℓ)≤ cℓ^d-1ψ(ℓ). Thus I(ℓ_0) is the remainder of a convergent series due to our assumption (<ref>) on ψ. As a conclusion, α>0 being fixed so that (<ref>) holds, we need to take ℓ_0 large enough so that2^d+2SI(ℓ_0)/α^d+1A≤A/16ψ(ℓ_0).As a conclusion, we obtain the boundH(x∪ y∪ z)≥h(x)/2 +H(z) -Cn+A/8∑_in_i(y)^2,which reduces to the stated inequality (<ref>) after using again (<ref>) and the definition of q.It remains to deal with the case q=0. Then the energy is reduced to H(x∪ z) with z=Y, that is, y is empty. Of course, if n=0, then H(x∪ z)=H(z)and we can therefore assume n≥1. 
The exact same bounds as in (<ref>) (with τ=A/(4S) in (<ref>)) provideH(x∪ z)≥h(x)/2-Cn+H(z) -2S/A∑_i,jC_i⊂ Q_0n_j(z)^2ϕ_ij.For the first shell we just have∑_C_i⊂ Q_0C_j⊂ A_1n_j(z)^2ϕ_ij≤ Sψ_1V_1≤ S(1+3α)^d+1V_0ψ_0,whereas the next shells are estimated as above, leading toH(x∪ z)≥h(x)/2-Cn+H(z) -A/32ψ_0V_0(1+(1+3α)^d+1/(1+3α)^d+1-1).Using the simple estimate1+(1+3α)^d+1/(1+3α)^d+1-1≤1+4^d+1/6α≤4^d/α since α<1, we obtain (<ref>). This concludes the proof of Proposition <ref>. As a side remark, we notice that our conditions (<ref>), (<ref>) and (<ref>) are monotone in ℓ_0. When they are valid for one ℓ_0, then they also hold for larger values. A simple estimate at zero temperature follows immediately from Proposition <ref>. Let Ω⊂^d be any domain and let μ∈. Let X⊂Ω be any minimizer for the free energyX⊂Ω↦ H_Ω(X)-μn_Ω(X).Let L_μ≥0 be the smallest integer such that ψ(L_μ/2)≥ 64μ_+^2/A^2. Thenn_Q(X) ≤4(2κ+A+μ)_+/A(1+2^d-2α^-1/2√(ψ(L/2))+(L_μ/L)^d/2)L^d,h_Q(X) ≤12(2κ+A+μ)^2_+/A(1+2^2d-1α^-1ψ(L/2)+(L_μ/L)^d)L^d,for any cube Q of side length L≥ζ. For instance, for ψ(ℓ)=ℓ^, we have L_μ≈ cμ^2/. The local bounds from Corollary <ref> can be used to pass to the limit and prove the existence of infinite ground states, as defined in <cit.>. With a hard core this was done previously in <cit.>. Should the potential w_1 diverge at the origin, the bound on h_Q implies a lower bound on the smallest distance between the points as in <cit.>. For the Lennard-Jones interaction such an estimate appeared before in <cit.>. From the stability condition, we haveH_Ω(X)-μ n_Ω(X)≥ h(X)-(κ+μ)n_Ω(X)≥A/2∑_i n_i(X)^2-(C+μ)n_Ω(X)with the notation of the proof of Proposition <ref>. Recall that C=κ+A/2. In particular, we deduce that there is just no particle at all in Ω (n_Ω(X)=0) when μ<-C. We can therefore always assume that μ≥-C.Next we write X=x∪ Y with x=X∩ Q. If q:=q_Y>0, we write again Y=y∪ z. Using H_Ω(z)-μ n_Ω(z)≥ H_Ω(X)-μ n_Ω(X), we obtain from (<ref>)0 ≥h(x)/2 +A/16ψ_q V_q+A/16V_qk^2-(C+μ) n-μ k≥h(x)/2 +(A/16ψ_q-4μ_+^2/A) V_q-(C+μ) n≥h(x)/2 -4μ_+^2L_μ^d/A-(C+μ) n,from the definition of L_μ. We obtainh_Q_0(X)=h(x)≤ 2(C+μ) n+8L_μ^dμ_+^2/A.Using nowh(x)≥A/2∑_C_i⊂ Q_0n_i^2-An/2≥A/2V_0n^2-An/2and A≤ 2C we find that the number of points in any cube of volume V_0 satisfiesn≤4(2C+μ)_+/AV_0+4√(V_0)L_μ^d/2μ_+/A≤4(2C+μ)_+V_0/A[] 1+(L_μ/L)^d/2.From (<ref>), we then obtain for the energyh(x) ≤ 2(C+μ)n+8L_μ^dμ_+^2/A≤8(2C+μ)^2_+V_0/A[] 1+(L_μ/L)^d/2+8L_μ^d(2C+μ)_+^2/A≤8(2C+μ)^2_+V_0/A[] 1+(L_μ/L)^d/2+(L_μ/L)^d≤12(2C+μ)^2_+V_0/A[] 1+(L_μ/L)^d.The argument is similar when q_Y=0. From (<ref>), we have in this caseA/2V_0n^2-An/2≤ h(x) ≤2^2d-3A/αψ_0V_0+2(C+μ) nand therefore obtainn≤4(2C+μ)_+/AV_0+2^d-1√(ψ_0)/√(α)V_0≤4(2C+μ)_+/AV_0(1+2^d-2√(ψ_0)/√(α))since 1≤ (2C+μ)/C≤ (2/A)(2C+μ)_+ for μ≥-C. The estimate on h(x) is similar. Step 3: Exponential estimate on the Gibbs state. In Proposition <ref> we have derived a simple pointwise bound which depends on the location of the particles outside of Q_0, and in particular on the value of q=q_Y. We now turn to the derivation of the local bound on the grand-canonical Gibbs state.We have[2] Z_β,μ(Ω)[]e^β/2 h_Q_0_β,μ,Ω= ∑_n,Ke^βμ(n+K)/n! K!∫_(Q_0∩Ω)^n∫_(Ω∖ Q_0)^Ke^-β H(x∪ Y)+β/2 h(x)xY= ∑_q≥0∑_n,Ke^βμ(n+K)/n! K!∫_(Q_0∩Ω)^n∫_(Ω∖ Q_0)^K(q_Y=q)e^-β H(x∪ Y)+β/2 h(x)xY= ∑_q≥0∑_n,k,me^βμ(n+k+m)/n! k! 
m!∫_(Q_0∩Ω)^n∬_(Ω∩ Q_q∖ Q_0)^k× (Ω∖ Q_q)^m(q_y∪ z=q)× e^-β H(x∪ y∪ z)+β/2 h(x) xyz.Using the pointwise bound (<ref>) and then simply removing the condition that q_y∪ z=q, we can bound[]e^β/2 h_Q_0_β,μ,Ω≤ e^β4^d-2A/αψ_0V_0∑_ne^β(μ+C)n(V_0)^n/n!+∑_q>0∑_n,ke^β(μ+C)n+βμ k(V_0)^n(V_q)^ke^-β A/16V_qψ_q/n! k!=exp[] e^β(μ+C)V_0 [] e^β4^d-2A/αψ_0V_0+∑_q>0e^-βA/16ψ_qV_q+e^βμV_q≤exp[] e^β(μ+C)V_0 [] e^β4^d-2A/αψ_0V_0+∑_ℓ>ℓ_0e^-2^dℓ^d(βA/16ψ(ℓ)-e^βμ).After scaling and replacing all the constants by their maximum ζ, this can be written as in the statement. This concludes the proof of Theorem <ref>. Arguing as before we can estimate the correlation functions uniformly over Q_0. For any x⊂ (Q_0)^j we split the other particles according to the three domains Q_0, Q_q and Ω∖ Q_q. We obtainρ^(j)(x)= Z_β,μ(Ω)^-1∑_q≥0∑_n,k,me^βμ(j+n+k+m)/n! k! m!∫_(Q_0∩Ω)^n∫_(Ω∩ Q_q∖ Q_0)^k∫_(Ω∖ Q_q)^m××(q_y∪ z=q)e^-β H(x∪ x'∪ y∪ z) x'yz.We then merge x and x' and use the lower bound (<ref>) from Proposition <ref> in the formH(x∪ x'∪ y∪ z)≥h(x)/2-C(n+j)+A/16ψ_q V_q+H(z).We obtain with z=e^βμρ^(j)(x)≤ (e^β Cz)^j e^-βh(x)/2e^V_0 ze^β C[] e^β4^d-2A/αψ_0V_0+∑_ℓ>ℓ_0e^-2^dℓ^d(βA/16ψ(ℓ)-z) ∀ x⊂ (Q_0)^j.This pointwise bound requires that all the x_i are in Q_0. To get a true uniform bound on ρ^(j) we need a modification of Proposition <ref> dealing with the situation that some of the particles are far away from the others. We do not discuss this here and refer instead to <cit.>.§.§ Proof of Corollary <ref> When ϕ=B(1+|x|)^-s we can take ψ(r)=r^ for 0<<min(1,s-d) and thus obtain[]e^β/2 h_Q_0_β,μ,Ω≤exp[]e^β(μ+C)2^dℓ_0^d [] e^β2^3d-4A/αℓ_0^d++∑_ℓ>ℓ_0e^-2^dℓ^d(βA/16ℓ^-e^βμ).The first estimate of Corollary <ref> thus follows from estimating the last sum. A non-optimal but simple bound is provided in the following For a,b>0 and 2≤ℓ_0∈, we have∑_ℓ>ℓ_0e^-bℓ^d++aℓ^d≤exp[]Ba^d+/b^-d/+log(b/B)_-for a constant B depending only on d and .If ℓ_0^≥ 2a/b, then we can simply bound∑_ℓ>ℓ_0e^-bℓ^d++aℓ^d≤∑_ℓ>ℓ_0e^-b/2ℓ^d+ ≤∫_ℓ_0^ e^-b/2t^d+t= []b/2^-1/d+∫_(b/2)^1/d+ℓ_0^ e^-t^d+t≤ Ce^-bℓ_0^d+/4/b^1/d+.If 1<ℓ_0^< 2a/b=:ℓ_1^ we split the sum in two parts. We use that the number of integers ℓ satisfying ℓ_0<ℓ≤ℓ_1 is less than or equal to ℓ_1+1-ℓ_0≤ℓ_1 since ℓ_0≥2. Hence, using the previous estimate and the fact that xe^x≤ e^2x with x=aℓ_1^d, we find∑_ℓ>ℓ_0e^-bℓ^d++aℓ^d ≤ℓ_1e^aℓ_1^d+∑_ℓ>ℓ_1e^-bℓ^d++aℓ^d≤ e^2aℓ_1^d/a^1/d+Ce^-b⌊ℓ_1⌋^d+/4/b^1/d+≤ e^2aℓ_1^d/2(b/B)^1/d+e^-bℓ_0^d+/4/2(b/B)^1/d+≤e^1/dlog(b/B)_-/2(e^2aℓ_1^d+e^-bℓ_0^d+/4),with B=max(2^1+d,(2C)^d+). Since ℓ_1>ℓ_0, we have replaced ⌊ℓ_1⌋ by ℓ_0 in the the last exponential. To simplify the bound further, we use that e^-bℓ_0^d+/4≤1 and (e^x+1)/2≤ e^x for x≥0, leading to (<ref>) after further increasing B and using d≥1. Using (<ref>) and e^x+e^y≤ e^x+y+1 for x,y≥0, we find[]e^β/2 h_Q_0_β,μ,Ω≤ e^ζβ,=ze^β CV_0/βζ+ℓ_0^d++ []z/β^d+/ + (logβ)_-+1/βfor a large enough constant ζ. By Jensen's inequality, we obtainh_Q_0_β,μ,Ω≤ 2ζ.Using (<ref>) and the fact that ≥ℓ_0^d+≥V_0/2^d, we obtain after further increasing ζn^2_Q_0_β,μ,Ω≤ζ V_0.On the other hand, our bound (<ref>) on the correlation functions providesn_Q_0_β,μ,Ω =∫_Q_0ρ^(1)≤ (e^β Cz V_0)e^ζβ, n_Q_0(n_Q_0-1)_β,μ,Ω =∬_(Q_0)^2ρ^(2)≤ (e^β Cz V_0)^2e^ζβ.Thus we arrive at (<ref>) after using 1+e^β Cz V_0≤ e^e^β Cz V_0≤ e^ζβ. 
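The series controlled by the lemma above can likewise be evaluated numerically. The following snippet (with sample values of a, b, d and ε chosen only for illustration, not the constants of the statement) shows that the sum over ℓ>ℓ_0 of e^{-bℓ^{d+ε}+aℓ^d} is finite and shrinks quickly as ℓ_0 grows, as expected from the dominance of the -bℓ^{d+ε} term.

```python
# Numerical illustration (sample parameters only, not the constants of the statement):
# S(l0) = sum_{l > l0} exp(-b*l**(d+eps) + a*l**d) is finite and decreases in l0.
import math

def S(l0, a, b, d, eps, lmax=500):
    return sum(math.exp(-b * l ** (d + eps) + a * l ** d) for l in range(l0 + 1, lmax))

a, b, d, eps = 1.0, 0.5, 1, 0.5
print(S(2, a, b, d, eps), S(10, a, b, d, eps), S(50, a, b, d, eps))
```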
This concludes the proof of Corollary <ref>.§.§ Ruelle bounds in an external potentialWe can extend the previous result to the situation where we have a bounded-below external potential v(x), that is, the energy is of the formH_v(x)=∑_1≤ j<k≤ nw(x_j-x_k)+∑_i=1^n v(x_i)=:H(x)+V(x)with ∫_^de^-β v(x) x<. The previous situation corresponds to v≡ -μ on Ω and v≡+ outside of Ω. Assume that w=w_1+w_2 satisfies the same assumptions as in Theorem <ref>. Let v:^d→∪{+} be any measurable function such thatv≥ -μ_0 a.e.,∫_^de^-β v(x) dx<.The corresponding grand-canonical Gibbs state satisfies[]e^β/2 h_Q_β,v≤exp[] e^ζβ∫_Qe^-β v(x)x [] e^ζβψ(L)L^d+∑_ℓ∈ ℓ> Le^-β(ψ(ℓ)/ζ-ζ e^βμ_0)ℓ^dfor any β>0 and any cube Q of side length L≥ζ, where ψ and ζ are the same as in Theorem <ref>. If ϕ(|x|)=B(1+|x|)^-s for some s>d and 0<<min(1,s-d), then we have[]e^β/2 h_Q_β,v≤exp[] e^ζβ∫_Qe^-β v+ζβ_0 , n_Q^2_β,v≤ L^d min{e^ζβ(_0+1)[] 1+L^de^β(ζ+μ_0)∫_Qe^-β v; ζβ^-1e^ζβ∫_Qe^-β v+ζ_0},where _0:=L^d++ (e^βμ_0/β)^1+d/+1+(logβ)_-/β.The potential v only occurs in Step 3 of the proof of Theorem <ref>. We haveZ_β,v(Ω)[]e^β/2 h_Q_0_β,v =∑_q≥0∑_n,k,m1/n! k! m!∫_(Q_0)^n∬_( Q_q∖ Q_0)^k× (^d∖ Q_q)^m× ×(q_y∪ z=q)e^-β(H(x∪ y∪ z)+V(x)+V(y)+V(z))+β/2 h(x) xyz.We use V(y)≥ -μ_0 k and the previous bound on H(x∪ y∪ z), leading to[]e^β/2 h_Q_0_β,μ,Ω≤ e^β4^d-2A/αψ_0V_0∑_ne^β Cn[]∫_Q_0e^-β v^n/n!+∑_q>0∑_n,ke^β(Cn+μ_0 k)[]∫_Q_0e^-β v^n(V_q)^ke^-β A/16V_qψ_q/n! k!and (<ref>) follows as for (<ref>). When ϕ decreases polynomially, the bound (<ref>) is proved the same as Corollary <ref>. Finally, we have the pointwise bound on the correlation functionρ^(j)(x)≤ e^-β[]h(x)/2-Cj+∑_i=1^jv(x_i)[] e^β4^d-2A/αψ_0V_0+∑_ℓ>ℓ_0e^-2^dℓ^d(βA/16ψ(ℓ)-z)for x⊂ (Q_0)^j and (<ref>) follows as in the proof of Corollary <ref>. We have a similar result at zero temperature. Assume that w=w_1+w_2 satisfies the same assumptions as in Theorem <ref>. Let v:^d→∪{+} be any lower semi-continuous function such that v≥ -μ_0. Let X⊂^d be any minimizer for the grand-canonical free energyX↦ H(X)+V(X), V(X)=∑_x∈ Xv(x).Then the number of points in any cube Q of side length L≥ζ satisfiesn_Q(X)≤4L^d/A[] 2κ+A-min_Q v _++ []16V_0(μ_0)_+^2L_μ_0^d/A^2+2^2d-2/αψ(L/2)L^2d-4L^d/A[] 2κ+A-min_Q v _- ^1/2_+,where L_μ_0 is, like in Corollary <ref>, the smallest integer such that ψ(L_μ_0/2)≥ 64(μ_0)_+^2/A^2. The bound (<ref>) states that the number of particles in a cube can be bounded in terms of the “local” chemical potential μ=-min_Q v. The bound is linear for μ≥ -(2κ+A) and vanishes when μ is really negative. In between it interpolates continuously. As usual we should not think of L large here. Taking L=ζ and covering any large enough cube by finitely many smaller cubes, we get a bound in the formn_Q≤ C|Q| [] C-min_Q(v) _+where C depends on μ_0. We introduce μ:=-min_Q v and write X=x∪ Y with x=X∩ Q. If q:=q_Y>0, we write again Y=y∪ z. For the particles in x⊂ Q we use v≥ -μ but for the particles in y we use v≥ -μ_0. Arguing exactly as in the proof of Corollary <ref>, we obtainA/V_0n^2-4(2C+μ)n≤16(μ_0)_+^2L_μ_0^d/A.If q_Y=0 we get the same but with the right side replaced by 2^2d-2A/αψ_0V_0. To conclude, we use that n^2-α n≤β for an integer n and β>0 implies n≤α_++(β-α_-)^1/2_+. CDMS19[AF03]AdamsFournier R. A. Adams and J. J. F. Fournier, Sobolev spaces, vol. 140 of Pure and Applied Mathematics (Amsterdam), Elsevier/Academic Press, Amsterdam, second ed., 2003.[Bla04]Blanc-04 X. Blanc, Lower bound for the interatomic distance in Lennard-Jones clusters, Comput. Optim. Appl., 29 (2004), pp. 
5–12.[BRS10]BelRadShl-10 J. Bellissard, C. Radin, and S. Shlosman, The characterization of ground states, Journal of Physics A: Mathematical and Theoretical, 43 (2010), p. 305001.[CDMS19]ColMarStr-19 M. Colombo, S. Di Marino, and F. Stra, Continuity of multimarginal optimal transport with repulsive cost, SIAM J. Math. Anal., 51 (2019), pp. 2903–2926.[CP19a]CotPet-19b C. Cotar and M. Petrache, Equality of the jellium and uniform electron gas next-order asymptotic terms for Coulomb and Riesz potentials, ArXiv e-prints 1707.07664 (version 5),(2019).[CP19b]CotPet-19 C. Cotar and M. Petrache, Next-order asymptotic expansion for N-marginal optimal transport with Coulomb and Riesz costs, Adv. Math., 344 (2019), pp. 137–233.[DLN22]MarLewNen-22_ppt S. Di Marino, M. Lewin, and L. Nenna, Grand-canonical optimal transport, ArXiv e-prints,(2022).[EP79]EbnPun-76 C. Ebner and C. Punyanitya, Density-functional theory of simple classical fluids. ii. localized excess electron states, Phys. Rev. A, 19 (1979), pp. 856–865.[ESS76]EbnSaaStr-76 C. Ebner, W. F. Saam, and D. Stroud, Density-functional theory of simple classical fluids. i. surfaces, Phys. Rev. A, 14 (1976), pp. 2264–2273.[Eva79]Evans-79 R. Evans, The nature of the liquid-vapour interface and other topics in the statistical mechanics of non-uniform, classical fluids, Advances in Physics, 28 (1979), pp. 143–200.[Eva92]Evans-92 R. Evans, Density Functionals in the Theory of Nonuniform Fluids, Marcel Dekker, Inc., 1992, pp. 85–176.[Fef85]Fefferman-85 C. Fefferman, The thermodynamic limit for a crystal, Commun. Math. Phys., 98 (1985), pp. 289–311.[Fis64]Fisher-64 M. E. Fisher, The free energy of a macroscopic system, Arch. Ration. Mech. Anal., 17 (1964), pp. 377–410.[Gre89]Gregg-89 J. N. Gregg, The existence of the thermodynamic limit in Coulomb-like systems, Comm. Math. Phys., 123 (1989), pp. 255–276.[GS72]GarSim-72 C. Garrod and C. Simmons, Rigorous statistical mechanics for nonuniform systems, J. Mathematical Phys., 13 (1972), pp. 1168–1176.[GS95]GraSch-95 G. M. Graf and D. Schenker, On the molecular limit of Coulomb gases, Commun. Math. Phys., 174 (1995), pp. 215–227.[HK64]HohKoh-64 P. Hohenberg and W. Kohn, Inhomogeneous electron gas, Phys. Rev., 136 (1964), pp. B864–B871.[HLS09a]HaiLewSol_1-09 C. Hainzl, M. Lewin, and J. P. Solovej, The thermodynamic limit of quantum Coulomb systems. Part I. General theory, Advances in Math., 221 (2009), pp. 454–487.[HLS09b]HaiLewSol_2-09 height 2pt depth -1.6pt width 23pt, The thermodynamic limit of quantum Coulomb systems. Part II. Applications, Advances in Math., 221 (2009), pp. 488–546.[HO81]HayOxt-81 A. Haymet and D. W. Oxtoby, A molecular theory for the solid–liquid interface, The Journal of Chemical Physics, 74 (1981), pp. 2559–2565.[HS05]HarSaf-05 D. P. Hardin and E. B. Saff, Minimal Riesz energy point configurations for rectifiable d-dimensional manifolds, Adv. Math., 193 (2005), pp. 174–204.[Hug85]Hugues-85 W. Hughes, Thermodynamics for Coulomb systems: a problem at vanishing particle densities, J. Statist. Phys., 41 (1985), pp. 975–1013.[JKT22]JanKunTsa-22 S. Jansen, T. Kuna, and D. Tsagkarogiannis, Virial inversion and density functionals, J. Funct. Anal.,(2022). online first.[JLM23]JexLewMad-23 M. Jex, M. Lewin, and P. Madsen, Classical Density Functional Theory: Representability and Universal Bounds, J. Stat. Phys., 190 (2023), p. 23.[KS65]KohSha-65 W. Kohn and L. J. Sham, Self-consistent equations including exchange and correlation effects, Phys. Rev. (2), 140 (1965), pp. 
A1133–A1138.[Lau21]Lauritsen-21 A. r. B. k. Lauritsen, Floating Wigner crystal and periodic jellium configurations, J. Math. Phys., 62 (2021), pp. Paper No. 083305, 17.[Lew11]Lewin-11 M. Lewin, Geometric methods for nonlinear many-body quantum systems, J. Funct. Anal., 260 (2011), pp. 3535–3595.[Lew22]Lewin-22 height 2pt depth -1.6pt width 23pt, Coulomb and Riesz gases: The known and the unknown, J. Math. Phys., 63 (2022), p. 061101. Special collection in honor of Freeman Dyson.[LLS18]LewLieSei-18 M. Lewin, E. H. Lieb, and R. Seiringer, Statistical mechanics of the Uniform Electron Gas, J. Éc. polytech. Math., 5 (2018), pp. 79–116.[LLS19]LewLieSei-19b height 2pt depth -1.6pt width 23pt, Floating Wigner crystal with no boundary charge fluctuations, Phys. Rev. B, 100 (2019), p. 035127.[LLS20]LewLieSei-20 height 2pt depth -1.6pt width 23pt, The Local Density Approximation in Density Functional Theory, Pure Appl. Anal., 2 (2020), pp. 35–73.[LLS23]LewLieSei-23_DFT height 2pt depth -1.6pt width 23pt, Density Functional Theory — Modeling, Mathematical Analysis, Computational Methods, and Applications, Springer, 2023, ch. Universal Functionals in Density Functional Theory, pp. 115–182.[Mie20]Mietzsch-20 N. Mietzsch, The validity of the local density approximation for smooth short range interaction potentials, J. Math. Phys., 61 (2020), p. 113503.[Mil72]Millard-72 K. Millard, A statistical mechanical approach to the problem of a fluid in an external field, J. Mathematical Phys., 13 (1972), pp. 222–226.[MP72]MarPre-72 C. Marchioro and E. Presutti, Thermodynamics of particle systems in the presence of external macroscopic fields. I. Classical case, Comm. Math. Phys., 27 (1972), pp. 146–154.[PS01]PerSch-01 J. P. Perdew and K. Schmidt, Jacob's ladder of density functional approximations for the exchange-correlation energy, AIP Conference Proceedings, 577 (2001), pp. 1–20.[Rad84]Radin-84 C. Radin, Classical ground states in one dimension, Journal of Statistical Physics, 35 (1984), pp. 109–117.[Rad04]Radin-04 height 2pt depth -1.6pt width 23pt, Existence of ground state configurations, Math. Phys. Electron. J., 10 (2004), pp. Paper 6, 7.[Rue70]Ruelle-70 D. Ruelle, Superstable interactions in classical statistical mechanics, Comm. Math. Phys., 18 (1970), pp. 127–159.[Rue99]Ruelle height 2pt depth -1.6pt width 23pt, Statistical mechanics. Rigorous results, Singapore: World Scientific. London: Imperial College Press, 1999.[RY77]RamYus-77 T. Ramakrishnan and M. Yussouff, Theory of the liquid-solid transition, Solid State Communications, 21 (1977), pp. 389 – 392.[RY79]YamYus-79 T. V. Ramakrishnan and M. Yussouff, First-principles order-parameter theory of freezing, Phys. Rev. B, 19 (1979), pp. 2775–2794.[SE77]SaaEbn-77 W. F. Saam and C. Ebner, Density-functional theory of classical systems, Phys. Rev. A, 15 (1977), pp. 2566–2568.[SG73]SimGar-73 C. S. Simmons and C. Garrod, The density of a nonuniform system in the thermodynamic limit, J. Mathematical Phys., 14 (1973), pp. 1075–1087.[Süt05]Suto-05 A. Sütő, Crystalline ground states for classical particles, Phys. Rev. Lett., 95 (2005), p. 265501.[Süt11]Suto-11 height 2pt depth -1.6pt width 23pt, Ground state at high density, Comm. Math. Phys., 305 (2011), pp. 657–710.[YFG76]YanFleGib-76 A. J. M. Yang, P. D. Fleming, and J. H. Gibbs, Molecular theory of surface tension, J. Chem. Phys., 64 (1976), pp. 3732–3747. | http://arxiv.org/abs/2310.18028v1 | {
"authors": [
"Michal Jex",
"Mathieu Lewin",
"Peter Madsen"
],
"categories": [
"math-ph",
"cond-mat.stat-mech",
"math.MP"
],
"primary_category": "math-ph",
"published": "20231027100725",
"title": "Classical Density Functional Theory: The Local Density Approximation"
} |
Deepthi Ayyagari, Soumen Datta, Saurabh Das, and Abhirup Datta (Department of Astronomy, Astrophysics and Space Engineering, Indian Institute of Technology Indore, Simrol, Indore, 453552, Madhya Pradesh, India). Corresponding author: Deepthi Ayyagari, [email protected], Tel.: +91-996-316-6161. Here, we explore the different characteristics of a possible coupling between tropospheric and ionospheric activities during the impact of tropical cyclones (TC) like Amphan and Nisarga in the Indian subcontinent. We have analyzed the effect of TCs Amphan and Nisarga on the low latitude ionosphere using measurements from several IGS stations around India and a GPS + NavIC station in Indore, India. For the first time, this study assesses the impact of tropical cyclones on the equatorial ionosphere using both GPS and NavIC. After the landfall of TC Amphan, the VTEC analysis shows a significant drop from nominal values in both NavIC and GPS, by 5.1 TECU and 3.6 TECU, respectively. In contrast to TC Amphan, Nisarga showed a rise in VTEC which ranged from 0.9 TECU in GPS to 1.7 - 5 TECU in NavIC satellites except for PRN6. The paper examines Outgoing Longwave Radiation as a proxy for the convective activity which may be responsible for the ionospheric variation through the generation of gravity waves. In addition, the horizontal neutral wind observations at the location of TC landfall confirm the presence of ionospheric disturbances. VTEC perturbation analysis using a band-pass filter reveals a variation in differential TEC values between ±0.4 and ±0.8 based on the IGS station measurements. This indicates that the gravity wave is one of the responsible mechanisms for the lower-upper atmospheric coupling during both cyclones. Keywords: Tropical cyclones; VTEC; Gravity waves. § INTRODUCTION The last five decades have seen many advances in ionospheric physics centered around issues related to ionospheric irregularities and height variations along with the peak densities of various layers of the ionosphere <cit.>. The ionosphere is primarily impacted by electromagnetic radiation from space, seasonal changes in solar flux, and sudden events on the sun, such as coronal mass ejections (CME). It is these events that cause intense ionization in the ionosphere. GPS as well as other ground-based radio systems are affected by this phenomenon, resulting in reduced operational reliability <cit.>. According to earlier studies, the lower layers of the ionosphere are coupled with the neutral atmosphere. Neutral particle dynamics are dominant in the lower atmosphere, affecting the bottom-most layers of the ionosphere. The number of neutral particles is several orders of magnitude greater than the number of ionized particles in the lower ionosphere. Because of the strong coupling between ions and neutral particles in the lower layers of the ionosphere, the neutral particles collide with ions <cit.>. Numerous experiments using sensitive instruments indicate that ionospheric disturbances are linked to atmospheric activity in the troposphere. Moreover, convective activities of the troposphere (tropical cyclones, hurricanes, typhoons, etc.) can play an integral role in triggering small and medium-scale ionospheric disturbances <cit.>. A chain of interconnected processes within the lithosphere-atmosphere-ionosphere interaction system reacts to various phenomena such as lightning discharges, high-power transmitters, high-power explosions, earthquakes, volcanic eruptions, and cyclones <cit.>.
Through the global electric circuit (GEC), thunderstorms and cyclonic storms transfer energy from the atmosphere to the ionosphere and establish an electrical connection between the atmosphere and the ionosphere <cit.>. Large-scale convection transports the charged aerosols and water droplets upward in a cloudy environment <cit.>, resulting in changes in the electric circuit between the ground and the ionosphere on a horizontal scale of hundreds of kilometers <cit.>. Furthermore, experimental observations and simulation models have revealed that atmospheric gravity waves (GW), one of the most common consequences of convective systems like cyclones or thunderstorms, can propagate upward from the convective plumes and reach ionospheric heights <cit.>. Tropical cyclones (TC) are, by definition, rotational low-pressure systems (classification presented in Table <ref>). They originate over the oceans in the equatorial region between 5° and 20° latitude. In general, the central pressure of a TC drops by 5 to 6 hectopascals (hPa) and the maximum sustained wind speed exceeds 62 kilometers per hour (kmph). A TC is a powerful vortex structure, with a diameter of 150 km to 800 km, spiraling around a center along the surface of the sea and associated with wind speeds of 300 to 500 kmph <cit.>. Hence a TC is described as an extended disturbance, with its effect spread over 1000–2000 km westward or eastward from the TC location. An in-depth study of cyclone Davina shows that the GWs produced by TCs can alter the composition of the middle atmosphere <cit.>. Following the above studies, significant disturbances in electron density measurements have been observed during the active phase of a TC in the D (60-80 km) region <cit.> and F region <cit.> of the ionosphere. Within the effect zone of a TC, the ionospheric electron density increases and reaches a maximum before landfall, and decreases approximately a day after landfall <cit.>. On the day following the landfall of a typhoon, observations from more than 50 GPS stations (along the path of typhoon Matsa) revealed a gain of 5 TECU over the average monthly value and a drop of 1 TECU below the average monthly value <cit.>. A comprehensive study from the Indian sector, for TC Mahasen and TC Hudhud, reveals an anomalous decrease of the vertical TEC (VTEC) value from the monthly mean of 3.8 TECU and 2.1 TECU, respectively, on the day of landfall <cit.>. A significant ionospheric perturbation has also been reported during another tropical cyclone over the Indian region <cit.>. The study also reveals that geomagnetic conditions should be quiet in order to investigate the passage of cyclones and their response in the ionosphere <cit.>. Based on the earlier studies, this paper presents the details of the ionospheric response to two tropical cyclones over the Indian region, i.e., TC Amphan (super cyclonic storm) and TC Nisarga (severe cyclonic storm), which hit the east and west coasts of the Indian peninsula. The TEC observations from a new constellation of Indian origin, known as NavIC (an acronym for NAVigation with Indian Constellation), are used along with the TEC estimates from the IGS stations to investigate the amount of local ionospheric perturbation caused by the TCs.
The major advantage of using data from NavIC is that it is a regional satellite navigation system, with a constellation of seven satellites (a combination of Geostationary Earth Orbit (GEO) and Geosynchronous Orbit (GSO) satellites) in its space segment. NavIC satellites transmit signals in the L5 and S1 bands, with carrier frequencies of 1176.45 MHz and 2492.028 MHz, respectively, in a 24 MHz bandwidth. It is engineered to provide positional accuracy information to Indian users as well as within a 1500 km radius around its boundary, which is defined by a rectangular grid spanning from 30°S to 50°N in latitude and 30°E to 130°E in longitude <cit.>. Unlike GPS, NavIC is designed to provide continuous spatial as well as temporal coverage (24 x 7) in these regions. The reliability of NavIC in exploring the upper atmosphere is well demonstrated by <cit.>. In general, the waveforms derived from the ion density can demonstrate how the ionospheric plasma responds to the interaction with a neutral gravity wave. Hence, a detrending of the ion density is done by calculating the temporal deviation, which gives us an idea of how the neutral wave-like gravity wave perturbs the ion density, and then we filter the deviation time series with a suitable band-pass filter in order to find the wave structure. § DATA AND METHODOLOGY A Global Navigation Satellite System (GNSS) receiver (Septentrio PolaRx5S) along with a Navigation with Indian Constellation receiver (NavIC receiver: Accord Rx), provided by ISRO-Space Application Centre, is operational at the space weather laboratory facility of the Department of Astronomy, Astrophysics and Space Engineering, IIT Indore. These multi-constellation and multi-frequency receivers log TEC data (as calculated from equation 1). The TEC data thus obtained from the receivers are used for the analysis to observe the ionospheric response to tropical cyclones Amphan and Nisarga. The TEC estimates are the slant TEC values, which depend on the length of the signal's path through the ionosphere and on the satellite elevation E <cit.>: STEC = ∫_S^R n_e ds, where R is the receiver, S is the satellite, n_e is the electron density, and ds is the path length element. To redress this effect, an estimation of the vertical TEC (VTEC) above a given point on the Earth's surface is essential. VTEC is therefore estimated with a single-layer-model mapping factor conversion, assuming all the free electrons are concentrated in a layer of infinitesimal thickness located at an altitude I_h = 350 km above the surface of the Earth (of radius R_e = 6371 km), as shown in equations (2) and (3) <cit.>: MF(E) = STEC/VTEC, MF(E) = [1 - (R_e cos(E)/(R_e + I_h))^2]^{-1/2}. Apart from the VTEC measurements over the Indore region, which is very near the northern crest of the Equatorial Ionization Anomaly (EIA), the VTEC estimates from three different stations of the International GNSS Service network (IGS: <http://sopac-csrc.ucsd.edu/>) have also been used: (i) Lucknow, located far away from the northern crest of the EIA, (ii) Hyderabad, located between the northern crest of the EIA and the magnetic equator, and (iii) Bengaluru, located near the magnetic equator. Figure <ref> depicts the tracks of both TCs along with the locations of the receivers and the distance from the landfall areas in the case of each tropical cyclone. Table <ref> presents the geographic and magnetic dip locations of the receivers used in this study.
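To make the single-layer conversion in equations (2) and (3) concrete, a minimal sketch is given below. The shell height I_h = 350 km and Earth radius R_e = 6371 km follow the values quoted above; the slant TEC value and elevation angle in the example call are arbitrary placeholders, not measurements from the receivers used in this study.

```python
import math

R_E = 6371.0   # Earth radius in km, as quoted above
I_H = 350.0    # assumed thin-shell height in km

def mapping_factor(elevation_deg):
    """MF(E) = [1 - (R_e cos E / (R_e + I_h))**2]**(-1/2), as in Eq. (3)."""
    x = R_E * math.cos(math.radians(elevation_deg)) / (R_E + I_H)
    return 1.0 / math.sqrt(1.0 - x * x)

def stec_to_vtec(stec_tecu, elevation_deg):
    """VTEC = STEC / MF(E), following Eq. (2)."""
    return stec_tecu / mapping_factor(elevation_deg)

# Example call with placeholder numbers: a 45 TECU slant measurement at 30 deg elevation.
print(stec_to_vtec(45.0, 30.0))
```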
The gravity wave measurement has been computed based on the differential TEC estimates by detrending the differential carrier phase delay with a five minutes interval(<cit.>): s= [L_1 - L_2/k] + b Δ^n s(t)= Δ^n-1 s(t) -0.5×[Δ^n-1 s(t+τ) + Δ^n-1 s(t-τ) ]n is the order of numerical difference;L_1 and L_2 are two frequency carrier phase measurements;k is the factor for conversion to the VTEC measurement; τ is the time step which is considered to be five minutes here in the current study. After detrending, the signature of the ionospheric gravity wave has been measured by application of a band-pass filter with varying bandwidth of 1 to 3mHz (milli Hertz) <cit.>. In addition, the outgoing longwave radiation (OLR) data is also used in the current analysis to probe for the energy density variation signatures. For tropical and subtropical regions, OLR values are used as proxy indicators of convection activity <cit.>. Both top-of-the-atmosphere (TOA) and surface flux observations collected by the Clouds and the Earth's Radiant Energy System (CERES) sensors on NASA's Aqua and Terra satellites are used in this study <cit.>. The daily average regional flux is estimated using diurnal models and the 0.25 x 0.25 regional fluxes at the hour of observation from the FLASHFlux monthly gridded TOA-surface flux. The data thus generated is space-time averaged data for OLR values during the whole period. Apart from OLR data the horizontal wind model (HWM14) data which is an empirical model of the horizontal neutral wind in the upper thermosphere is used. The updated model known as HWM14 consists of two parts: a quiet-time portion, and a geomagnetically disturbed portion that is dependent on the Ap index. The model currently is independent of solar activity as well as the F107 and F107A arguments. The HMW14 model provides zonal and meridional winds for specified latitude, longitude, time, and Ap index <cit.>.§ OBSERVATIONAL RESULTS§.§ Ionospheric response to TC AmphanTC Amphan is the first tropical cyclone of 2020 in the North Indian Ocean basin, originated from a low-pressure area persisting a couple of hundred km east of Colombo, Sri Lanka around 6N. Tracking northeastward as shown in Fig.<ref>(a), the disturbance organized over exceptionally warm sea surface temperatures where TC Amphan underwent rapid intensification on May 17, 2020, and became an extremely severe cyclonic storm within 12 hours. On May 18, 2020, at approximately 12 UT(h), Amphan reached its peak intensity with three(3)-minute sustained wind speeds of 240 kmph and became the only super cyclonic storm in the last two decades over the Bay of Bengal. On May 20, 2020, between 10-11 UTC, the cyclone made landfall in West Bengal. At the time, the estimated Amphan's one-minute sustained winds were 155 kmph. The maximum sustained wind speed of any TC tends to fall during landfall. It further weakened and entered into a low-pressure area away from the state of West Bengal and moved away toward Bangladesh. The Fig.<ref>(a) represents the F 10.7 solar flux units which indicates the solar activity for the TC Amphan period. Solar radio flux at 10.7 cm (2800 MHz) is a good indicator of solar activity. The F10.7 radio emissions are generated high in the chromosphere and low in the corona of the solar atmosphere. Unlike many solar indices, the F10.7 radio flux can easily be measured reliably on a day-to-day basis from the Earth’s surface, in all types of weather. Reported in solar flux units (s.f.u.), the F10.7 can vary from below 50 s.f.u. 
to above 300 s.f.u., over the course of a solar cycle. (b) shows the disturbance storm time (Dst) index, which indicates the geomagnetic activity during May 18 to 22, 2020. In general, to classify the severity of geomagnetic storms, the vital parameter is the Dst index, which measures the horizontal component of the Earth's magnetic field (H) in nanotesla (nT). During such disturbances, this field gets depressed and its magnitude, which is axially symmetric in nature, varies with the time measured from the onset of a storm. The severity of geomagnetic storms can be classified as a moderate storm (-100 nT < Dst ≤ -50 nT) and an intense storm (-200 nT < Dst ≤ -100 nT) <cit.>. Here, in the case of TC Amphan, the Dst index value did not drop below -13 nT, which is highly favourable for probing the response of the ionosphere during TC activities. Hence the TEC values from May 18 to 22, 2020, have been analyzed to detect the anomalies on the day of landfall (May 20, 2020) of Amphan. Fig. <ref> (c-f) displays the VTEC values as estimated from four different stations, namely Lucknow, Indore, Hyderabad, and Bengaluru, which fall within the 2000 km horizontal zone from the track of TC Amphan. The black solid line indicates the monthly mean VTEC values for May 2020 for each of the stations, respectively. On May 20, 2020, the VTEC values of all the available NavIC satellites, identified by their Pseudo-Random Numbers (PRNs), for the Indore NavIC receiver are shown in Fig. <ref>(a-e). In Fig. <ref>(a-e) the VTEC value is greater than the monthly mean value for all the PRNs. However, in the case of PRN6, the VTEC value is lower than its monthly mean value by 5.17 TECU. During the time of landfall, i.e., between 9 and 12 UT on May 20, 2020, PRNs 2, 3, 4, and 5 observed values higher than the monthly mean values. The increases above the monthly mean for each PRN are: PRN 2 above 1.11 TECU, PRN 3 above 4.43 TECU, PRN 4 above 2.34 TECU, and PRN 5 above 1.42 TECU, respectively. On the contrary, after 12 UT (i.e., 14:30 to 17:30 LT) a decrease in TECU has been noted for all four PRNs of NavIC, which is not reflected in the observations from PRN 3, as depicted in Fig. <ref>(a-e). Likewise, a similar observation has been noted in the VTEC values from the IGS reference stations, Fig. <ref>(b,d,e): Lucknow (above 1.81 TECU and below 3.64 TECU), Hyderabad (above 0.63 TECU and below 3.87 TECU), and Bengaluru (above 1.73 TECU and below 0.53 TECU), before and after the time of landfall relative to the monthly mean values, respectively. Fig. <ref>(a-d) shows the convective activity during the period of May 18 to 21, 2020, for TC Amphan. From the OLR surface interpolated data it is evident that there was one region of visibly increased activity between 10°N to 30°N latitude and 70°E to 90°E longitude on May 18, 2020, which spread until May 19, 2020, up to 90°E, merged abruptly during May 20, 2020, with decreased intensity, and weakened on May 21, 2020. The energy values over the Amphan trajectory grid, between 10°N to 20°N latitude and 80°E to 90°E longitude, varied from 240 W/m^2 on May 18, 2020, to 200 W/m^2 on May 19, 2020, below 180 W/m^2 on May 20, 2020, and finally 160 W/m^2 on May 21, 2020. However, the energy values between 20°N to 30°N latitude and 70°E to 90°E longitude, which is a (10° x 20°) grid, remained uninterrupted, with constant energy values above 300 W/m^2 before and after arrival as well as during the landfall of TC Amphan.
Such low values of OLR during the landfall day are in agreement with earlier findings which clearly indicate the gravity wave energy of TCs is associated with low OLR values <cit.>. §.§.§ Ionospheric response to TC Nisarga The second cyclone of the annual cyclone season, Nisarga originated as a depression in the Arabian Sea and moved gradually northward. In between 12-14 UT on June 2, 2020, the deep depression intensified into a cyclonic storm and thereby receiving the name Nisarga. It later intensified into the Deep Depression on the same day. TC Nisarga reached the peak intensity of 110 kmph which makes as a Severe Cyclonic Storm whereas a one-minute mean wind speed was 140 kmph which makes as a category 1 tropical cyclone. At 7 UT on June 3, 2020, Nisarga made landfall near the town of Alibag, at peak intensity which is in the vicinity of 700 km away from the Indore(NavIC) station Figure 1(b) represents the trajectory of TC Nisarga from 1 to June 3, 2020. The Fig.<ref> (a-g) presents the ionospheric response to TC Nisarga from June 1 to 5, 2020, for various stations as shown in Figure 1(b). The Fig.<ref> (a) represents the F 10.7 solar flux units which indicate the solar activity for the TC Nisarga period. (b) shows the Dst index for this period which again has not dropped below the value of -30nT. Hence, TC conditions are much more favorable to record the ionospheric response during this period. On June 3, 2020, the VTEC values of all the available 6 PRNs for the Indore-NavIC station (Fig.<ref>(a-e)), the TECU value is higher than the average of the mean of the month VTEC value except for PRN6( which is below 1.8TECU below the monthly mean value) during the time of landfall i.e between 6 to 9 UT(hours) when the maximum sustained wind speed reached the highest on June 3, 2020, whereas other reported values higher than monthly mean values, particularly during the time of landfall, PRN 2 above 3.92 TECU, PRN 3 above 4.88TECU, PRN 4 above 5.12TECU and PRN 5 above 1.74TECU respectively. Unlike TC Amphan before and after 6 to 9 UT (i.e 11:30 to 14:30 LT) increase of TECU has been noted on all 6 of PRNs.A similar increase has been observed in the VTEC values as recorded by the IGS-reference stations(Fig.<ref>(c-e-f-g)), Hyderabad (above 0.9TECU), and Bengaluru (above 0.35TECU), before and after the time of landfall from the values of the monthly mean values respectively. The station Lucknow and Indore GPS have no VTEC measurements before and during the landfall time, but the mean values remained lower than the landfall day VTEC values. The Fig.<ref>(a-e) shows the convective activity during the period of June 1 to 5, 2020, for TC Nisarga. From the OLR surface interpolated data it is evident that the energy values between 10N to 30 N Lat and 70E to 80E Lon on June 1, 2020, remained in the range of 260 to 280W/m^2till June 3, 2020, up to 85E has reduced below 260 W/m^2 on the landfall day of Nisarga. The energy values over the regions of the Nisarga trajectory grid that is between 10N to 20 N Lat and 70E to 80E Lon have varied from 300W/m^2 on June 1, 2020, to 240W/m^2 June 4, 2020, finally revived back to 300W/m^2 on June 5, 2020. The behavior of OLR values is consistent with that of TC Amphan and early findings. The OLR values remained to be lower than the surrounding OLR values especially in the region of TC Nisarga and during the landfall. 
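The regional OLR averages quoted in the two analyses above can be reproduced with a simple area-weighted box mean over the gridded field. The sketch below is a hedged illustration: the array is a synthetic placeholder rather than CERES/FLASHFlux data, and only the 0.25° x 0.25° grid spacing mirrors the product described in the methodology.

```python
import numpy as np

lats = np.arange(-90.0, 90.25, 0.25)    # grid-cell centres (degrees)
lons = np.arange(0.0, 360.0, 0.25)
olr = 200.0 + 50.0 * np.random.rand(lats.size, lons.size)   # placeholder W/m^2 field

def box_mean(field, lat0, lat1, lon0, lon1):
    ilat = (lats >= lat0) & (lats <= lat1)
    ilon = (lons >= lon0) & (lons <= lon1)
    # cosine-of-latitude weights give an area-weighted mean on the regular grid
    w = np.cos(np.radians(lats[ilat]))[:, None] * np.ones(ilon.sum())
    sub = field[np.ix_(ilat, ilon)]
    return float((sub * w).sum() / w.sum())

print(box_mean(olr, 10.0, 20.0, 80.0, 90.0))   # e.g. the 10-20 N, 80-90 E box
```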
The above observations clearly show that for both TCs, intense convective activity was observed over the land region before and after the landfall of the TCs as well as when the TCs were in their active stages. The regions of merged convective activities during Amphan and Nisarga can be a probable explanation for the anomaly observed in VTEC values just after the landfall time on May 20 and June 3, 2020, respectively, but the fact that VTEC values rose during such intense convective activity can open up new studies in this field. This observation is in contrast to earlier observations made in the Indian sector <cit.>. §.§ Possible Mechanism: Gravity Wave Signature It is noteworthy that the horizontal neutral wind and temperature play a key role in the propagation of the gravity wave through the atmosphere <cit.>. The horizontal neutral wind in the upper thermosphere has been defined using the Horizontal Wind Model 14 (HWM14), an empirical model <cit.>. The signature of the background atmospheric wind observations on May 20 and June 3, 2020, is quite evident. The variation of atmospheric horizontal wind velocity with altitude, as measured at the locations Bakkhali, West Bengal, and Alibaug, Maharashtra, is shown in Figure <ref> (a) & (b). A strong eastward wind has been observed above an altitude of 175 km, which was along the northeast direction one hour before and towards the southeast direction one hour after (not shown in the Figure) on May 20. Hence the ionospheric perturbation or traveling ionospheric disturbance signature will be more evident towards the west (opposite to the horizontal wind) than towards the east. Statistical investigations <cit.> and theoretical studies <cit.> have reported that gravity waves mostly propagate roughly against the neutral wind. This happens because gravity waves propagating along the wind have a smaller vertical wavelength than gravity waves propagating against the wind, which results in dissipation from viscosity at lower thermospheric altitudes and critical-level filtering <cit.>. We could not investigate the signatures of GW in multiple directions due to the limited number of navigation receivers; this is also not within the scope of this study and would be incorporated in future work. A detailed investigation has been done based on the perturbed TEC estimates during the TC periods of Amphan and Nisarga. The preliminary analysis shows ionospheric disturbances after TC Amphan. The TEC estimates were measured from two IGS stations, Lucknow and Bengaluru. During the analysis, a sudden perturbed signature in the TEC estimates of the IGS Lucknow station is observed, and the variation of absolute VTEC with the signature of perturbation from two different satellite measurements on May 20, 2020, is shown in Fig. <ref>. The color bar shows the range of the perturbation amplitude in the absolute VTEC estimates. However, the signatures from Bengaluru do not show any such anomaly. It is fascinating to note that the landfall of TC Amphan began from 11 UT on May 20, 2020, and the movement of the convective system along with the IPP locations of the satellite trajectories indicates the possible motion of neutral particles to the lower ionosphere. The retrieved GW measurements from different receivers, obtained by applying the band-pass filter, also have very significant signatures.
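For reference, the two-step procedure used here, detrending the TEC series with the recursive five-minute difference Δ^n s(t) = Δ^{n-1}s(t) - 0.5[Δ^{n-1}s(t+τ) + Δ^{n-1}s(t-τ)] and then band-pass filtering the residual in the 1-3 mHz band, can be sketched as follows. The 30 s sampling interval, the filter order, and the synthetic input series are assumptions made for illustration only; they are not the actual receiver data or processing settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1.0 / 30.0        # assumed sampling rate (Hz) for 30 s TEC samples
TAU = 10               # 5-minute shift expressed in samples at 30 s sampling

def detrend(series, order=1, tau=TAU):
    """Recursive difference s(t) - 0.5*[s(t+tau) + s(t-tau)], applied `order` times."""
    s = np.asarray(series, dtype=float)
    for _ in range(order):
        s = s[tau:-tau] - 0.5 * (s[2 * tau:] + s[:-2 * tau])
    return s

def bandpass_1_to_3_mhz(x, fs=FS):
    sos = butter(4, [1e-3, 3e-3], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Synthetic example: slow diurnal variation + a 2 mHz wave + noise (placeholder data).
t = np.arange(0.0, 6 * 3600.0, 30.0)
tec = 20 + 5 * np.sin(2 * np.pi * t / 86400.0) + 0.3 * np.sin(2 * np.pi * 2e-3 * t)
tec = tec + 0.05 * np.random.randn(t.size)
wave = bandpass_1_to_3_mhz(detrend(tec))
print(np.abs(wave).max())  # peak of the band-passed residual (the differencing applies a frequency-dependent gain)
```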
The signal amplitude observed is above the value of ±0.1 TECU which reaches up to 0.4 TECU to 0.6 TECU for the measurements from Lucknow station and 0.3TECU for Bengaluru station (Fig.<ref>). The dominant frequency for every cases obtained of about 2.0 mHz. It indicates the possible role of gravity wave behind the generation of this ionospheric perturbation signature. The same is in accordance with absolute VTEC signatures of ionospheric perturbations measured from Bengaluru and Lucknow IGS stations where the signatures obtained from Lucknow station are more evident when compared to Bengaluru station. The nature of perturbation in the local ionosphere is quite evident from two of the satellite observations on May 21, 2020. The variation of VTEC measurement with its differential measurement from IGS stations Lucknow and Bengaluru on this day is shown in Fig. <ref> The gravity wave signature is significant for this day also where the amplitude variation reached up to ± 0.4 TECU for both the stations. (Fig.<ref>).Later, as a follow-up to the TC Amphan analysis, a similar analysis has been performed with the TEC estimates during the TC Nisarga period from the data extracted from IGS Lucknow and Bengaluru stations. Fig.<ref> shows the absolute VTEC measurement with perturbation signatures from two different satellites on June 3, 2020, during different time on this day. The variations of the color in Fig.<ref> indicate the amplitude of perturbation.Gravity wave signature based on differential VTEC measurement is shown in Fig.<ref> The observation states that a significantly low amplitude was detected from both the Bengaluru and Lucknow stations TEC analysis. A strong westward zonal wind from 100km altitude has been observed from the wind density variation at Alibaug, Maharashtra (18.65N Lat. and 72.87E Lon. at 10 UT) during 10 UT as observed in Fig. <ref> (b). Even-though, a sharp peak in the southward meridional wind has been observed around 180km altitude but the combined variation was westward for higher altitudes. Hence the significant ionospheric perturbance signature is expected to be observed along the eastward direction due to the propagation of gravity waves against the wind direction. The possible reason for the low amplitude signature in IGS Bengaluru and Lucknow is the long range of observation area from the possible source.The VTEC observation from three IGS stations (Bengaluru, Lucknow, and Hyderabad) on June 4, 2020, is observed as shown in Fig.<ref>. It is quite indicative that a different range of perturbations has been observed from various stations. The variations observed at Hyderabad station are higher when compared to the other two stations. The signal waveform, as an outcome of the bandpass filter, also indicate the same and is shown in Fig.<ref>. A higher amplitude variation has been observed from the Hyderabad station which reached up to 0.6 TECU. The results obtained from IGS Lucknow and Bengaluru are also similar for this day. The amplitude of the perturbation has reached 0.4 TECU for the observations both from IGS Lucknow and Bengaluru stations. These different signatures obtained from different stations' observations can be attributed to the possible effect of wind flow. The IPP position of the satellite trajectories is directly opposite to the direction of zonal wind for the Hyderabad station and it is closer to the cyclone strike area. 
Hence the dissipation from viscosity will be less at a lower thermospheric altitude around IGS Hyderabad station and the effect is more for the other two stations also which has been reflected in our observation. A continuous six days(June 1 to 6, 2020) measurement of gravity wave by two different satellites (PRN 25, 28, and 29) from Bengaluru station has been shown in Fig. <ref>. It has been observed from all satellite measurements that a higher perturbation occurred on June 3 and 4, 2020, and gradually decreased in the next two days. Due to the unavailability of data from June 1-3, 2020, the investigation on these days can't be carried out from Hyderabad station observations. However, the signature from the Bengaluru station indicates the possibility of the generation of gravity waves that perturbed the ionosphere after the landfall of the cyclone. § DISCUSSIONS AND CONCLUSIONSThe analysis highlights the observation using NavIC, which is one of the recent regional navigation satellite systems launched specifically for the Indian subcontinent, on the effects of tropical cyclones Amphan and Nisarga on the equatorial ionosphere. As all the observations are registered at low geomagnetic conditions, they would aid in the detection of ionospheric perturbations in response to large-scale tropospheric activities. The results showed a sharp increase in VTEC for all the stations during the time of landfall and the values increased further on the next day of landfall in both the TC cases. These observations are contrary to earlier observations made by <cit.> but in agreement with <cit.>. However the OLR signatures were in accordance with earlier studies during both the cyclones <cit.>.In general, the rise of VTEC values above the monthly mean on the day of the landfall of the TCs has been observed. This observation, to some extent, is qualitatively in agreement with previous literature <cit.>. As far as the disturbances produced by the landfall of TCs are concerned, the magnitude of VTEC increment produced by the TCs over the Bay of Bengal is much higher than the previously published results. These are some new observations that are the first of their kind from the Indian sector using a combination of NavIC and GPS.To understand the possible mechanism and the level of ionospheric perturbation the horizontal meridional wind model data is utilized in this analysis. Based on the signatures of the HMW14 the gravity wave signature is explored for individual satellites. The waveform retrieved from the ion density was used in this study to observe the response of the ionospheric plasma to the interaction of neutral gravity waves. The variation of absolute VTEC along with the perturbation signature from different satellite measurements from Lucknow which are above 0.4 TECU, indicate the presence of gravity wave during TC Amphan. Likewise, significant signatures have been obtained for the case of TC Nisarga also. In both cases the role of zonal and meridional winds for the propagation of gravity wave has been found. Though a significantly low amplitude signal above 0.2 TECU has been detected from the Bengaluru station which is indicative of the far-field observation and the depletion of the perturbation signature against the meridional wind direction for TC Nisarga. These are clear signatures of gravity wave-induced perturbation of the ionosphere during these two cyclones.It is however to be noted that these preliminary results are affirmative and indicate the possible effect of cyclones on the ionosphere. 
However, the study presented here uses a limited number of GPS stations. Hence, it is not possible to completely separate the effects of other probable sources which may also affect the ionosphere. Furthermore, to understand such possible mechanisms, data from a larger number of such observations around a TC-affected region (within and beyond the 800-1000 km range) can be an extension to this study. § ACKNOWLEDGMENTS DA acknowledges the Department of Science and Technology for providing her with the INSPIRE fellowship grant to pursue her research. SAC, ISRO is further acknowledged by the authors for providing the NavIC receiver (ACCORD) under NGP-17 to the Department of Astronomy, Astrophysics and Space Engineering, IIT Indore. The authors would also like to acknowledge Prof. Gopi Seemala of the Indian Institute of Geomagnetism (IIG), Navi Mumbai, India for providing the software (<https://drive.google.com/file/d/1XgwY8iBtoHvqz8IVc4J8dGirgTs5BVwL/view?usp=sharing>) to analyze the IGS data (<http://sopac-csrc.ucsd.edu/index.php/data-download/>). Further acknowledgements go to the World Data Center for Geomagnetism, Kyoto for the Dst index data accessible via <http://wdc.kugi.kyoto-u.ac.jp/kp/index.html> and the Space Weather Prediction Center (SWPC) under the National Oceanic and Atmospheric Administration for the F10.7 data archives accessible via <https://lasp.colorado.edu/lisird/>. In addition, the authors also thank NASA's CCMC for the HWM14 model data accessible via <https://kauai.ccmc.gsfc.nasa.gov/instantrun/hwm>. | http://arxiv.org/abs/2310.18114v1 | {
"authors": [
"Deepthi Ayyagari",
"Soumen Datta",
"Saurabh Das",
"Abhirup Datta"
],
"categories": [
"physics.ao-ph"
],
"primary_category": "physics.ao-ph",
"published": "20231027125859",
"title": "Ionospheric response during Tropical Cyclones-a brief review on Amphan and Nisarga"
} |
Fully Relativistic Entanglement Harvesting Eduardo Martín-Martínez January 14, 2024 ========================================== Social and behavioral determinants of health (SDOH) play a significant role in shaping health outcomes, and extracting these determinants from clinical notes is a first step to help healthcare providers systematically identify opportunities to provide appropriate care and address disparities. Progress on using NLP methods for this task has been hindered by the lack of high-quality publicly available labeled data, largely due to the privacy and regulatory constraints on the use of real patients' information. This paper introduces a new dataset, SDOH-NLI, that is based on publicly available notes and which we release publicly.[<https://github.com/google-research-datasets/SDOH-NLI>] We formulate SDOH extraction as a natural language inference (NLI) task, and provide binary textual entailment labels obtained from human raters for a cross product of a set of social history snippets as premises and SDOH factors as hypotheses. Our dataset differs from standard NLI benchmarks in that our premises and hypotheses are obtained independently. We evaluate both "off-the-shelf" entailment models as well as models fine-tuned on our data, and highlight the ways in which our dataset appears more challenging than commonly used NLI datasets. § INTRODUCTIONThere has been growing recognition that social and behavioral determinants of health (SDOH) play a significant role in shaping health outcomes for individuals and populations. The ability to accurately identify and extract social and behavioral determinants of health from clinical notes can provide valuable insights that can enable healthcare providers to better understand and address the underlying determinants of health that contribute to poor health outcomes and health disparities.Social determinants of health are frequently recorded in clinical notes as unstructured text, so natural language processing (NLP) can be a valuable tool for extracting actionable insights for care teams. However, research in this area often uses patient records from private health systems' electronic health records (EHRs), which makes it difficult to compare results from other health systems or even replicate the studies. The development and release of high-quality publicly available datasets could enable more reproducible research in this area.In this work, we introduce a new, public SDOH dataset based on <MTSamples.com>, an online collection of transcribed medical reports. Our setup is motivated by the use cases of slicing patient populations along social determinant dimensions for population analytics, and retrieving patients with certain social determinants of health to allow for more targeted outreach and intervention. Given a large set of social determinant factors, our goal is to make binary determinations for each patient about whether that patient's notes imply a particular SDOH factor. In other words, for example, we want to be able to find all patients who lack access to transportation, as opposed to just tagging transportation-related spans in their notes, as done in some previous work.To achieve this goal, we formulate the task as a textual entailment problem, with patient note snippets as the premises, SDOH factors as the hypotheses, and binary entailment labels. 
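A minimal sketch of this formulation is shown below; the snippets and factor statements are invented stand-ins rather than items from the released data, and the labels are left to be filled in by raters. Every (snippet, factor) pair in the cross product becomes one binary entailment example.

```python
from itertools import product

snippets = [
    "He quit smoking 10 years ago.",
    "She lives with her husband and works as a teacher.",
]
factors = [
    "The person was a smoker in the past.",
    "The person currently smokes.",
    "The person is employed.",
]

# One binary entailment example per (snippet, factor) pair; labels come from raters.
examples = [
    {"premise": s, "hypothesis": f, "label": None}
    for s, f in product(snippets, factors)
]
print(len(examples))   # |snippets| x |factors| pairs
```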
We use human annotators to label 1,398 social history snippets according to a curated list of 60 SDOH statements, resulting in a dataset of 29,635 labeled premise-hypothesis examples after some filtering (see Section <ref> for details). We release this dataset publicly. We also evaluate state-of-the-art publicly available large language models on our data in a range of different settings (see Section <ref>).A notable feature of our entailment dataset is that unlike other entailment datasets, our premises and hypotheses were obtained independently, and we label the full cross product of premises and hypotheses. In traditional entailment datasets, the hypotheses areconstructed to be tied to a particular premise; however, in our formulation, the hypotheses are drawn from a large set of SDOH factors that may or may not be discussed in a particular premise (drawn from a clinical note).Since all our text comes from the same domain, we still have a non-negligible fraction of positive entailment labels (albeit with a much larger label imbalance than in standard NLI benchmark datasets).This requires NLI methods to understand both the premise and the hypothesis, and defeats typical shortcuts that have been observed to work for other entailment datasets, such as guessing the label based on the hypothesis alone, or relying on simple syntactic clues such as the presence of negations (see Section <ref>). Indeed, even though our task does not require domain-specific knowledge, we observe that state-of-the-art models struggle to generalize from common NLI benchmarks to our dataset, and highlight typical failure cases (see Section <ref>).We evaluate both off-the-shelf and fine-tuned models in different setups: dual encoders, zero/few-shot prompting, and binary classification. We show that state-of-the-art off-the-shelf models, even if they were fine-tuned on various NLI datasets, do not reliably solve our problem; on the other hand, models fine-tuned on our training set robustly generalize both to unseen notes and to unseen factors, providing evidence for the usefulness of our dataset for model fine-tuning and evaluation. § RELATED WORK §.§ Social and Behavorial Determinants of HealthThere has been a lot of interest in using NLP techniques for SDOH extraction; see <cit.> for a recent survey. The range and granularity of SDOH factors vary considerably across different papers. There has also been a range of methods used, from rule-based heuristics, to n-grams, to fine-tuning pretrained Transformer models.Many previous research studies on SDOH extraction were performed on EHR data from particular health systems and are not released publicly. Exceptions include the i2b2 NLP Smoking Challenge <cit.>, which classified 502 deidentified medical discharge records for smoking status only, and small number of datasets based on MIMIC-III <cit.>, a large publicly available database of deidentified health records of patients who stayed in critical care units of Beth Israel Deaconess Medical Center between 2001 and 2012. 
For example, <cit.> annotated MIMIC-III with binary labels for certain "phenotypes" including alcohol and substance abuse, and <cit.> annotated note spans with SDOH information.We are aware of four other previous SDOH-related papers which used MTSamples data <cit.>, all of which focused on extracting and tagging SDOH-related spans from social history sections.Among previous papers, the one methodologically closest to ours is <cit.> which also formulated SDOH extraction as an entailment problem and evaluated RoBERTa <cit.> fine-tuned on ANLI <cit.> on clinical notes from UCSF, without any in-domain fine-tuning experiments. §.§ Natural Language Inference Natural language inference (NLI), also called recognizing textual entailment (RTE), has been a very well-studied NLP task; see e.g. <cit.> for recent surveys. Many standard NLI datasets, such as SNLI <cit.> or MultiNLI <cit.>, are obtained by automatically collecting premises and, for each premise and target entailment label, having human annotators write a hypothesis with the specified entailment relation to the premise. (Even some of the datasets specifically designed to address these datasets' shortcomings, such as ANLI <cit.>, follow a similar setup.) It has been observed that this leads to annotation artifacts that lets models do well on these tasks without requiring true understanding of entailment, including by using simple syntactic heuristics <cit.> or by completely ignoring the premise, and considering only the hypothesis <cit.>. ContractNLI <cit.> is an example of a previous dataset which used the same fixed set of hypotheses for all the premises.§ DATASET CONSTRUCTION We scraped all 5,003 medical reports from MTSamples. Within these reports, we obtained 1,030 note sections related to social history by searching for a collection of note section titles identified as synonyms of social history. We then split each note section into sentences, resulting in 3,281 sentences. Many sentences (such as "He is married") appear in multiple notes; after deduplication, we have 1,398 unique sentences.We manually curated a collection of SDOH factors primarily from two sources, the AHRQ Social Determinants of Health Database <cit.> and UCSF SIREN's Compendium of medical terminology codes for social risk factors <cit.>. We rephrased each factor as a full English sentence stating a fact about a person; e.g. "The person is employed." For binary factors, we included a statement and its negation; e.g. "The person currently drinks alcohol" and "The person currently does not drink alcohol." For factors with multiple potential values (such as housing status), we aimed to list all the common options. We grouped these 60 statements into 10 categories: smoking, alcohol use, drug use, employment, housing, food, transportation, health insurance, social support, and financial situation. See the full list in Appendix <ref>. For each social history sentence, we asked human raters to select all the relevant categories and, within each category, all the statements that are supported by the social history snippet. Each snippet was rated by at least three raters. For each (snippet, statement) pair, we took the majority vote of raters to get binary entailment labels. 
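A minimal sketch of this aggregation step follows; the per-pair ratings are hypothetical, and an odd number of raters per pair is assumed so that the majority vote is always well defined.

```python
from collections import Counter

# Hypothetical per-pair ratings from three raters (1 = entailed, 0 = not entailed).
ratings = {
    ("He quit smoking 10 years ago.", "The person was a smoker in the past."): [1, 1, 1],
    ("He quit smoking 10 years ago.", "The person currently smokes."): [0, 0, 1],
}

labels = {
    pair: Counter(votes).most_common(1)[0][0]
    for pair, votes in ratings.items()
}
print(labels)   # majority-vote binary label per (snippet, statement) pair
```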
Rater agreement was high, with a Krippendorff's alpha of 0.97 (computed over the binary entailment labels provided by different raters for (premise, hypothesis) pairs).We removed all SDOH statements which were not entailed by any of the social history snippets as well as all note sentences that were not relevant to any SDOH categories. Finally, after inspecting the dataset, we removed three SDOH factors ("The person is stably housed.", "The person has social support.", "The person does not have social support.") because raters weren't able to consistently give correct ratings. That left us with 787 unique sentences and 38 SDOH factors. Since each snippet would typically only entail one factor or a small number of factors, the resulting dataset is heavily imbalanced: only 4.6% of the labels are positive. We split the dataset along the snippets into training, validation, and test sets with a 70:15:15 ratio, with the following modification: we remove a single pair of factors, "The person lives alone" and "The person does not live alone," from the training and validation sets, in order to evaluate fine-tuned models' ability to generalize to unseen factors. In other words, the training, validation, and test sets have disjoint note snippets but the same SDOH factors, except for the test set which also contains an additional two factors that are not present in the training or validation sets.§ MODEL EVALUATION Since the typical use case is retrieving patients with a particular SDOH factor, and because of the heavy label imbalance in our dataset, we use precision, recall, and F1 score as our evaluation metrics.We evaluate state-of-the-art public models in four different setups: * Treating the problem as a retrieval task with SDOH factors as queries and note snippets as documents. We evaluate Sentence-T5 <cit.> and GTR <cit.>, two state-of-the-art dual encoder models. We select the cosine similarity threshold which maximizes F1 score on the training set. * A general-purpose NLI model. We evaluate the seNtLI model <cit.>, a state-of-the-art NLI model (T5 large fine-tuned on the SNLI <cit.>, MNLI <cit.>, ANLI <cit.>, Fever <cit.>, and VitaminC <cit.> datasets). To help the model adapt to the label imbalance in our dataset, we also evaluate it with re-tuning the threshold for positive prediction to maximize F1 score on our training set. * Zero/few-shot prompting. We evaluate Flan-T5 XXL <cit.> and Flan-UL2 <cit.> in both a zero- and a few-shot setting. These models instruction tuned for a large set of tasks, including NLI tasks. (See Appendix <ref> for details.) * Fine-tuning experiments. We fine-tune T5 and Flan-T5 on our dataset (SDOH-NLI), on ANLI, and on a mixture of both at various model sizes. (See Appendix <ref> for details.)§.§ ResultsTable <ref> shows selected results (see Appendix <ref> for more model fine-tuning metrics). First, we observe that while our problem can naturally be framed as an information retrieval problem, even state-of-the-art retrieval models perform poorly on it; formulating the problem as natural language inference yields dramatically better results.However, even powerful models fine-tuned for NLI, either on NLI datasets alone or as part of the much larger Flan collection, do not reliably solve our problem, with an F1 score of at most .67 on the test set. 
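For concreteness, the thresholding used for the dual-encoder baseline described above (choosing the cosine-similarity cutoff that maximizes F1 on the training split) can be sketched as follows; the similarity scores and labels are random placeholders rather than Sentence-T5/GTR outputs on SDOH-NLI.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
sims = rng.uniform(-1.0, 1.0, size=500)   # cosine similarities (placeholder values)
y_true = rng.integers(0, 2, size=500)     # binary entailment labels (placeholder values)

thresholds = np.unique(sims)
best_t = max(thresholds, key=lambda t: f1_score(y_true, sims >= t, zero_division=0))
print(best_t, f1_score(y_true, sims >= best_t, zero_division=0))
```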
Even T5 small fine-tuned on our dataset outperforms the largest off-the-shelf models, highlighting the added value of our dataset. When prompting instruction-tuned models, including few-shot examples does not appear to add any value compared to a zero-shot setup. We conjecture that this is because the models are already familiar with the NLI task and the prompt format from instruction tuning, and presenting them with a small number of additional examples is not sufficient for teaching them the difference between this task and the NLI benchmark datasets they have seen during fine-tuning. For fine-tuned models below XXL size, when fine-tuning on the SDOH-NLI training set only, we observed poor generalization to the held-out SDOH factors in the test set (see Appendix <ref> for detailed metrics). Because of this, we experimented with fine-tuning on a combination of our data and ANLI R1, a challenging general-purpose NLI dataset of a similar size. For smaller models, fine-tuning on this mixture enabled robust generalization to unseen factors without sacrificing overall test performance. §.§ Discussion What makes our dataset challenging for models fine-tuned on standard NLI datasets? We emphasize that it is not that it requires any specialized (e.g., medical) knowledge: most of our examples describe everyday situations in plain English (e.g., "The patient is retired on disability due to her knee replacements"); human raters without any domain-specific training had no difficulty understanding them. We conjecture that the main difference is that the hypotheses of typical NLI datasets are written to satisfy a given entailment label for a given premise, whereas ours are obtained independently from the premises, and therefore their entailment status can be more subtle and ambiguous. These models also struggled with distinguishing between statements about the present and the past; e.g., Flan-UL2 erroneously predicting that "He is not a smoker" entails "The person wasn't a smoker in the past." Also, our dataset contains a lot of irrelevant hypotheses and requires the model to correctly classify all of those; we see off-the-shelf models occasionally giving them positive labels (e.g. predicting that "She was living alone and is now living in assisted living." implies "The person drank alcohol in the past."), hurting precision. As an example, Flan-T5 fine-tuned on ANLI had comparable recall to our best models, but much worse precision. § CONCLUSION In this work, we introduced SDOH-NLI, a new entailment dataset containing social history snippets as premises and social and behavioral determinants of health as hypotheses. Our dataset was designed both to reflect realistic use cases of SDOH extraction in clinical settings and to provide a high-quality entailment dataset to support broader NLI research. We evaluated baseline methods using state-of-the-art public models in a variety of setups, and highlighted novel and challenging features of our dataset. § ACKNOWLEDGMENTS The authors would like to thank Jonas B. Kemp, Donald Metzler, Von Nguyen, Birju Patel, Martin G. Seneviratne, Andy Strunk, and Vinh Q. Tran for their valuable feedback and discussions. § LIMITATIONS Our dataset is English-only, and reflects the American healthcare system. While a lot of the social and behavioral determinants of health mentioned in the data could apply elsewhere, too, their distribution and the language used to describe them could reflect U.S. norms.
Also, since the dataset is based on transcription samples, the text can be cleaner than in some other settings (such as notes in EHRs), where the task could be more challenging due to typos and abbreviations that do not appear in our dataset. In such settings, performance could be improved by first using the methods of <cit.> to decode abbreviations, but we have not included this in our evaluations. Finally, our dataset only contains short note snippets, and we have not evaluated the models' ability to reconcile contradictory statements or reason about the chronology of information in longer patient records. For longer contexts, especially if the social history sections don't fit in the Transformer model's context window, we recommend evaluating the methods of <cit.>. § ETHICS STATEMENT Machine learning research in the clinical domain has many ethical considerations. Given that the nature of this work is to identify opportunities for care teams to improve the health outcomes of patients due to factors commonly not addressed, we think it contributes to human well-being and avoids harms. The use of public, non-identifiable data that is released for the NLP community helps balance the need to have reproducible (i.e., honest and trustworthy) data to enable technical advances while limiting the need for sensitive, private medical records for research. We acknowledge that the purpose of the work is to identify SDOH to provide additional help and services to patients, and we warn against any use to deny patients care or services based on any factor that can be identified. Although we picked a large number of SDOH factors to test our method, we acknowledge that there may be additional factors that are important for specific patients and populations, so we encourage researchers to reflect on those possible factors and create datasets to help others study them, as well. § LABELING SETUP Raters were given the following instructions: "In this task, you will be given a list of snippets from transcribed medical records, describing a person's social history. Your job is to select the categories that are relevant to the snippet and, in each category, select statements about the person that are supported by the snippet. Only select categories and statements relevant to the subject of the sentence, NOT if they apply to someone else (such as the subject's relatives). The source of the medical transcript samples is a public dataset (MTSamples.com), and not your, other raters', or users' health transcript data." For each snippet, they were required to select one or more of the following categories (including "None of the above" if none of the categories were relevant).
If they selected a category, the statements within the category would be displayed, and they were required to select one of statements within that category, or "None of the above":* Smoking * The person is currently a smoker.* The person is currently not a smoker.* The person was a smoker in the past.* The person wasn't a smoker in the past.* None of the above* Alcohol use * The person currently drinks alcohol.* The person currently does not drink alcohol.* The person drank alcohol in the past.* The person did not drink alcohol in the past.* None of the above* Drug use * The person is a drug user.* The person is not a drug user.* The person was a drug user in the past.* The person wasn'ta drug user in the past.* The person uses cocaine.* The person does not use cocaine.* The person used cocaine in the past.* The person did not use cocaine in the past.* The person uses marijuana.* The person does not use marijuana.* The person used marijuana in the past.* The person did not use marijuana in the past.* The person uses opioids (e.g., heroin, fentanyl, oxycodone).* The person does not opioids (e.g., heroin, fentanyl, oxycodone).* The person used opioids in the past (e.g., heroin, fentanyl, oxycodone).* The person did not use opioids in the past (e.g., heroin, fentanyl, oxycodone).* None of the above* Employment * The person is employed.* The person is not employed.* The person is employed part time.* The person is a student.* The person is a homemaker.* The person is retired due to age or preference.* The person is retired due to disability.* The person is retired due to an unknown reason.* None of the above* Housing * The person lives in their own or their family's home.* The person lives in a housing facility.* The person is homeless.* The person is stably housed.* The person's housing is unsuited to their needs.* None of the above* Food * The person is able to obtain food on a consistent basis.* The person is not able to obtain food on a consistent basis.* The person has consistent fruit and vegetable intake.* The person does not have consistent fruit and vegetable intake.* None of the above* Transportation * The person has access to transportation.* The person does not have access to transportation.* The person has access to a car.* The person does not have access to a car.* The person has access to public transit.* The person does not have access to public transit.* The person has issues with finding transportation.* None of the above* Health insurance * The person has private health insurance.* The person is on Medicare.* The person is on Medicaid.* The person does not have health insurance.* None of the above* Social support * The person has social support.* The person does not have social support.* The person lives alone.* The person does not live alone.* None of the above* Financial situation * The person is below the poverty line.* The person is above the poverty line.* The person is able to afford medications.* The person is not able to afford medications.* None of the above* None of the above § PROMPTING SETUP Our goal with the design of our zero/few-shot experiments was to stay close to how the models were trained and evaluated on similar NLI tasks. Since the models we used were fine-tuned on the Flan collection <cit.>, which includes several NLI datasets, we reused some of the Flan collections's prompt templates.In particular, we used the templates for the SNLI and MNLI dataset which were the most relevant to our data. 
(We excluded templates for other datasets such as RTE, ANLI, or WNLI because the wording of some of the prompts could be slightly misleading; e.g. by referring to the premise as a "paragraph.") For each example, we chose a prompt template uniformly at random from the 19 templates. For few-shot experiments, we first picked a random positive example from the training set, then picked the other few-shot examples uniformly at random from the training examples. (Without explicitly forcing at least one example to be positive, it would be very likely that all the few-shot examples would be negative, given the label imbalance in the dataset. We tried dropping that requirement and sampling all examples at random, which resulted in a slight drop in model performance, as we expected.) Since SNLI and MNLI have three answer options ("yes," "it is not possible to tell," "no"), we kept all three of these options in the prompt template, even though our dataset is binary. We experimented with dropping either the "no" or the "it is not possible to tell" option; both of these deviations from the original prompts resulted in slight decreases in model performance. We use rank classification to obtain binary labels (as is customary): i.e., instead of decoding a model prediction, we score the three answer options (for three copies of each input example) and consider the prediction positive if "yes" has the highest score. Similarly to our experiment with seNtLI, we also tried taking softmax over the three options and using a fixed threshold for "yes" which maximizes F1 score on the training set; this resulted in a small drop in test F1 score (e.g., .6501 to .6222 for Flan-UL2). § FINE-TUNING SETUP All models were fine-tuned using the T5X framework <cit.> on TPUv3 chips for 10k steps with batch size 32 and learning rate 1e-4, with the exception of the T5 small size models, which were fine-tuned for 50k steps. For each model, we picked the checkpoint with the highest F1 score on the validation set. § ADDITIONAL MODEL FINE-TUNING RESULTS See Table <ref> for the full set of results from our fine-tuning experiments. Here we also include metrics on the subset of the test set consisting of the new SDOH factors not included in the training and validation sets, to highlight the differences in models' ability to generalize to unseen factors. Note that since Flan-T5 models between sizes large and XL fine-tuned on either ANLI or our SDOH dataset alone underperformed the same models fine-tuned on the combined dataset, we did not perform these dataset ablations on smaller Flan-T5 models or on T5 below the XXL size. | http://arxiv.org/abs/2310.18431v1 | {
"authors": [
"Adam D. Lelkes",
"Eric Loreaux",
"Tal Schuster",
"Ming-Jun Chen",
"Alvin Rajkomar"
],
"categories": [
"cs.CL"
],
"primary_category": "cs.CL",
"published": "20231027190930",
"title": "SDOH-NLI: a Dataset for Inferring Social Determinants of Health from Clinical Notes"
} |
Despite rapid progress in Visual question answering (VQA), existing datasets and models mainly focus on testing reasoning in 2D. However, it is important that VQA models also understand the 3D structure of visual scenes, for example to support tasks like navigation or manipulation. This includes an understanding of the objects' 3D poses, their parts, and occlusions. In this work, we introduce the task of 3D-aware VQA, which focuses on challenging questions that require compositional reasoning over the 3D structure of visual scenes. We address 3D-aware VQA from both the dataset and the model perspective. First, we introduce Super-CLEVR-3D, a compositional reasoning dataset that contains questions about object parts, their 3D poses, and occlusions. Second, we propose PO3D-VQA, a 3D-aware VQA model that marries two powerful ideas: probabilistic neural symbolic program execution for reasoning and deep neural networks with 3D generative representations of objects for robust visual recognition. Our experimental results show that our model outperforms existing methods significantly, but we still observe a significant performance gap compared to 2D VQA benchmarks, indicating that 3D-aware VQA remains an important open research area. The code is available at <https://github.com/XingruiWang/3D-Aware-VQA>. § INTRODUCTION Visual question answering (VQA) is a challenging task that requires an in-depth understanding of vision and language, as well as multi-modal reasoning. Various benchmarks and models have been proposed to tackle this challenging task, but they mainly focus on 2D questions about objects, attributes, or 2D spatial relationships. However, it is important that VQA models understand the 3D structure of scenes, in order to support tasks like autonomous navigation and manipulation. An inherent property of human vision is that we can naturally answer questions that require a comprehensive understanding of the 3D structure in images. For example, humans can answer the questions shown in <ref>, which ask about the object parts, their 3D poses, and occlusions. However, current VQA models, which often rely on 2D bounding boxes to encode a visual scene <cit.>, struggle to answer such questions reliably (as can be seen from our experiments). We hypothesize this is caused by the lack of understanding of the 3D structure of images. In this work, we introduce the task of 3D-aware VQA, where answering the questions requires compositional reasoning over the 3D structure of the visual scenes. More specifically, we focus on challenging questions that require multi-step reasoning about the object-part hierarchy, the 3D poses of the objects, and the occlusion relationships between objects or parts. We address the challenging 3D-aware VQA task from both the dataset and the model perspective. From the dataset perspective, we introduce Super-CLEVR-3D, which extends the Super-CLEVR dataset <cit.> with 3D-aware questions. Given the visual scenes from Super-CLEVR that contain randomly placed vehicles of various categories, we define a set of 3D-aware reasoning operations and automatically generate 3D questions based on these operations. <ref> shows examples of the images, questions and the underlying 3D operations for the questions.
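To make the flavor of such 3D-aware questions and their reasoning programs concrete, here is a purely hypothetical example; the operator names below are ours for illustration and are not the dataset's actual operation names.

```python
# Hypothetical sketch of a 3D-aware question paired with a reasoning program.
question = "What color is the door of the object that is occluded by the bus?"
program = [
    {"op": "filter_shape",     "args": ["bus"]},   # locate the bus
    {"op": "find_occluded_by", "args": []},        # entities occluded by it
    {"op": "select_part",      "args": ["door"]},  # take the door of that object
    {"op": "query_color",      "args": []},        # ask for the part's color
]
for step in program:
    print(step["op"], step["args"])
```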
From the model perspective, we introduce PO3D-VQA, a 3D-aware VQA model that marries two powerful ideas: probabilistic neural symbolic program execution for reasoning and a deep neural network with 3D generative representations of objects for robust visual scene parsing. Our model first recovers a 3D scene representation from the image and a program from the question, and subsequently executes the program on the 3D scene representation to obtain an answer using a probabilistic reasoning process that takes into account the confidence of predictions from the neural network. We refer to our system as PO3D-VQA, which stands for Parts, Poses, and Occlusions in 3D Visual Question Answering. On Super-CLEVR-3D, we experiment with existing representative models, their variants, and our model PO3D-VQA. The results show that our model outperforms existing methods significantly, leading to an improvement in accuracy of more than 11%, which shows the advantage of the generative 3D scene parser and the probabilistic neural symbolic reasoning process. Moreover, further analysis on questions with different difficulty levels reveals that the improvements of our model are even greater on harder questions with heavy occlusions and small part sizes. Our results indicate that a reliable 3D understanding, together with the modular reasoning procedure, produces a desirable 3D-aware VQA model. In summary, our contributions are as follows. (1) We introduce the challenging task of 3D-aware VQA and propose the Super-CLEVR-3D dataset, where 3D visual understanding of parts, 3D poses, and occlusions is required. (2) We propose PO3D-VQA, a 3D-aware neural modular model that conducts probabilistic reasoning in a step-wise modular procedure based on robust 3D scene parsing. (3) With experiments, we show that 3D-aware knowledge and modular reasoning are crucial for 3D-aware VQA, and suggest that future VQA methods take 3D understanding into account. § RELATED WORK Visual Question Answering (VQA). Rapid progress has been made in VQA <cit.> in both the datasets and the models. To solve the challenging VQA datasets <cit.> with real images, multiple models have been developed, including two-stream feature fusion <cit.> or transformer-based pretraining <cit.>. However, the real datasets are shown to suffer from spurious correlations and biases <cit.>. Alternatively, synthetic datasets like CLEVR <cit.> and Super-CLEVR <cit.> are developed to study the compositional reasoning ability of VQA systems, which are also extended to study other vision-and-language tasks <cit.>. The synthetic datasets promote the development of neural modular methods <cit.>, where the reasoning is done in a modular step-by-step manner. It is shown that the modular methods have nice properties including interpretability, data efficiency <cit.>, better robustness <cit.> and strong performance on synthetic images <cit.>. However, most existing methods rely on region features <cit.> extracted using 2D object detectors <cit.> for image encoding, which is not 3D-aware. We follow the works on the synthetic datasets and enhance the modular methods with 3D understanding. VQA in 3D. Multiple existing works study VQA under the 3D setting, such as SimVQA <cit.>, SQA3D <cit.>, 3DMV-VQA <cit.>, CLEVR-3D <cit.>, ScanQA <cit.>, 3DQA <cit.>, and EmbodiedQA <cit.>, which focus on question answering on 3D visual scenes like real 3D scans <cit.>, simulated 3D environments <cit.>, or multi-view images <cit.>. PTR <cit.> is a synthetic VQA dataset that requires part-based reasoning about physics, analogy and geometry.
Our setting differs from these works because we focus on 3D in the questions instead of 3D in the visual scenes, since our 3D-aware questions explicitly query the 3D information that can be inferred from the 2D input images. 3D scene understanding. One popular approach for scene understanding is to use CLIP features pretrained on large-scale text-image pairs and segment the 2D scene into semantic regions <cit.>. However, these methods lack a 3D understanding of the scene and cannot be used to answer 3D-related questions. Another approach is to adopt category-level 6D pose estimation methods that can locate objects in the image and estimate their 3D configurations. Previous approaches include classification-based methods that extend a Faster R-CNN model for 6D pose estimation <cit.> and compositional models that predict 6D poses with analysis-by-synthesis <cit.>. We also notice the huge progress of 3D vision-language foundation models, which excel in multiple 3D vision-language understanding tasks <cit.>. Still, we focus on compositional reasoning, which brings more interpretability and robustness <cit.>. § DATASET To study 3D-aware VQA, we propose the Super-CLEVR-3D dataset, which contains questions explicitly asking about the 3D object configurations of the image. The images are rendered using scenes from the Super-CLEVR dataset <cit.>, which is a VQA dataset containing synthetic scenes of randomly placed vehicles from 5 categories (car, plane, bicycle, motorbike, bus) with various sub-types (e.g., different types of cars) and attributes (color, material, size). The questions are generated by instantiating question templates based on the image scenes, using a pipeline similar to Super-CLEVR. In Super-CLEVR-3D, three types of 3D-aware questions are introduced: part questions, 3D pose questions, and occlusion questions. In the following, we describe these three types of questions and show the new operations we introduced for our 3D-aware questions about object parts, 3D poses, and occlusions. Examples of the dataset are shown in <ref>. Part questions. While the original Super-CLEVR dataset refers to objects using their holistic names or attributes, objects are complex and have hierarchical parts, as studied in recent works <cit.>. Therefore, we introduce part-based questions, which use parts to identify objects (“which vehicle has red door”) or query about object parts (“what color is the door of the car”). To enable the generation of part-based questions, we introduce two new operations into the reasoning programs: one that finds the objects containing a given part, and one that selects all the parts of a given object. We also modify some existing operations, enabling them to operate on both the object level and the part level. With those reasoning operations, we collect 9 part-based templates and instantiate them with the image scene graph to generate questions. 3D pose questions. These questions ask about the 3D poses of objects (“which direction is the car facing in”), or the pair-wise pose relationships between objects (“which object has vertical direction with the red car”). The pose of an individual object (“facing left”) can be processed in a similar way as attributes like colors, so we extend the existing attribute-related operations to include pose as well. For pair-wise pose relationships between objects, we add three operations to deal with the three types of pose relationships between objects.
For example, one of these operations returns the objects that are in the opposite pose direction to the given object. 17 templates are collected to generate 3D pose questions. Occlusion questions. Occlusion questions ask about the occlusion between entities (objects or parts). Similar to 3D poses, occlusion can also be regarded either as an attribute of an entity (“which object is occluded”), or as a relationship between entities (“which object occludes the car door”). We extend the attribute-related operations, and introduce new operations to handle the pair-wise occlusion relationships: one that filters the entities that are being occluded, one that finds the entities that are occluded by a given entity, and one that finds the entities that are occluding a given entity. Using these operations, 35 templates are collected to generate the occlusion questions. § METHOD In this section, we introduce PO3D-VQA, which is a parse-then-execute modular model for 3D-aware VQA. The overview of our system is shown in <ref>. We first parse the image into a scene graph representation that is aware of 3D information like object parts, 3D poses and occlusion relations, then we parse the question into a reasoning program and execute the program on the derived scene representations in a probabilistic manner. In <ref>, we define the scene representation required; in <ref>, we describe how we parse the image into the scene representation based on a multi-class 6D pose estimation model with non-trivial extensions; in <ref>, we describe how the question is executed on the derived scene representation to predict the answer. §.§ 3D-aware scene representation Given an input image I, we parse it into a 3D-aware scene representation R that contains the objects (O) with attributes (A^o), the parts (P) with attributes (A^p), the hierarchical relationships between objects and parts (H), and the occlusion relationships between them (S). The attributes include the 3D poses and locations of objects or parts, as well as their colors, materials, and sizes. The scene representation R={O, P, A^o, A^p, H, S} is comprehensive and therefore we can directly execute the symbolic reasoning module on this representation without taking into account the image any further. In more detail, objects are represented as a matrix O ∈ℝ^n × N_obj containing the probability scores of each object being a certain instance, where n is the number of objects in the given image and N_obj is the number of all possible object categories in the dataset (vocabulary size of the objects). Similarly, parts are represented as P ∈ℝ^p × N_prt, where p is the number of parts in the image and N_prt is the vocabulary size of the object parts. The object-part hierarchy is represented by a binary matrix H ∈ℝ^n × p, where H_ij=1 if object i contains part j and H_ij=0 otherwise. The attribute matrices A^o∈ℝ^n × N_att and A^p∈ℝ^p × N_att contain the probability scores of each object or part having a certain attribute, as well as the values of the bounding boxes. Here N_att is the number of attributes, including the 3D poses, location coordinates, colors, materials and sizes. Occlusion relationships are represented by S ∈ℝ^(n+p) × n, where each element S_ij represents the score of object (or part) i being occluded by object j.
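A minimal sketch of this scene representation with toy dimensions is given below; the sizes, field names, and numbers are illustrative only, not the paper's exact configuration.

```python
# Minimal sketch of the 3D-aware scene representation R = {O, P, A^o, A^p, H, S}
# defined above, instantiated with toy sizes (2 objects, 3 parts).
import numpy as np

n, p = 2, 3                         # objects and parts in the image
N_obj, N_prt, N_att = 5, 10, 8

R = {
    "O": np.random.rand(n, N_obj),      # object-category scores
    "P": np.random.rand(p, N_prt),      # part-category scores
    "A_obj": np.random.rand(n, N_att),  # object attributes (pose, location, color, ...)
    "A_prt": np.random.rand(p, N_att),  # part attributes
    "H": np.array([[1, 1, 0],           # H[i, j] = 1 iff object i contains part j
                   [0, 0, 1]]),
    "S": np.random.rand(n + p, n),      # S[i, j]: score of entity i occluded by object j
}

# Example query via the object-part hierarchy H: parts belonging to object 0.
parts_of_obj0 = np.nonzero(R["H"][0])[0]
print(parts_of_obj0)  # -> array([0, 1])
```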
§.§ Multi-class 6D Scene Parsing While most existing VQA methods <cit.> encode the image using pretrained object detectors like Faster-RCNN <cit.>, we build our 6D-aware scene parser in a different way, based on the idea of analysis-by-synthesis through inverse rendering <cit.>, which has the following advantages: first, the model prediction is more robust <cit.> as the render-and-compare process can naturally integrate a robust reconstruction loss to avoid distortion through occlusion; second, while the object parts are usually very challenging for Faster-RCNN to detect due to their small size, they can be located much more easily using the 3D object shape, by first finding the object and estimating its 3D pose, and subsequently locating the parts using the 3D object shape (as shown in our experimental evaluation). However, we observe two open challenges for applying existing 6D pose estimators that follow a render-and-compare approach <cit.>: (a) these pose estimators assume that the object class is known, but in Super-CLEVR-3D the scene parser must learn to estimate the object class jointly with the pose; and (b) the scenes in Super-CLEVR-3D are very dense, containing multiple close-by objects that occlude each other. In order to address these two challenges, we introduce several improvements over <cit.> that enable it to be integrated into a 3D-aware VQA model. In the following, we first describe neural meshes <cit.>, which were proposed in prior work for pose estimation of single objects following an analysis-by-synthesis approach. Subsequently, we extend this method to complex scenes with densely located and possibly occluded objects to obtain a coherent scene representation, including object parts and attributes. Preliminaries. Our work builds on and significantly extends Neural Meshes <cit.>, which were introduced for 6D pose estimation through inverse rendering. The task is to jointly estimate the 6D pose (2D location, distance to the camera and 3D pose) of objects in an image. An object category is represented with a category-level mesh <cit.> M_y = {v_n ∈ℝ^3}_n=1^N and a neural texture T_y ∈ℝ^N × c on the surface of the mesh M_y, where c is the dimension of the feature and y is the object category. Given the object 3D pose in camera view α, we can render the neural mesh model O_y = {M_y, T_y} into a feature map with soft rasterization <cit.>: F_y(α) = ℜ(O_y, α). Following prior work in pose estimation <cit.>, we formulate the render-and-compare process as an optimization of the likelihood model: p(F | O_y, α_y, B) = ∏_i ∈ℱ𝒢 p(f_i | O_y, α_y) ∏_i ∈ℬ𝒢 p(f_i' | B), where ℱ𝒢 and ℬ𝒢 are the sets of foreground and background locations on the 2D feature map and f_i is the feature vector of F at location i. Here the foreground and background likelihoods are modeled as Gaussian distributions (a toy version of this foreground-background likelihood is sketched below). To train the feature extractor Φ, the neural texture {T_y} and the background model B jointly, we utilize the EM-type learning strategy as originally introduced for keypoint detection in CoKe <cit.>. Specifically, the feature extractor is trained using stochastic gradient descent while the parameters of the generative model {T_y} and B are trained using momentum update after every gradient step in the feature extractor, which was found to stabilize training convergence. At inference time, the object pose α can be inferred by minimizing the negative log-likelihood with respect to the 3D pose α using gradient descent <cit.>. Multi-object competition with 3D-NMS. We extend Neural Meshes to predict the 6D object pose and class label in complex multi-object scenes.
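Before turning to the multi-object extension, the toy version of the foreground-background likelihood referenced above is sketched here; it is our own simplification (unit-variance Gaussians, random features), not the authors' code.

```python
# Toy version of the render-and-compare likelihood: rendered neural-mesh features
# explain foreground pixels, a single background vector explains the rest, both
# scored under unit-variance Gaussians (log-likelihood up to constants).
import numpy as np

def log_likelihood(feature_map, rendered, fg_mask, background):
    """feature_map, rendered: (H, W, c); fg_mask: (H, W) bool; background: (c,)."""
    fg = -0.5 * np.sum((feature_map[fg_mask] - rendered[fg_mask]) ** 2)
    bg = -0.5 * np.sum((feature_map[~fg_mask] - background) ** 2)
    return fg + bg

H, W, c = 4, 4, 8
rng = np.random.default_rng(0)
F = rng.normal(size=(H, W, c))
rendered = rng.normal(size=(H, W, c))
mask = np.zeros((H, W), dtype=bool)
mask[1:3, 1:3] = True
print(log_likelihood(F, rendered, mask, background=np.zeros(c)))
```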
In particular, we introduce 3D-Non-Maximum-Suppression (3D-NMS) into the maximum likelihood inference process.This introduces a competition between Neural Meshes of different categories in explaining the feature map. In contrast to classical 2D-NMS, our 3D-NMS also takes into account the distance of each object to the camera and hence naturally enables reasoning about occlusions of objects in the scene. We denote the 6D pose as γ = {x, l}, where x={α, β} represents the 3D object pose α and object distance to the camera β, and l is the 2D object location in the feature map.We first detect the 6D poses of each object category independently and apply 2D-NMS such that for each 2D location l' in a neighborhood defined by radius r, the predicted 6D pose {x, l} yields the largest activation:max_x p(F | x, l) s.t.p(F | x, l) > p(F | x, l'), ∀ l' ∈{ l' | 0 < | l' - l | < r}We enable multi-category 6D pose estimation by extending this formulation to a 3D non-maximum suppression (3D-NMS). Using 𝒴 to represent the set of all object categories, we model the category label y from a generative perspective:max_x p(F | x, l, y)s.t. p(F | x, l, y) > p(F | x, l', y), ∀ l' ∈{ l' | 0 < | l' - l | < r}and p(F | x, l, y) > p(F | x, l, y'), ∀ y' ≠ y ∈𝒴 Dense scene parsing with greedy proposal generation. Typically, object detection in complex scenes requires well chosen thresholds and detection hyperparameters. Our render-and-compare approach enables us to avoid tedious hyperparameter tuning by adopting a greedy approach to maximize the model likelihood (<ref>) using a greedy proposal strategy.In particular, we optimize the likelihood greedily by starting from the object proposal that explains away the most parts of the image with highest likelihood, and subsequently update the likelihood of the overlapping proposals taking into account, that at every pixel in the feature map only one object can be visible <cit.>. Formally, given a list of objects proposals {o_i=(O_y,i,α_y,i)}_i=1^k (with predicted category label y and 6D pose α), we first order the object proposals based on their likelihood score s=p(F|o_i,B) such that s_i ≤ s_j for i < j. Based on the ordering, we greedily update the 6D pose α_j and the corresponding proposal likelihood for object o_j by masking out the foreground regions of previous objects o_i with 1 ≤ i ≤ j-1.In this way, we can largely avoid missing close-by objects or duplicated detection.Part and attribute prediction. Given the predicted location and pose of each object, we project the object mesh back onto the image to get the locations for each part. To predict the attributes for the objects and parts, we crop the region containing the object or part from the RGB image, and train an additional CNN classifier using the cropped patches to predict the attributes (color, size, material) and the fine-grained classes (different sub-types of cars) of each patch using a cross-entropy loss. The reason why this additional CNN classifier is needed instead of re-using the features from the 6D pose estimator is that the pose estimation features are learned to be invariant to scale and texture changes, which makes it unsuitable for attribute prediction.Post-filtering.Finally, we post-process the located objects using the fine-grained CNN classifier. We compare the category labels predicted by the 6D pose estimator with the ones predicted by the CNN classifier, and remove the objects for which these two predictions do not agree. 
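A highly simplified sketch (our own toy version, not the authors' implementation) of the category-competitive suppression over proposals described above is given below; the real procedure additionally re-evaluates overlapping proposals with previously explained foreground masked out, rather than simply discarding them.

```python
# Toy sketch of greedy, category-competitive suppression: proposals from all
# categories compete, and within a radius r only the highest-scoring proposal survives.
def greedy_suppress(proposals, r=2.0):
    """proposals: list of dicts with keys 'category', 'loc' (x, y), 'score'."""
    kept = []
    for cand in sorted(proposals, key=lambda q: q["score"], reverse=True):
        close = any(
            (cand["loc"][0] - k["loc"][0]) ** 2 + (cand["loc"][1] - k["loc"][1]) ** 2 < r ** 2
            for k in kept
        )
        if not close:
            kept.append(cand)
    return kept

props = [
    {"category": "car", "loc": (10.0, 12.0), "score": 0.92},
    {"category": "bus", "loc": (10.5, 12.2), "score": 0.85},  # loses the competition to the car
    {"category": "bicycle", "loc": (30.0, 4.0), "score": 0.78},
]
print([q["category"] for q in greedy_suppress(props)])  # ['car', 'bicycle']
```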
This post-filtering step helps with the duplicated detections that cannot be fully resolved with the 3D-NMS. Summary. <ref> provides an overview of our scene parser and <ref> visualizes the intermediate results. With the idea of render-and-compare (shown in the green box of <ref>), the model first computes an activation map for each possible object category (<ref>II). Next, to infer the category for each object, the category-wise competition 3D-NMS is performed (<ref>b) and a post-filtering step is taken to remove mis-detected objects (<ref>c). <ref>d shows the 6D pose estimation results. To predict parts, we project the 3D object mesh back onto the image to locate parts based on the projected objects (<ref>e). In this way, the input image can be parsed into a 3D-aware representation, which is ready for the question reasoning with program execution. §.§ Program execution After the 3D-aware scene representations are predicted for the given image, the question is parsed into a reasoning program, which is then executed on the scene representation to predict the answer. The question parsing follows previous work <cit.>, where an LSTM sequence-to-sequence model is trained to parse the question into its corresponding program. Like P-NSVQA <cit.>, each operation in the program is executed on the scene representation in a probabilistic way. In the following, we describe the execution of the new operations we introduced. The part-related operators are implemented by querying the object-part hierarchy matrix H, so that the object containing a given part and the parts belonging to a given object can be determined. The pose-related operators are based on the estimated 3D pose in the object attributes A^o. For the attribute-style filtering and querying operations regarding pose, the 3D poses are quantized into four directions (left, right, front, back). For the pair-wise pose relationships, the azimuth angle between two objects is used to determine the same/opposite/vertical directions. The occlusion-related operations are implemented by querying the occlusion matrix S. Based on the occlusion scores S_ij, which represent whether entity i is occluded by entity j, we can compute the score of an entity being occluded as ∑_j S_ij, find the entities that occlude a given entity, or find the entities that are occluded by a given entity (a minimal sketch of these occlusion operators is given after the baseline descriptions below). § EXPERIMENTS §.§ Evaluated methods We compare our model with three representative VQA models: FiLM <cit.>, mDETR <cit.>, and PNSVQA <cit.>. Additionally, we introduce a variant of PNSVQA, PNSVQA+Projection, to analyze the benefit of our generative 6D pose estimation approach. FiLM <cit.> Feature-wise Linear Modulation is a representative two-stream feature fusion method. The FiLM model merges the question features extracted with a GRU <cit.> and image features extracted with a CNN, and predicts answers based on the merged features. mDETR <cit.> mDETR is a pretrained text-guided object detector based on transformers. The model is pretrained with 1.3M image and text pairs and shows strong performance when fine-tuned on downstream tasks like referring expression understanding or VQA. PNSVQA <cit.> PNSVQA is a SoTA neural symbolic VQA model. It parses the scene using MaskRCNN <cit.> and an attribute extraction network, then executes the reasoning program on the parsed visual scenes while taking into account the uncertainty of the scene parser.
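Returning to the occlusion operators referenced in the program-execution subsection above, a minimal sketch of how they can be executed on the occlusion score matrix S is shown below; the scores and the threshold are illustrative, not the actual execution code.

```python
# Minimal sketch of probabilistic occlusion operators over the occlusion matrix S
# (rows: entities, i.e. objects and parts; columns: occluding objects).
import numpy as np

S = np.array([
    [0.0, 0.9],   # entity 0 is likely occluded by object 1
    [0.1, 0.0],   # entity 1 is mostly unoccluded
    [0.0, 0.7],   # entity 2 (e.g., a part) occluded by object 1
])

def occlusion_score(S):
    """Score of each entity being occluded by anything: sum_j S_ij."""
    return S.sum(axis=1)

def occluded_by(S, obj, thresh=0.5):
    """Indices of entities occluded by a given object."""
    return np.nonzero(S[:, obj] > thresh)[0]

print(occlusion_score(S))     # [0.9 0.1 0.7]
print(occluded_by(S, obj=1))  # [0 2]
```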
To extend PNSVQA to the 3D questions in Super-CLEVR-3D, we add a regression head in the attribute extraction network to predict the 3D pose for each object; parts are detected in a similar way as objects by predicting 2D bounding boxes; the part-object associations and occlusions are computed using intersection-over-union: a part belongs to an intersected object if the part label matches the object label, otherwise it is occluded by this object. PNSVQA+Projection. Similar to PNSVQA, this model predicts the 6D poses, categories and attributes using MaskRCNN and the attribute extraction network. The difference is that the parts and occlusions are predicted by projecting the 3D object models onto the image using the predicted 6D pose and category (the same way we find parts and occlusions in our model). This model helps us ablate the influence of the two components in our model, 6D pose prediction by render-and-compare, and part/occlusion detection with mesh projection. §.§ Experiment setup Dataset. Our Super-CLEVR-3D dataset shares the same visual scenes with the Super-CLEVR dataset. We re-render the images with more annotations recorded (camera parameters, part annotations, occlusion maps). The dataset splits follow the Super-CLEVR dataset, where we have 20k images for training, 5k for validation, and 5k for testing. For question generation, we create 9 templates for part questions, 17 templates for pose questions, and 35 templates for occlusion questions (with and without parts). For each of the three types, 8 to 10 questions are generated for each image by randomly sampling the templates. We ensure that the questions are not ill-posed and cannot be answered by taking shortcuts, and that the questions contain no redundant reasoning steps, following the no-redundancy setting in <cit.>. More details, including the list of question templates, can be found in the Appendix. Implementation details. We train the 6D pose estimator and the CNN attribute classifier separately. We train the 6D pose estimator (including the contrastive feature backbone and the neural mesh models for each of the 5 classes) for 15k iterations with batch size 15, which takes around 2 hours on an NVIDIA RTX A5000 for each class. The attribute classifier, which is a ResNet50, is shared for objects and parts. It is trained for 100 epochs with batch size 64. During inference, it takes 22s for 6D pose estimation and 10s for object mesh projection for all the objects in one image. During inference of the 6D pose estimator, we assume the theta is 0. During 3D NMS filtering, we choose the radius r as 2, and we also filter the object proposals with a threshold of 15 on the score map. §.§ Quantitative Results We trained our model and baselines on Super-CLEVR-3D's training split, reporting answer accuracies on the test split in <ref>. Accuracies for each question type are detailed separately. Comparison with baselines. First, among all the baseline methods, the neural symbolic method PNSVQA performs the best (64.4% accuracy), outperforming the end-to-end methods mDETR and FiLM by a large margin (>8%). This shows the advantage of the step-wise modular reasoning procedure, which agrees with the findings in prior works that the modular methods excel on simulated benchmarks that require long-trace reasoning. Second, our model achieves 75.6% average accuracy, which significantly outperforms all the evaluated models. In particular, comparing our PO3D-VQA with its 2D counterpart PNSVQA, we see that the injection of 3D knowledge brings a large performance boost of 11%, suggesting the importance of 3D understanding.
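For reference, the intersection-based part-object association used by the extended 2D baseline (described in the Evaluated methods section above) can be sketched as follows; the box format and the label-matching rule are our illustrative assumptions rather than the exact baseline code.

```python
# Toy sketch of the 2D association rule: a detected part is assigned to an
# intersecting object box if the labels match, otherwise it is treated as
# occluded by that object.
def box_intersects(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def associate_part(part_label, part_box, objects):
    """objects: list of (object_label, object_box). Returns (relation, object index)."""
    for idx, (obj_label, obj_box) in enumerate(objects):
        if box_intersects(part_box, obj_box):
            if part_label.startswith(obj_label):      # e.g. "car_door" belongs to "car"
                return ("belongs_to", idx)
            return ("occluded_by", idx)
    return ("unmatched", None)

objects = [("car", (0, 0, 50, 30)), ("bus", (60, 0, 120, 40))]
print(associate_part("car_door", (10, 5, 20, 15), objects))    # ('belongs_to', 0)
print(associate_part("bus_window", (40, 5, 55, 15), objects))  # ('occluded_by', 0)
```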
Comparison with PNSVQA variants. By analyzing the results of the PNSVQA variants (PNSVQA, PNSVQA+Projection, and our PO3D-VQA), we show (a) the benefit of estimating object 3D poses using our analysis-by-synthesis method over regression and (b) the benefit of object-part structure knowledge. First, by detecting parts using 3D model projection, PNSVQA+Projection improves the PNSVQA results by 4%, which indicates that locating parts based on objects using the object-part structure knowledge is beneficial. Second, by estimating object 6D poses with our generative render-and-compare method, our PO3D-VQA outperforms PNSVQA+Projection by 7% (from 68.2% to 75.6%), showing the advantage of our render-and-compare model. Moreover, looking at the per-type results, we find that the improvement of our PO3D-VQA is most significant on the part-related questions (21% improvement over PNSVQA) and part-with-occlusion questions (14%), while the accuracy on pose-related questions does not improve. The reason is that part and occlusion predictions require precise pose predictions for accurate mesh projection, while the pose questions only require a rough pose to determine the facing direction. §.§ Analysis and discussions To further analyze the advantage of PO3D-VQA over the other PNSVQA variants, we compare the models on questions of different difficulty levels. It is shown that the benefit of our model is most significant on hard questions. In <ref>, we plot the relative accuracy drop [Relative accuracy drop means the ratio of the absolute accuracy drop and the original accuracy. For example, if a model's accuracy drops from 50% to 45%, its relative accuracy drop is 10%.] of each model on questions with different occlusion ratios and questions with different part sizes. Questions with different occlusion ratios. We sort pose-related questions into different sub-groups based on their occlusion ratios and evaluate the models on each of the sub-groups. The occlusion ratio r of a question is the minimum of the occlusion ratios of all the objects in its reasoning trace. We choose r from 0% to 30%, in increments of 5%. The results are shown in <ref> (a). Our PO3D-VQA is much more robust to occlusions compared to the other two methods: while the performances of all three models decrease as the occlusion ratio increases, the relative drop of ours is much smaller than that of the others. The results show that our render-and-compare scene parser is more robust to heavy occlusions compared with the discriminative methods. Questions with different part sizes. Questions about small parts are harder than the ones about larger parts. We sort the questions into different part size intervals (s, t), where the largest part that the question refers to has an area (number of pixels occupied) larger than s and smaller than t. We compare the models on the part questions and the part+occlusion questions with different part sizes in <ref> (b) and (c). In (b), the accuracy drop of PO3D-VQA is smaller than that of PNSVQA+Projection and PNSVQA when parts get smaller. In (c), PNSVQA+Projection is slightly better than our model, and they are both better than the original PNSVQA. In summary, by sorting questions into different difficulty levels based on occlusion ratios and part sizes, we show the advantage of our PO3D-VQA on harder questions, indicating that our model is robust to occlusions and small part sizes. Qualitative results. <ref> shows examples of predictions for our model and the PNSVQA variants. In (a), the question asks about occlusion, but with a slight error in the pose prediction, PNSVQA+Projection misses the occluded bus and predicts the wrong answer, while our model is correct with accurate pose.
In (b), the question refers to the heavily occluded minivan that is difficult to detect, but our model gets the correct prediction thanks to its robustness to occlusions. Limitations and failure cases. Due to the difficulties of collecting real images with compositional scenes and 3D annotations, our work is currently limited by its synthetic nature. For PO3D-VQA, it sometimes fails to detect multiple objects if they are from the same category and heavily overlap (see Appendix D for more visualizations). 3D NMS can effectively improve the dense scene parsing results when objects are from different categories, but conceptually it is limited when objects are from the same category. However, 6D pose estimation in dense scenes is a challenging problem, whereas many current works on 6D pose estimation are still focusing on simple scenes with single objects <cit.>. § FURTHER DISCUSSION In this section, we discuss two meaningful extensions of our work: the incorporation of z-direction questions and the application of our model to real-world images. Z-direction questions. While the proposed Super-CLEVR-3D dataset has been designed with 3D-aware questions, all objects within it are placed on the same surface. Introducing variability in the z direction can further enrich our dataset with more comprehensive 3D spatial relationships. We consider a scenario where objects of the aeroplane category are placed at different elevations, introducing the z dimension into the spatial relationships (see Fig. <ref>). This allowed us to formulate questions that probe the model's understanding of height relationships and depth perception. We create a subset containing 100 images and 379 questions and test our PO3D-VQA model directly on it without retraining the 6D parser. On this dataset, our PO3D-VQA model achieves 90.33% accuracy on height relationship questions and 78.89% on depth-related questions, suggesting that our model can successfully handle questions about height. As the baseline models only use the bounding box to determine the spatial relationship between objects, they are not able to determine the height relationships. Extension to real-world images. While our PO3D-VQA model has demonstrated impressive performance on the synthetic Super-CLEVR-3D dataset, an essential research direction is extending it to real images or other 3D VQA datasets (such as GQA and FE-3DGQA). However, it's not trivial to truly evaluate it on these real-world problems, and a primary challenge is the lack of 3D annotations and the highly articulated categories (like the human body) in these datasets. However, we show that our model can, in principle, work on realistic images. We generate several realistic image samples manually using vehicle objects (e.g. car, bus, bicycle) from ImageNet with 3D annotations (see Fig. <ref>) and real-image backgrounds. In this experiment, the pose estimator is trained on the PASCAL3D+ dataset, and is used to predict the poses of objects from the image before pasting, as shown in (b). The attribute (color) prediction module is trained on Super-CLEVR-3D and the object shapes are predicted by a ResNet trained on ImageNet. Our model can correctly predict answers to questions about the object pose, parts, and occlusions, e.g. “Which object is occluded by the mountain bike”. § CONCLUSION In this work, we study the task of 3D-aware VQA. We propose the Super-CLEVR-3D dataset containing questions explicitly querying 3D understanding, including object parts, 3D poses, and occlusions.
To address the task, a 3D-aware neural symbolic model, PO3D-VQA, is proposed, which enhances the probabilistic symbolic model with a robust 3D scene parser based on analysis-by-synthesis. With the merits of accurate 3D scene parsing and symbolic execution, our model outperforms existing methods by a large margin. Further analysis shows that the improvements are even larger on harder questions. With the dataset, the model, and the experiments, we highlight the benefit of symbolic execution and the importance of 3D understanding for 3D-aware VQA. § ACKNOWLEDGEMENTS We thank the anonymous reviewers for their valuable comments. We thank Qing Liu, Chenxi Liu, Elias Stengel-Eskin, and Benjamin Van Durme for the helpful discussions on an early version of the project. This work is supported by the Office of Naval Research with grants N00014-23-1-2641, N00014-21-1-2812. A. Kortylewski acknowledges support via his Emmy Noether Research Group funded by the German Science Foundation (DFG) under Grant No. 468670075. § DATASET DETAILS §.§ Part list In Super-CLEVR-3D, the parts of each object are listed in Tab. <ref>. §.§ Question templates Part Questions. We collect 9 part-based templates when generating the part-based questions, as shown in Tab. <ref>. In the table, the attribute placeholder denotes one attribute among shape, material, color, or size to be queried, and each object placeholder denotes one object to be filtered with a combination of shape, material, color, and size. Different from the pose and occlusion questions, we don't query the size of the object. 3D Pose questions. We design 17 3D pose-based templates for question generation (as shown in Table <ref>). The 17 templates consist of: 1 template querying the pose; 4 templates querying shape, material, color, or size, where the pose is among the filtering conditions; and 12 templates querying shape, material, color, or size, where a pose relationship is the filtering condition. Occlusion Questions. There are 35 templates for occlusion question generation, as shown in Table <ref>, covering occlusion of objects and occlusion of parts. The occlusion of objects consists of occlusion status and occlusion relationship. For the occlusion status of the object, there are 4 templates to query the shape, color, material, and size respectively. There are 2 occlusion relationships of objects (occluded and occluding), and each of them has 4 templates. Similarly, we then create templates about occlusion status and occlusion relationship for the parts. The only difference between object and part is that the parts only have 3 attributes to be queried: shape (name), material and color. §.§ Statistics As a result, we generate a total of 314,988 part questions, 314,986 pose questions, 228,397 occlusion questions, and 314,988 occlusion questions with parts. In Fig. <ref>, we show the distributions of all attributes of objects, including categories, colors, sizes, and materials. § IMPLEMENTATION DETAILS FOR THE BASELINES The FiLM and mDETR baselines are trained with default settings as in the official implementations. FiLM is trained for 100k iterations with batch size 256. mDETR is trained for 30 epochs with batch size 64 using 2 GPUs for both the grounding stage and the answer classification stage. For P-NSVQA, we first train a MaskRCNN for 30k iterations with batch size 16 to detect the objects and parts, then train the attribute extraction model (using Res50 backbone) for 100 epochs with batch size 64.
Different fully connected(FC) layers are used for a different type of question: the part questions and occlusion questions have 4 FC layers for the shape, material, color, and size classification (as the parts also have size annotations in the dataset when generating scene files, but they are meaningless in the question answering). The pose question includes pose prediction of an object, so we add a new FC layer with 1 output dimension to predict the rotations, followed by an MSE loss during training. For different types of questions (part, pose and occlusion), the MaskRCNN and attribute extraction model are trained separately.In the PNSVQA+Projection baseline, we first train a MaskRCNN to detect all of the objects and predict their 3D pose (azimuth, elevation and theta) without category labels in the scene. This MaskRCNN is trained with batch size 8 and iteration 15000. We use an SGD optimizer with a learning rate of 0.02, momentum of 0.9 and weight decay 0.0001. Then, we use the same setting as ourto train a CNN to classify the attributes of objects and parts.§ DETAILED RESULTS OF ANALYSISAs an extension for section 5.4 in main paper, here we include the numerical value of accuracy and drop for the pose, part, occlusion + part question with reference to occlusion ratio or part size. The result is shown in Tab. <ref>, Tab. <ref> and Tab. <ref>. § FAILURE CASESExamples of failure cases of our PO3D-VQA, as described in Section 5.4 in main paper. In (a) and (b), PO3D-VQA misses the bicycle behind when two bicycles have a heavy overlap, the same for the two motorbikes in (c) and (d). | http://arxiv.org/abs/2310.17914v1 | {
"authors": [
"Xingrui Wang",
"Wufei Ma",
"Zhuowan Li",
"Adam Kortylewski",
"Alan Yuille"
],
"categories": [
"cs.CV",
"cs.CL"
],
"primary_category": "cs.CV",
"published": "20231027061530",
"title": "3D-Aware Visual Question Answering about Parts, Poses and Occlusions"
} |
Habib Slim, Xiang Li, Yuchen Li, Mahmoud Ahmed, Mohamed Ayman, Ujjwal Upadhyay,Ahmed Abdelreheem, Arpit Prajapati, Suhail Pothigara, Peter Wonka, Senior Member, IEEE, and Mohamed Elhoseiny, Senior Member, IEEECorresponding authors: H. Slim and M Elhoseiny with the Department of Computer Science, KAUST, Thuwal, Saudi Arabia.E-mail: [email protected]; [email protected] A. Prajapati, S. Pothigara are with Polynine, San Francisco, California. X. Li, Y. Li, M. Ahmed, M. Ayman, U. Upadhyay, A. Abdelreheem, P. Wonka are with the Department of Computer Science, KAUST, Thuwal, Saudi Arabia.January 14, 2024 ==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================In this paper, we rigorously prove the existence of self-similar converging shock wave solutions for the non-isentropic Euler equations for γ∈ (1,3]. These solutions are analytic away from the shock interface before collapse, and the shock wave reaches the origin at the time of collapse. The region behind the shock undergoes a sonic degeneracy, which causes numerous difficulties for smoothness of the flow and the analytic construction of the solution. The proof is based oncontinuity arguments, nonlinear invariances, and barrier functions. § INTRODUCTIONThe convergingshock wave problem is a classical hydrodynamical problem in gas dynamics, where a spherical shock originates frominfinity or a large radius (for example, by a spherical piston) in a spherically symmetric medium and propagates towards the center of symmetry, becoming stronger as it approaches the origin. In finite time, the spherical shock collapses at the center. The problem was first discussed by Guderley in his seminal work <cit.>(see also Landau <cit.> and Stanyukovich <cit.>).Due to a wide range of applications such as detonation, laser fusion and chemical reactions, the theory of converging shocks has attracted a lot of attention in the mathematics and physics communities over several decades <cit.>, andis still an active area of research <cit.>. In addition, imploding shock wavesare frequently used as a test problem in scientific computing and algorithms for compressible flows <cit.>. A rigorous analysis, therefore, is not only of mathematical interest but also of practical importance as it lays out foundational evidence in support of these applications. It has been long known since Guderley that for an inviscid perfect gas, only a particular choice of similarity exponent would lead to a converging self-similar radially symmetric shock wave. Despite many works <cit.> regarding the numerical value of such a similarity exponent and the corresponding self-similar solutions based on phase portraits and numerics, a rigorous construction of self-similar converging shock wave solutions that are smooth away from the shock interface has remained elusive. 
In this paper, we give a rigorous construction of self-similar converging shock wave solutions described by the non-isentropic compressible Euler equations for an ideal perfect gas. The Euler system for compressible gas flows in radial symmetry is given by the system of PDEs ρ_t+1/r^m(r^mρ u)_r =0,(ρ u)_t +1/r^m(r^m(ρ u^2))_r + p_r=0, [ρ(e +u^22)]_t + 1/r^m[r^mρ u( e +u^22+pρ)]_r=0,where ρ = ρ(t,r)≥ 0 is the density, u=u(r,t) is the radial fluid velocity, p(t,r)≥ 0 is the pressure, and e(t,r) is the specific internal energy. Here (t,r)∈ℝ×ℝ_+ and m=1,2 distinguishes flows with cylindrical or spherical symmetry. The equations in (<ref>)stand for the conservation of mass,momentum, andenergy respectively.We consider an ideal perfect gas whose equation of state is given by p = (γ-1)ρ e = (γ-1)c_vρθ,where γ>1 and c_v are positive constants. The specific entropy S is related topρ^-γ = Constant·exp(S/c_v). By the conservation laws (<ref>), the entropy S remains constant along particle trajectories in smooth regions of the flow:S_t+uS_r=0.The sound speed is given by c = √(γ pρ).By taking u, ρ and c to be the main unknowns, the system (<ref>) takes the form away from vacuum ρ_t+(ρ u)_r+mρ ur =0, u_t +uu_r+1/γρ(ρ c^2)_r=0, c_t+uc_r+γ-1/2c(u_r+mu/r)=0. The system (<ref>)–(<ref>) admits a three-parameter family of invariant scalings: the scaling transformation ρ(t,r) →ν^κρ(t/ν^λ,r/ν), u(t,r) →ν^1-λ u(t/ν^λ,r/ν), c(t,r)→ν^1-λc(t/ν^λ,r/ν),for ν>0,λ>0,κ∈ℝ,leaves the system invariant.This scaling symmetry is intimately connected to the existence of self-similar solutions. Self-similarity is an important concept in hydrodynamics due to its universal nature and the possibility that self-similar solutions areattractors for different physical phenomena in fluid and gas dynamics<cit.>. In the physics literature<cit.>, two kinds of self-similar solutions have been discussed:Type I if all self-similar parameters are completely determined from a dimensional analysis and Type II otherwise. Converging self-similar shock waves emerge as Type II solutions as the speed of collapse, which is a free parameter, is determined only a posteriori through the regularity requirement of solutions. To analyze the converging shock wave problem, inspired by the scaling symmetry (<ref>), we introduce the similarity variable[This is consistent with some of the literature, for instance by Morawetz <cit.> and Lazarus <cit.>, while other authors use the equivalent similarity variable y = r/|t|^1/λ (see <cit.>).] x = t/r^λ,and the ansatzu(t,r) = -r/λ tV(x) = -r^1-λ/λV(x)/x,c(t,r) = -r/λ tC(x) = -r^1-λ/λC(x)/x, ρ(t,r) = r^κR(x)where λ>1 and κ are free parameters. This self-similar ansatz applied to (<ref>) in any region where the flow is smooth leads to an algebraic relation between V, C and R: R(x)^q+1-γ(C(x)x)^2|1+V(x)|^q ≡constant,where q = 2(λ-1)m+1.Therefore by plugging (<ref>)–(<ref>) to the Euler system(<ref>)–(<ref>) and using (<ref>), we obtain the system of ODEs for two unknowns V(x), C(x): d Vd x = -1/λ xG(V(x),C(x);γ,z)/D(V(x),C(x)) and d Cd x = -1/λ xF(V(x),C(x);γ,z)/D(V(x),C(x)),whereD(V,C)=(1+V)^2 - C^2,G(V,C;γ,z)= C^2[(m+1)V+2mz]-V(1+V)(λ+V), F(V,C;γ,z)= C{C^2[1+mz/(1+V)]- a_1(1+V)^2+a_2(1+V)-a_3},andz = λ -1 mγ,a_1 = 1+m(γ-1)2,a_2 =m(γ-1)+m zγ (γ-3)2,a_3 = m zγ (γ-1)2 .The derivation of the ODE system is standard and we have adopted the notation used by Lazarus <cit.>. 
We seek a solution for which the shock converges towards the origin for t<0 along a self-similar pathwhich is described by a constant value of the similarity variable x,x ≡ -1 so that r_shock = (-t)^1/λ,t<0,andthe shock reaches the origin at t=0. Moreover, the flows on either side of the shock are assumed to be similarity flows with the same values of γ, λ, and κ in (<ref>)-(<ref>). Under this assumption, we still require that the jump in the similarity variables is consistent with the standard Rankine-Hugoniot jump conditions across the shock. Let the subscript 0 and 1 denote evaluation immediately ahead of and behind the shock. The Rankine-Hugoniot conditions and Lax entropy condition, reformulated in the self-similar variables, are 1+V_1 = γ-1/γ+1(1+V_0) + 2C_0^2/(γ+1)(1+V_0), C_1^2= C_0^2 + γ-1/2[(1+V_0^2)-(1+V_1)^2], R_1(1+V_1)= R_0(1+V_0), C_0^2 <(1+V_0)^2.We assume that the fluid ahead of the shock is at rest and at a constant density and pressure. Then, by (<ref>), we have κ = 0 and R(x) is a constant. For convenience, we letR(x)≡ 1for -∞<x<-1.Also, by (<ref>), the sound speed c is also constant ahead of the shock. As we assume λ>1, it implies that C must vanish identically there. By the assumption that the fluid is at rest before the shock, (<ref>) implies that V also must vanish identically there. Therefore, we haveV(x) = C(x) ≡ 0for -∞<x<-1so that (V_0,C_0,R_0)=(0,0,1). Obviously, (<ref>) is satisfied. Then, applying (<ref>), we get V_1= -2/γ+1, C_1= √(2γ(γ-1))/γ+1, R_1 = γ+1/γ-1. As we are interested in solutions such that u, c and ρ are well-behaved at any location r>0 at t=0, we seek solutionssuch that u(0,r) = -r^1-λ/λlim_x→ 0V(x)/x<∞, c(0,r)=-r^1-λ/λlim_x→ 0C(x)/x<∞, ρ(0,r)= R(0).In particular, we require V(0)=C(0)=0.The converging shock wave problem is, for given adiabatic index γ, to find a smooth solution to (<ref>) for -1<x<0 connecting the shock interface represented by (V_1,C_1) at x=-1 to the ultimate collapsed state (0,0) at x=0. Together with the pre-shock state (<ref>), such a piecewise smooth solution to (<ref>) gives rise to a collapsing shock solution to the Euler system(<ref>)–(<ref>). A key difficulty in solving the collapsing shock wave problem is that singularities of the dynamical system (<ref>) may occur when D=0 or x=0. The moveable singularity D=0 is associated with the so-called sonic singularity (the condition D=0 means exactly that the fluid speed and sound speed coincide), while the singularity at x=0 is a removable singularity which is due to the symmetry assumption. For smooth solutions, if D=0 at some point x=x_sonic, G and F must vanish at x=x_sonic. For our problem, D(V_1,C_1)<0 (cf.(<ref>)) and D(0,0)=1>0 and hence any smooth solution must pass through a sonic point (D=0) at which G=F=0. This triple vanishing property is not satisfied by generic values of λ, but it is expected that there exists a particular value ofallowing smooth passage through the sonic point. The main result of this paper is the existence (and, for a certain range of , uniqueness) of thiswhich yields a converging shock wave solution.(i) Let γ∈(1,3].Then there exists a collapsing shock solution to the non-isentropic Euler equations(<ref>)-(<ref>).(ii) Moreover, suppose γ∈(1,5/3]. Then there is a unique blow-up speedsuch that the aforementioned solution exists. 
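Returning to the jump conditions, a quick numerical sanity check (not used anywhere in the argument) confirms that the closed-form post-shock state above satisfies them. In the sketch below, the value γ = 5/3 and the quiescent pre-shock state (V_0, C_0, R_0) = (0, 0, 1) are our illustrative choices.

```python
from math import sqrt, isclose

gamma = 5.0 / 3.0                  # sample adiabatic index (illustrative)
V0, C0, R0 = 0.0, 0.0, 1.0         # quiescent pre-shock state

# Closed-form post-shock state stated above.
V1 = -2.0 / (gamma + 1.0)
C1 = sqrt(2.0 * gamma * (gamma - 1.0)) / (gamma + 1.0)
R1 = (gamma + 1.0) / (gamma - 1.0)

# Rankine-Hugoniot relations in the self-similar variables.
assert isclose(1.0 + V1, (gamma - 1.0) / (gamma + 1.0) * (1.0 + V0)
               + 2.0 * C0**2 / ((gamma + 1.0) * (1.0 + V0)))
assert isclose(C1**2, C0**2 + (gamma - 1.0) / 2.0 * ((1.0 + V0)**2 - (1.0 + V1)**2))
assert isclose(R1 * (1.0 + V1), R0 * (1.0 + V0))
# The Lax condition C_0^2 < (1 + V_0)^2 holds trivially for the quiescent state.
print(V1, C1, R1)                  # -0.75  0.559...  4.0
```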
The precise statement of Theorem <ref> will be given in Theorem <ref> after we discuss the basic structure of the phase portrait plane associated with the ODE system (<ref>) and introduce the set of important parameters appearing in our analysis in Section <ref>. We remark that self-similar collapsing shock waves are solutions of unbounded amplitude (cf.(<ref>)) and their continuation to expanding shock solutions are genuine weak solutions to the Euler system(<ref>)–(<ref>), as shown by Jenssen-Tsikkou <cit.>. Before moving forward, we mention some works on compressible Euler flows with a focus on weak solutions and singularities.The study of the compressible Euler equations has a long history and a correspondingly vast literature, much of it focused on the one-dimensional problem. As is well known, a fundamental difficultyin the analysis of the compressible Euler equations stems from the expected formation of singularities in the solutions, a phenomenon known since the time of Riemann and Stokes. For a survey of the literature on the 1D Euler equations, including existence of weak solutions and formation of singularities, we refer to <cit.> and the references therein. Although there is no general theory for the existence of weak solutions for the multi-dimensional problem, in recent years, the existence of weak entropy solutions for the isentropic system under the assumption of spherical symmetry has been established in <cit.> using the vanishing viscosity method from artificial viscosity solutions of certain auxiliary problems. This has been extended to cover more physical, density-dependent viscosities in <cit.>. The weak solutions constructed in these works are based on a finite energy method that allows for discontinuous and unbounded solutions to arise, especially at the origin. Earlier results, <cit.>, gave existence results on gases in an exterior region surrounding a solid ball, and relied on boundedness of solutions. The formation of singularities in the multi-dimensional compressible Euler equations was first rigorously established in <cit.>.To betterunderstand the structure of the singularities, there has been much interest in the study of shock formation in solutions of the multi-dimensional compressible Euler equations. The first rigorous results are those in spherical symmetry of <cit.>, which studies the formation and development of shocks in spherical symmetry for perturbations of constant data for the non-isentropic system. Thework <cit.> on shock formation for irrotational, isentropic, relativistic gases gives a truly multi-dimensional result and sharp understanding of the geometry of the solution at the blow-up time (see also <cit.>). In recent years, there have been further exciting developments onshock formationto allow for non-trivial vorticity and entropy and to remove symmetry assumptions <cit.> while still showing the finite time formation of a singularity with sharp asymptotic behavior on approach to the blowup.Moreover, in <cit.> the authors have established the local-in-time continuation of a shock solution from the first blow-up time for the full, non-isentropic Euler equations; see also a recent work <cit.> for the maximal development problem. 
As well as these shock solutions, other kinds of strong singularity have also been areas of active interest, especially the implosion solutions of Merle–Raphaël–Rodnianski–Szeftel, constructed in <cit.> and whose finite-codimension stability is established in <cit.>.These solutions of the isentropic Euler equations with -law pressure (excluding a countable set of ∈(1,∞)) have been constructed using a self-similar ODE analysis, and the authors must also handle the presence of triple points in the phase plane (the sonic points), through which the solutions must pass smoothly. The existence of these solutions has been extended to cover a wider range ofin <cit.> and to allow for non-radial perturbations in<cit.>. Following these works, the construction of continuous (but not necessarily smooth) implosion solutions to the non-isentropic Euler equations has been achieved in <cit.> using a combination of analytic and numerical techniques. This result also discusses the continuation of the blowup solution past the first blowup time with an expanding shock wave solution. § BASIC STRUCTURE OF PHASE PORTRAIT AND MAIN RESULT In this section, we discuss the basic structure of the phase portrait of the ODE system(<ref>) and the main result of the paper along with the methodology.In our analysis, we will primarily make use of the following ODE associated with the system (<ref>)dCdV = F(V,C;γ,z)G(V,C;γ,z), which makesthe phase portrait analysis more accessible in the (V,C) plane. We denote the initial data point byP_1=(V_1,C_1)in the (V,C) plane.Forγ∈(1,3], the initial data points V_1() and C_1() given in (<ref>) and (<ref>) are monotone increasing with respect to γ. The result follows from direct computation:V_1'(γ) = 2(γ+1)^2>0, C_1'(γ) = γ(γ+1)^2√(2γ(γ-1))>0.§.§ Roots of F, G and D In this subsection, we summarize the critical points of the dynamical system (<ref>) and some fundamental monotonicity properties with respect to the parameters z and γ.Triple points F=G=D=0. The triple points at which F=G=D=0 are crucial to understanding the dynamics of solutions to the ODE system (<ref>). On the one hand, at these points, generic trajectories will suffer a loss of regularity. On the other hand, at least one such point must be passed through for a trajectory to reach from the initial data P_1 to the origin. The solutions to F=G=D=0 are P_2=(-1,0), P_6= (V_6,C_6) = (-1+(γ-2)z-w/2, 1+V_6), P_7=(V_7,C_7) = (-1+(γ-2)z-w/2,-1-V_7), P_8=(V_8,C_8) = (-1+(γ-2)z+w/2, 1+V_8), P_9=(V_9,C_9) = (-1+(γ-2)z+w/2, -1-V_9),wherew(z) = +√(1-2(γ+2)z+(γ-2)^2z^2). Since w≥ 0, we will always have V_8 ≥ V_6 and C_8≥ C_6. (<ref>) and (<ref>) imply C_1>1+V_1 immediately behind the shock, while the condition(<ref>) implies C(0)<1+V(0). Since we require that u and c are all well behaved at any location away from the origin, the trajectory must at least continuously pass through the line D(V,C)=0 at some x_0∈(-1,0). Comparing this with the ODE system (<ref>), we see that we must have F(x_0)=G(x_0)=0 to ensure continuity. Thus, the trajectory can only pass through the sonic line D =0 at P_6 or P_8. As a consequence, w(z) given by (<ref>) must be a real number, which gives us the constraint z ≤ z_M(γ) = (√(γ) +√(2))^-2. Recall that z = λ-1/mγ from (<ref>). That is, equivalently, we must haveλ≤λ_M = mγ z_M+1 = mγ/(√(γ) +√(2))^2+1.We will henceforth restrict the range of parameters λ and z to (1,λ_M] and (0,z_M], respectively, and will useboth z and λ as convenient. 
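The triple points and the threshold z_M can be tabulated directly from the formulas above. The following sketch is illustrative only (the sample γ and the grid of z values are ours); it computes P_6 and P_8 for several z ∈ (0, z_M] and exhibits the ordering V_6 ≤ V_8, C_6 ≤ C_8, with coincidence as z → z_M.

```python
from math import sqrt

def triple_points(gamma, z):
    """P_6 and P_8 from the closed-form expressions above (C_* = 1 + V_*)."""
    disc = 1.0 - 2.0 * (gamma + 2.0) * z + (gamma - 2.0)**2 * z**2
    w = sqrt(max(disc, 0.0))             # guard against round-off at z = z_M
    V6 = (-1.0 + (gamma - 2.0) * z - w) / 2.0
    V8 = (-1.0 + (gamma - 2.0) * z + w) / 2.0
    return (V6, 1.0 + V6), (V8, 1.0 + V8)

gamma = 1.4                              # sample value (illustrative)
zM = 1.0 / (sqrt(gamma) + sqrt(2.0))**2  # largest admissible z
for frac in (0.25, 0.5, 0.75, 1.0):
    z = frac * zM
    (V6, C6), (V8, C8) = triple_points(gamma, z)
    assert V6 <= V8 and C6 <= C8
    print(f"z = {z:.4f}:  P6 = ({V6:.4f}, {C6:.4f}),  P8 = ({V8:.4f}, {C8:.4f})")
# At z = zM the two points coincide, consistent with w(z_M) = 0.
```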
For z∈(0,z_M], the function w(z) defined by (<ref>) is a decreasing function in z and satisfies 0≤ w(z)<1, where w(z_M)=0 and w(z)→ 1 when z→ 0. The following lemma establishes the monotonicity properties ofthe locations of P_6 and P_8 with respect to z. For any γ∈ (1,3] , V_6 and C_6 are strictly increasing, and V_8 and C_8 are strictly decreasing with respect toz↗ z_M. For any fixed γ∈ (1,3], we write V_6 = V_6(z) and so V_6'(z)= dV_6/dz.We compute 2V_6'(z)= γ-2+ (γ+2)-(γ-2)^2z √(1-2(γ+2)z+(γ-2)^2z^2).For anyz∈(0, z_M], we have w(z)=√(1-2(γ+2)z+(γ-2)^2z^2)< 1 and (γ-2)^2z<1<γ+2. Hence2V_6'(z) ≥γ-2+ (γ +2 -(γ-2)^2z)=2γ -(γ-2)^2z>0.Arguing similarly for V_8, we have2V_8'(z) = γ-2- (γ+2)-(γ-2)^2z √(1-2(γ+2)z+(γ-2)^2z^2)< γ-2-(γ+2-(γ-2)^2z) <0.Since C_6 = 1+V_6 and C_8 = 1+V_8, the desired results follow. From the definitions of P_6 and P_8 in (<ref>) and (<ref>) and z_M in (<ref>), we have V_6(z_M) = V_8(z_M) = -√(2)/√(γ)+√(2), C_6(z_M) = C_8(z_M) = √(γ)/√(γ)+√(2).Therefore, by Lemma <ref>, we have that -1≤ V_6 ≤-√(2)/√(γ)+√(2)≤ V_8≤ 0, 0≤ C_6≤√(γ)/√(γ)+√(2)≤ C_8≤ 1.Double roots F=G=0. In addition to the triple points, there are also a number of stationary points of the ODE system (<ref>) at which F=G=0 but D≠0.To simplify notation, we defineH(V) = √(V(1+V)(λ+V)(m+1)V+2mz).Then, the double points of the system may be directly computed as in the following lemma (cf. <cit.>). The solutions to F=G=0 and D≠ 0 are P_0 = (0,0), P_3=(V_3,C_3) = (-λ, 0), P_4 =(V_4,C_4) = (-2λ/γ+1+m(γ-1), H(V_4)), P_5 =(V_5,C_5) = (-2λ/γ+1+m(γ-1), -H(V_5)). Since the solution C of(<ref>) must remain positive before the collapse t<0 in order to be physically meaningful, the points P_2, P_3, P_5,P_7 and P_9 do not play a role in the construction of the solution before the collapse.We observe that D, F, and G at P_1 satisfy the following sign conditions: D(V_1,C_1) <0, F(V_1,C_1)>0, G(V_1,C_1)<0. Further details on the signs of D, F, and G can be found in<cit.>. By (<ref>), P_1 will be always located above the sonic line D(V,C)=0 as inFigure 1.§.§ Main result and methodologyMany authors have claimed that, for each γ∈(1,3], there exists a λ_std or z_std such that the corresponding trajectory exists from P_1 to the origin P_0, analytically passes through the triple point P_6 or P_8 and is monotone decreasing to the origin, therefore describing a collapsing shock solution of the compressible Euler equations (see, for example, <cit.>). The goal of this paper is to prove rigorously the existence of such a z_std and the corresponding analytic solution to (<ref>). The self-similar solutions that weconstruct are built by concatenating two trajectories in the phase-plane in such a way that we obtain an analytic solution of the ODE (<ref>). * The first trajectory connects P_1 to either P_6 or P_8 in the 2nd quadrant of the (V,C)-plane. To ensure the trajectory passes through P_6 or P_8 analytically, we need the trajectory to enter the triple point P_6 or P_8 with a specific slope. * The second trajectory connects either P_6 or P_8 to the origin P_0=(0,0), which is a stable node for (<ref>). Since the first trajectory passes through P_6 or P_8 analytically, this second one is uniquely determined by the slope at P_6 or P_8.Directly solving the initial value problem for(<ref>) posescomplexity due to the non-linearity of F(V,C;γ,z) and G(V,C;γ,z) and the two parameters γ and z. 
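One way to see this difficulty concretely is to trace the trajectory of (<ref>) emanating from P_1 for a few trial values of z and record where it meets the sonic line. The sketch below is only an illustration: the reparametrization by a pseudo-time s with (dV/ds, dC/ds) = (−G, −F), which follows the same solution curves as dC/dV = F/G while remaining regular where G vanishes, the sample γ, and the trial values of z are all our own devices. For a generic z, either the trajectory fails to reach the sonic line in the allotted pseudo-time or it crosses D = 0 away from P_6 and P_8, as the reported size of |F| + |G| at the crossing indicates; only for the special value z_std do F and G vanish there.

```python
from math import sqrt
from scipy.integrate import solve_ivp

def fields(gamma, z, m=1):
    lam = m * gamma * z + 1.0
    a1 = 1.0 + m * (gamma - 1.0) / 2.0
    a2 = m * (gamma - 1.0) + m * z * gamma * (gamma - 3.0) / 2.0
    a3 = m * z * gamma * (gamma - 1.0) / 2.0
    G = lambda V, C: C**2 * ((m + 1) * V + 2 * m * z) - V * (1 + V) * (lam + V)
    F = lambda V, C: C * (C**2 * (1 + m * z / (1 + V))
                          - a1 * (1 + V)**2 + a2 * (1 + V) - a3)
    return F, G

gamma = 1.4                                        # sample value (illustrative)
V1, C1 = -2 / (gamma + 1), sqrt(2 * gamma * (gamma - 1)) / (gamma + 1)

for z in (0.05, 0.10, 0.14):                       # trial blow-up speeds (ours)
    F, G = fields(gamma, z)
    # Solution curves of dC/dV = F/G, traced by the pseudo-time system
    # (dV/ds, dC/ds) = (-G, -F), which stays regular where G vanishes.
    rhs = lambda s, y: [-G(y[0], y[1]), -F(y[0], y[1])]
    sonic = lambda s, y: (1 + y[0])**2 - y[1]**2   # the sonic line D = 0
    sonic.terminal = True
    sol = solve_ivp(rhs, (0.0, 50.0), [V1, C1], events=sonic,
                    rtol=1e-10, atol=1e-12, max_step=0.1)
    V, C = sol.y[0, -1], sol.y[1, -1]
    tag = "meets D = 0" if sol.status == 1 else "no sonic crossing by s = 50"
    print(f"z = {z:.2f}: {tag} at (V, C) = ({V:.4f}, {C:.4f}); "
          f"|F| + |G| there = {abs(F(V, C)) + abs(G(V, C)):.3e}")
```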
One significant challenge in this problem lies in the non-trivial nature of solutions around the triple points P_6 and P_8, which can be entered either along a primary or a secondary direction by solutions of (<ref>). Along the dominant, primary direction, the solutions will be only of finite regularity, and so we require the solutions to connect along the secondary direction to ensure analyticity. This property of analytic connection fails for generic choices of the parameter z, and so the isolated value, z_std, that enables this analytic connection must be carefully constructed. Moreover, it emerges that, for some ranges of γ∈(1,3], the solution emanating from the initial condition P_1 will converge to P_6, while for other , it will converge to P_8. We must therefore understand which of the triple points the solution from P_1 should connect to in order to identify z_std and establish an analytic connection.To address these challenges effectively, we employ barrier functions for a number of purposes (cf.Definition <ref>).First, to exclude connection from P_1 to P_8 (respectively P_6) for small (respectively large) values of . Second, to establish an appropriate interval of candidate values of z containing z_std. Employing the barrier function B_k_M(V)=√(C^2_6(z_M)/V_6(z_M)V), we exclude connections to P_8 for small γ, and we will also exclude connection to P_6 for ≥ 2. In addition, the function B_1(V)=√(-V) is essential for establishing connection from P_1 to P_8 for intermediate values of . This motivates the following definitions. *is defined to be the value such that P_1 lies on the curve C=B_k_M(V) (see(<ref>)). ≈ 1.7. *is defined to be the value such that P_1 lies on the curve C=B_1(V) (see(<ref>)). = 1+√(2).As mentioned above, we also need to limit the window of possible z values for which we may have an analytic connection from P_1 to either P_6 or P_8. This leads us to the following definitions of key values of z. * For any γ∈(1,3], z_M(γ) is defined to be the value such that V_6(z_M)=V_8(z_M), which means P_6 and P_8 are coincident (see (<ref>)). * For any ∈(1,3], z_m() is defined to be the value such that V_6(z_m)=V_1, so that P_6 lies on the vertical line through P_1 (see (<ref>)). * For any γ∈(1,2], z_g(γ) is defined to be the value such that V_4=V_6, which means P_4 and P_6 are coincident (see (<ref>)). * For any γ∈(,], z_1(γ) is defined to be the value such that the curve C=B_1(V) intersects the sonic line at P_8 (see(<ref>)). * For any γ∈(,3], z_2(γ) is defined to be the value such that the curve C=√(-3/2V) intersects the sonic line at P_8 (see (<ref>)). We now state the main result of the paper. (i) For all γ∈ (1, 3], there is a monotone decreasing analytic solution to (<ref>) connecting P_1 to the origin. (ii) For γ∈ (1, ], the solution is unique (in the sense of unique z) and it connects P_1 to the origin via P_6. The value of z_std lies in (z_g, z_M ]. (iii) For γ∈ ( , 2), if such a solution connects through P_6, then z_std∈ (z_g,z_M] and z_std gives the only such connection through P_6. If a solution connects through P_8, then z_std∈ (z_1,z_M]. (iv) For γ∈ [2, 3], any such solution must connect through P_8 witha z_std valuez_std∈ (z_1,z_M] if γ∈[2,] or z_std∈ (z_2,z_M] if γ∈ (,3].To see that these solutions from Theorem <ref> do indeed give solutions of the original self-similar problem (in the x variable) is straightforward. 
Note that, given C(V), one can solve for V(x) (and hence C(x)) via the ODE V'(x)=-1/λ xG(V(x),C(V(x)))/D(V(x),C(V(x))) simply by integrating, away from critical points. As C(V) is an analytic function in V and G and D are both analytic in (V,C) with simple zeros at the triple points P_6 and P_8, repeated application of the chain rule establishes that the solution V(x) remains smooth (indeed, analytic), as it passes through a unique sonic point x_ where (V(x_),C(x_)) is either P_6 or P_8. As G(0,0)=0, it is clear that if, for some x_0∈(x_,0), we hit V(x_0)=C(x_0)=0, then the ODE has a local, unique solution, which is the identically zero solution. But this extends backwards for all x, contradicting the initial data and the sonic time. So the solution cannot hit zero except at x=0.Our strategy for proving the existence of the solutions constructed in Theorem <ref> proceeds in three key stages, inspired by recent mathematical constructions of self-similar gravitational collapse <cit.>, where the authors developed the shooting methods for self-similar non-autonomous ODE systems to connect smoothlytwo behaviors at the center and at the far field through the sonic point. First, in Section <ref>, we construct local, analytic solutions around each of the triple points. That is, for all z∈[z_m,z_M], we construct a local solution around P_6 and we construct a local solution around P_8 for all z∈(0,z_M]. In order to show the local existence of such solutions, we first choose a local branchat the triple points along the secondary direction of (<ref>) with a negative slope c_1<0 (cf.Section <ref>) andderive a formal recurrence relation for the Taylor coefficients of a power series C(V;,z)=∑_k=0^∞ c_k(,z)(V-V_*(,z))^k.Once we have found the recurrence relation for the higher order coefficients, a series of combinatorial estimates and an inductive argument allow us to bound coefficients to all orders and establish the convergence of the series in Theorem <ref>.The second main step of the proof is to show the existence, for each ∈(1,3], of a z_std such that the local analytic solution from either P_6 or P_8, extended backwards in V, connects to P_1. This is achieved in Section <ref> via a continuity argument. We show first that the solution from P_6 for z=z_m always passes below P_1 in the phase space, while there always exists a z∈(0,z_M) such that the solution from P_8 passes above P_1. Then, depending on whether the solution for z=z_M passes above or below P_1, we may apply a continuity argument to either P_6 or P_8 to establish the connection.The third main step in the construction is to prove that the solution connecting P_1 smoothly to either P_6 or P_8 then continues to connect to the origin. In fact, the behavior of connecting to the origin is not limited only to the solution that connects to P_1, but holds for a non-trivial interval of z around z_std, as the origin is an attractive point in the phase plane.A key difficulty is that the solution must connect from inside the second quadrant, else the velocity changes sign before collapse. We cannot, a priori, exclude the possibility that, for some range of z, the solution passes through the C-axis for some positive value of C before converging to the origin from the first quadrant. To show that this does not occur, we apply careful barrier arguments to gain an upper bound on the solution which traps it into a region in the second quadrant in which it must converge to the origin. This notion is made precise in the following definition. 
We say that a differentiable function B(V) is a lower barrier for C(V) on (V_a, V_b) if B(V) < C(V) on (V_a, V_b), and a upper barrier ifB(V) > C(V) on (V_a, V_b).In practice, C(V) will be the solution of (<ref>) and B(V) is a specific differentiable function where we design B such that at one end point, C(V) is greater or less than B(V), and show that the solution C(V) stays above or below B(V) as V moves to the other end point. The latter part will be achieved by nonlinear invariances of (<ref>). Suppose we intend to show that B is a lower barrier for C and that C(V_a)>B(V_a) (respectively C(V_b)>B(V_b)). We assume for a contradiction that there exists V∈(V_a,V_b) such that C(V) = B(V). By simple continuity and compactness arguments, there exists a minimal (respectively maximal) such V, from which we deduce that, at V, we must have d/dV(C-B)|_V≤ 0, respectivelyd/dV(C-B)|_V≥ 0. To derive a contradiction, we therefore prove that, whenever C(V)=B(V), then we must have d/dV(C-B)|_V> 0, respectivelyd/dV(C-B)|_V< 0. As the self-similar blowup speed z varies, the associated solutions from the triple points P_6 and P_8 efficiently explore a large portion of the phase space, with the solutions from P_8 in particular moving far up in the phase plane. In order, therefore, to apply the precise barrier arguments that will force the solution to the right of the triple point to converge to the origin, we in fact require better control on the range of z (depending on ) for which the solution to the left connects to P_1, else we lose effective control on the trajectory to the right and cannot exclude the possibility that the trajectory passes through V=0 away from the origin. This improved control on z also allows us to make more quantitative and qualitative statements concerning the behavior of the imploding shock solution, especially for ∈(1,_⋆]. To this end, we first limit the range offor which the connecting solution may come from P_6 or P_8. This is achieved in Sections <ref>–<ref>, in which we employ our first barrier arguments to the left in order to show that for ∈(1,γ_⋆], the solution must connect to P_6, and for ∈[2,3], it must connect to P_8.Following this, in Section <ref>, we improve the range of z for which the solution from P_6 (given ∈(1,2]) may connect to P_1, tightening the range z∈[z_m,z_M] to the much sharper z∈(z_g,z_M] by showing that the trajectory is bounded from above, for this range of z, by the solution to a simpler ODE that allows for explicit integration and estimation. Thisimprovement ensuring z_std> z_g is essential, as the structure of the phase portrait changes fundamentally as P_4 crosses P_6 at z=z_g.We are then able also to show in Lemma <ref> that, for ∈(1,2], there is at most one value of z∈(z_g,z_M] for which the solution from P_6 may connect to P_1 by studying the derivative ∂/∂ z(dC/dV).The next section, Section <ref>, contains the analogous sharpening of the possible range of z for solutions from P_8. In it, we show that, for ∈(_⋆,_1], solutions with z∈(0,z_1] cannot connect to P_1 by employing the barrier B_1(V)=√(-V), while for ∈(_1,3], solutionswith z∈(0,z_2] cannot connect to P_1 by employing the barrier B_3/2(V)=√(-3/2V)(cf. the definitions of z_1 and z_2 above).Having established these tighter ranges of z, depending on , for the existence of the imploding shock solution, in Section <ref> we are then able to prove that the solution must connect to the origin within the second quadrant. 
A simple proof in Lemma <ref> shows that the trajectories can never hit the V-axis, and so it suffices to find upper barriers connecting to the origin. Indeed, for ∈(1,2], we show that the solutions from P_6 for all z∈[z_g,z_M] admit B_1(V)=√(-V) as an upper barrier, and the solutions from P_8 for any ∈(_⋆,_1] and z∈(z_1,z_M] admit the same upper barrier. Finally, for the remaining range, ∈(_1,3] and z∈(z_2,z_M], the barrier B_3/2(V)=√(-3/2 V) is an upper barrier for the solution to the right.Finally, in Section <ref>, we put together the earlier results in order to establish the proof of the main theorem.§ LOCAL SMOOTH SOLUTIONS AROUND SONIC POINTS In this section, we show the existence of local analytic solutions around the triple point P_*=P_6orP_8: C(V) = C_*+ ∑_ℓ =1^∞c_ℓ(V-V_*)^ℓ,where the Taylor coefficients c_ℓ=c_ℓ(,z) and with a choice of branch having a negative slope c_1<0. The first step is to show that it is always possible to choose a branch with c_1<0 for the admissible range z∈ (0, z_M] at P_8 and z∈ [z_m,z_M] at P_6 (see Section <ref>). The second step is to derive a recursive formula to define c_ℓ for ℓ≥ 2 and prove the convergence of the Taylor series with positive radius of convergence (seeSection <ref>).§.§ Choice of branch at P_6 and P_8Throughout this section, for ease of notation, we will denote by P_* either P_6 or P_8. From (<ref>), we have dC/dV = 0/0 at P_*. Therefore, for smooth solutions, by using L'Hôpital's rule, we see that the slope c_1 at P_* must solve the quadratic equation-G_C(V_*,C_*)c_1^2+(F_C(V_*,C_*)-G_V(V_*,C_*))c_1+F_V(V_*,C_*) = 0,whereG_C(V_*,C_*)= ∂ G/∂ C|_(V_*,C_*)=2C_*[(m+1)V_*+2mz], G_V(V_*,C_*)= ∂ G/∂ V|_(V_*,C_*)=(m+1)C_*^2-3V_*^2-2(λ+1)V_*-λ, F_C(V_*,C_*)= ∂ F/∂ C|_(V_*,C_*)=3C_*^2[1+mz/(1+V_*)]- a_1(1+V_*)^2+a_2(1+V_*)-a_3, F_V(V_*,C_*)= ∂ F/∂ V|_(V_*,C_*)[t] =C_*{-mz- 2a_1(1+V_*)+a_2}=-mzC_*- 2a_1(1+V_*)^2+a_2(1+V_*)=C_*^2-3a_1(1+V_*)^2+2a_2(1+V_*)-a_3.Solving the quadratic equation (<ref>), we getc_1 = F_C(V_*,C_*)-G_V(V_*,C_*)± R(V_*,C_*)/2G_C(V_*,C_*),whereR(V_*,C_*) = √((F_C(V_*,C_*)-G_V(V_*,C_*))^2+4F_V(V_*,C_*)G_C(V_*,C_*)).Since the first trajectory should be monotone decreasing from P_1 to P_*, we demand the slope c_1 at P_* to be negative. In particular, solutions for(<ref>) must be real, which requires the expression under the square root of (<ref>) to be non-negative. In order to establish the necessary conditions for R to be real and to understand the possible solutions of (<ref>),we analyze the properties of the four partial derivatives in (<ref>). Using G(V_*, C_*)=0, F(V_*, C_*)=0 and C_*= 1+V_*, we see that G_C(V_*,C_*)= G_C(V_*,C_*)C_*/1+V_* = 2V_*(λ +V_*) < 0 since -1<V_*<0,>1, G_V(V_*,C_*)= (m+1)C_*^2-(λ+V_*)-2V_*(λ+V_*)-V_*(1+V_*), F_C(V_*,C_*)= 2C_*^2[1+(λ -1)/γ(1+V_*)] = 2C_*[C_*+(λ -1)/γ] >0, F_V(V_*,C_*)= -(λ -1)/γC_*-a_1C_*^2+a_3- [1+(λ -1)/γ C_*]C_*^2.Summing (<ref>) with (<ref>) and summing (<ref>) with (<ref>) and applying the definitions of a_1 and a_3 from (<ref>), we find the simpler identitiesF_C(V_*,C_*) +F_V(V_*,C_*)= -m(γ-1)/2C_*^2+(γ-1)(λ-1)/2,G_C(V_*,C_*)+G_V(V_*,C_*)= mC_*^2-(λ-1).In turn, these identities imply, recalling (<ref>) and (<ref>),F_C(V_*,C_*) +F_V(V_*,C_*)= -γ-1/2(G_C(V_*,C_*)+G_V(V_*,C_*)),G_C(V_6,C_6)+G_V(V_6,C_6) =-mwC_6<0, G_C(V_8,C_8)+G_V(V_8,C_8) =mwC_8>0. As a direct consequence, we first obtain the following. At P_8, for any γ∈ (1,3] and λ∈(1,λ_M], R(V_8,C_8) is real and strictly positive, and the two solutions of (<ref>) must have different signs. 
We first obtain the sign of F_V(V_8,C_8) from (<ref>), (<ref>) and (<ref>): F_V(V_8,C_8) = -γ-1/2(G_C(V_8,C_8)+G_V(V_8,C_8))-F_C(V_8,C_8) <0. Thus, as we also have G_C(V_8,C_8) < 0 from (<ref>), it is clear that R is real and positive. Applying (<ref>), the product of the two solutions of (<ref>) is given by -2F_V(V_8,C_8)/G_C(V_8,C_8). As we have just shown that F_V(V_8,C_8) and G_C(V_8,C_8) are both negative, we conclude the proof. The situation at P_6 is different. For λ sufficiently close to 1, R(V_6,C_6)∉, and so we require an appropriate range of(equivalently of z) whichguarantees the above properties at P_6. As the first trajectory connecting P_1 and P_6 is supposed to be monotone decreasing, it is sufficient to consider only those V_6≥ V_1. We therefore denote by _m (equivalently z_m) the value such that V_6(_m)=V_1. By a straightforward calculation, we haveλ_m= mγ(γ-1)(2γ-1)(γ+1)+1, z_m= (γ-1)(2γ-1)(γ+1).It is straightforward to check that _m<_M for any γ∈(1,3]. By Lemma <ref>, wehave V_6()≥ V_1 for any ∈[_m,_M].Moreover, by (<ref>),we have C_6(_m)=1+V_6(_m)=1+V_1<C_1. We now show that within the new sonic window the quadratic equation (<ref>) at P_6 has two real solutions with different signs. At P_6, for any γ∈ (1,3] and λ∈ [λ_m ,λ_M] where λ_m is given by (<ref>) and λ_M is given by (<ref>), the two solutions of (<ref>) are both real and have different signs. If F_V(V_6,C_6)<0, then by the same argument as in Lemma <ref>, R must be real and one of the solutions must be negative. Suppose F_V(V_6,C_6)≥ 0. By (<ref>), as C_6>0, we then have -mz-2a_1(1+V_6)+a_2≥ 0. As a_1=1+m(γ-1)/2>0, this is is equivalent to V_6≤a_2-mz/2a_1-1. We will now show that, in fact, for all ∈[_m,_M], the reverse inequality holds. Given γ∈(1,3], we always have a_2-mz/2a_1-1 - V_6(λ_m)= a_2-mz/2a_1-1+2/γ+1=1/2a_1{m(γ-1)+(γ-3)(λ-1)/2-λ-1/γ-m(γ-1)^2+2(γ-1)/γ+1}= 1/2a_1γ(γ-1)(-mγ+3m-4)+(γ+1)(γ^2-3γ-2)(λ-1)/2γ(γ+1)= 1/2a_1(γ^2-3γ-2)(λ-1)-γ(γ-1)/2γ, when m=11/2a_1(γ+1)(γ^2-3γ-2)(λ-1)-2γ(γ-1)^2/2γ(γ+1), when m=2. In each case, we see that a_2-mz/2a_1-1 - V_6(λ_m)<0, and so which means V_6(λ_m)>a_2-mz/2a_1-1. By Lemma <ref>, V_6 is strictly increasing in λ∈ (1 ,λ_M). We conclude that for γ∈(1,3] and any λ∈[λ_m,λ_M], we always have F_V<0. This means that two slopes at P_6 have different signs and R(V_6,C_6)∈.In conclusion, for each γ∈ (1,3] and for the appropriate range of z at P_*,there exists exactly one negative slope c_1=F_C(V_*,C_*)-G_V(V_*,C_*)+ R(V_*,C_*)/2G_C(V_*,C_*)<0which will be our choice of branch. Here we have used G_C < 0 by (<ref>).§.§ Analyticity at P_6 and P_8 As shown in the previous section, to have the first trajectory with negative slope,the ranges of λ at P_6 and P_8 are taken differently. For notational convenience, we defineΛ = [λ_m, λ_M]ifP_* = P_6, (1, λ_M]ifP_* = P_8. We write the formal Taylor series around the point P_* asC(V) = ∑_ℓ =0^∞c_ℓ(V-V_*)^ℓ,where c_0=C_*. In a neighborhood of (V_*,C_*), we formally have dC/dV = ∑_ℓ =1^∞ℓ c_ℓ(V-V_*)^ℓ-1.Now, to simplify notation, we set{ v = V-V_* (c^2)_ℓ = ∑_i+j = ℓi,j≥ 0c_ic_j, (c^3)_ℓ = ∑_i+j+k = ℓi,j,k≥ 0c_ic_jc_k . .With this notation, the following quantities have a simple expression: C^2= (∑_ℓ =0^∞c_ℓv^ℓ)^2 = ∑_ℓ =0^∞(c^2)_ℓv^ℓ,C^3= (∑_1^∞c_ℓv^ℓ)^3 = ∑_ℓ =0^∞(c^3)_ℓv^ℓ,C'C^2= 1/3(C^3)' =1/3∑_ℓ =1^∞ℓ(c^3)_ℓv^ℓ-1.Suppose C(V) defined by (<ref>) is an analytic solution of (<ref>). 
Then the following identity holds:∑_ℓ≥ 2(A_ℓ c_ℓ-B_ℓ)v^ℓ + [-G_C(V_*,C_*)c_1^2+[F_C(V_*,C_*)-G_V(V_*,C_*)]c_1+F_V(V_*,C_*)]c_0v = 0,where for each ℓ≥ 2,A_ℓ = C_*[F_C(V_*,C_*)-G_C(V_*,C_*)c_1-ℓ[G_V(V_*,C_*)+G_C(V_*,C_*)c_1]],B_ℓ =(1+V_*)[(m+1)V_*+2mz]/3(ℓ +1)∑_i+j+k = ℓ+1i,j,k≤ℓ-1c_ic_jc_k-[(1+V_*+mz)-(m+1)(1+2V_*)+2mz/3ℓ]∑_i+j+k = ℓi,j,k≤ℓ-1c_ic_jc_k -[1-m+1/3(ℓ -1)]∑_i+j+k = ℓ-1c_ic_jc_k-[[6V_*^2+(3λ + 6)V_*+2λ +1](ℓ -1)-3a_1(1+V_*)^2+2a_2(1+V_*)-a_3]c_ℓ-1-[(λ +2+4V_*)(ℓ -2)-3a_1(1+V_*)+a_2]c_ℓ-2-[ℓ-3-a_1]c_ℓ-3,where we use the convention c_n=0 for n<0.The identity follows by substituting (<ref>), (<ref>), (<ref>), and (<ref>) into (1+V)F(V,C)=(1+V)dC/dVG(V,C)and grouping the coefficients of v^ℓ. For the details, we refer toAppendix <ref>. Since we are seeking an analytic solution around the sonic point P_*, we demand that (<ref>) holds for all |v|<ϵ where ϵ>0 is sufficiently small. We therefore require that the coefficient of v^ℓ should be zero at every order ℓ∈ℕ. In Section <ref>, we have already shown the existence of c_1<0satisfying-G_C(V_*,C_*)c_1^2+[F_C(V_*,C_*)-G_V(V_*,C_*)]c_1+F_V(V_*,C_*) = 0.For ℓ≥ 2, we directly obtain the recursive relation for c_ℓ,A_ℓc_ℓ = B_ℓ,where we note from (<ref>) that B_ℓ involves only coefficients c_i for 0≤ i≤ℓ -1. To ensure the solvability of c_ℓfor all ℓ≥ 2, it is obvious that we require A_ℓ≠ 0, and so we need the following non-vanishing condition:NVC F_C(V_*,C_*)-G_C(V_*,C_*)c_1-ℓ[G_V(V_*,C_*)+G_C(V_*,C_*)c_1]≠ 0for any ℓ≥ 2.In the following lemma, we show that (<ref>) holds for any λ∈Λ. Let γ∈(1,3], P_*∈{P_6,P_8}. Then, for any λ∈Λ and ℓ≥ 2, (<ref>) is satisfied. Recalling (<ref>) and C_*=1+V_*, we first see G_V(V_*,C_*)= (m+1)C_*^2-(λ+V_*)-2V_*(λ+V_*)-V_*(1+V_*)=(m-2)V_*^2+2(m-λ)V_*+(m+1-λ). When m=1, this gives that G_V(V_*,C_*) = -V_*^2+2(1-λ)V_*+2-λ is quadratic with respect to V_*. Furthermore, G_V(-1,0)=λ-1>0 and G_V(0,1) = 2-λ≥ 2-λ_M = 1-γ/√(γ) +√(2)>0 for any γ∈(1,3]. Thus, as V_*∈(-1,0) for all ∈Λ, we obtain G_V(V_*,C_*)>0. When m=2, we have G_V(V_*,C_*) = 2(2-λ)V_*+3-λ=2(2-λ)(1+V_*)+λ-1>0. Therefore, for any γ∈(1,3] and λ∈Λ, G_V(V_*,C_*)>0. Denote f(ℓ)=:F_C(V_*,C_*)-G_C(V_*,C_*)c_1-ℓ[G_V(V_*,C_*)+G_C(V_*,C_*)c_1]. Notice that f(ℓ) is a linear equation with respect to ℓ. When ℓ=1, using (<ref>), we see f(1) = F_C(V_*,C_*)-G_V(V_*,C_*)-2G_C(V_*,C_*)c_1 = -R(V_*,C_*)<0. Thus, f(ℓ)<0 for any ℓ≥ 2 since we have just shown G_V(V_*,C_*)>0 and G_C(V_*,C_*)c_1>0 by (<ref>) and (<ref>). In conclusion,(<ref>) is satisfied for any λ∈Λ at P_*.Since A_ℓ≠ 0 by Lemma <ref>, we can rewritec_ℓ = B_ℓA_ℓ.In the following, we estimate the growth of B_L under the inductive growth assumption on c_ℓ for 2≤ℓ≤ L-1. For any fixed γ∈(1,3] and λ∈Λ, let α∈ (1,2) be given. Then, there exists a constant K_*=K_*(γ)>1 such that if K≥ K_* and L≥ 5, then if also the following inductive assumption holds,|c_ℓ| ≤K^ℓ-αℓ^3, 2≤ℓ≤ L-1,then we have|B_L| ≤βK^L-αL^2( 1K^α-1+1/K)for some constant β=β(γ,λ). For the proof, we will require the following result from <cit.> to estimate certain combinations of coefficients. There exists a universal constant a>0 such that for all L∈ℕ, the following inequalities hold∑_i+j+k = Li,j,k≥ 11i^3j^3k^3 ≤aL^3, ∑_i+j = Li,j≥ 11i^3j^3 ≤aL^3. 
First, by using the induction assumption (<ref>) and Lemma B.1, we have|∑_i+j+k = Li,j,k≤ L-1c_ic_jc_k|= |6c_0c_1c_L-1+3c_0∑_j=2^L-2c_jc_L-j+ 3c_1^2c_L-2 +3c_1∑_j=2^L-3c_jc_L-1-j+∑_i+j+k = Li,j,k≥ 2c_ic_jc_k|≤ 6|c_0c_1|K^L-1-α(L-1)^3+3|c_0|K^L-2α∑_j=2^L-21/j^3(L-j)^3 + 3c_1^2K^L-2-α(L-2)^3 +3|c_1|K^L-1-2α∑_j=2^L-31/j^3(L-1-j)^3+K^L-3α∑_i+j+k = Li,j,k≥ 21/i^3j^3k^3≲K^L-1-α(L-1)^3+K^L-2αL^3+K^L-2-α(L-2)^3+K^L-1-2α(L-1)^3 + K^L-3αL^3≲K^L-1-αL^3,where we have used that c_0 and c_1 are bounded by a constant depending on γ andas well as the assumptions ∈(1,2) and K≥ 1 and, moreover, L-2> 2, so that the inductive assumption applies still to c_L-2. Note that as L≥ 5, there exists a universal constant C>0 such that L/L-1,L/L-2,L/L-3≤ C.Next, a similar argument yields|∑_i+j+k = L-1c_ic_jc_k|= |3c_0^2c_L-1+6c_0c_1c_L-2+3c_0∑_j=2^L-3c_jc_L-1-j+ 3c_1^2c_L-3+3c_1∑_j=2^L-4c_jc_L-2-j+∑_i+j+k = L-1i,j,k≥ 2c_ic_jc_k|≲K^L-1-α(L-1)^3+K^L-2-α(L-2)^3+K^L-1-2α(L-1)^3+K^L-3-α(L-3)^3+K^L-2-2α(L-2)^3 + K^L-1-3α(L-1)^3≲K^L-1-αL^3,where we again note that as L≥ 5, L-3≥ 2, so that the inductive assumption applies to c_L-3. Again using similar arguments, we bound|∑_i+j+k = L+1i,j,k≤ L-1c_ic_jc_k|=|3c_0∑_j+k = L+1j,k≤ L-1c_jc_k+3c_1^2c_L-1+3c_1∑_j+k = Lj,k≤ L-2c_jc_k+∑_i+j+k = L+1i,j,k≥ 2c_ic_jc_k|≲K^L+1-2α(L+1)^3+ K^L-1-α(L-1)^3+K^L-2αL^3+ K^L+1-3α(L+1)^3≲K^L+1-2αL^3.Now weestimate B_L, recalling the definition in (<ref>), by employing these three combinatorial estimates as|B_L|≲ (L+1)(|∑_i+j+k = L+1i,j,k≤ L-1c_ic_jc_k| + |∑_i+j+k = Li,j,k≤ L-1c_ic_jc_k| + |∑_i+j+k = L-1c_ic_jc_k| +|c_ℓ-1|+|c_ℓ-2|+|c_ℓ-3|) ≲ (L+1)(K^L+1-2αL^3 + K^L-1-αL^3 + K^L-1-αL^3 + K^L-1-αL^3 + K^L-2-αL^3+ K^L-3-αL^3)≲K^L-αL^2(1K^α-1+1/K),where have used that there exists a universal constant C>0 such that L+1/L≤ C for all L≥ 5. We next justify the inductive growth assumption on c_ℓ.For any fixed γ∈(1,3] and λ∈Λ, let α∈(1,2) be given. Let c_ℓ be the coefficients in the formal Taylor expansion of C(V) around (V_*,C_*) solving the recursive relation of Lemma <ref>. Then there exists a constant K=K(,)>1 such that c_ℓ satisfies the bound|c_ℓ| ≤K^ℓ-αℓ^3.We argue by induction on ℓ. When ℓ =2,3,4, it is clear from (<ref>), the forms of A_ℓ and B_ℓ defined by (<ref>)–(<ref>), and the non-vanishing condition (<ref>) that there exists a constant K(γ,λ) such that c_2, c_3, and c_4 satisfy the bounds.Suppose for some L≥ 5, (<ref>) holds for all 2≤ℓ≤ L-1. Then we may apply Lemma <ref>and with the recursive relation (<ref>), we obtain |c_L| ≤β|A_L|K^L-αL^2(1K^α-1+1/K),where A_L = C_*[F_C(V_*,C_*)-G_C(V_*,C_*)c_1-L[G_V(V_*,C_*)+G_C(V_*,C_*)c_1]]. As A_L is linear in L and non-zero for all L, there exists constants η_1 = η_1(γ,λ) and η_2= η_2(γ,λ) such that η_1 L≤ |A_L| ≤η_2 L for all L≥ 5. Therefore,|c_L| ≤β1/η_1 LK^ℓ-αL^2(1K^α-1+1/K).Choosing Ksufficiently large, as >1, it is clear that the estimate (<ref>) holds for ℓ = L, thus concluding the proof. We are now ready to prove the main result of this section.For any fixed γ∈(1,3] and λ∈Λ, there exists ϵ=ϵ(,)>0 such that the Taylor seriesC(V) = ∑_ℓ =0^∞ c_ℓ(V-V_*)^ℓ converges absolutely on the interval (V_*-ϵ,V_*+ϵ). Moreover, C(V) is the unique analytic solution to (<ref>). Let α∈ (1,2) be fixed and suppose |V-V_*|<ϵ, where ϵ>0 is to be chosen later. By (<ref>) in Lemma <ref>, there exists a constant K(α,γ,λ) such that |∑_ℓ =2^∞ c_ℓ(V-V_*)^ℓ| ≤∑_ℓ =2^∞K^ℓ-αℓ^3ϵ^ℓ≤∑_ℓ=2^∞(Kϵ)^ℓ< ∞, provided ϵ <1/K. Thus, ∑_ℓ=0^∞ c_ℓ(V-V_*)^ℓ converges absolutely for V ∈ (V_*-ϵ,V_*+ϵ) with 0<ϵ<1/K. 
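In numerical terms, the local construction translates into a simple recipe, sketched below for m = 1; this is an illustration of the scheme rather than of the proof, with our own choices of the trial value z, the offset ε, and a finite-difference approximation of the partial derivatives in place of the closed-form expressions displayed earlier. One computes the negative root c_1 of the quadratic equation for the slope, steps a small distance ε to the left of P_6 along that slope, and integrates dC/dV = F/G back to V_1. Comparing the resulting value C(V_1) with C_1 is precisely the shooting map that drives the continuity argument of the next section.

```python
import numpy as np
from math import sqrt
from scipy.integrate import solve_ivp

def fields(gamma, z, m=1):
    lam = m * gamma * z + 1.0
    a1 = 1.0 + m * (gamma - 1.0) / 2.0
    a2 = m * (gamma - 1.0) + m * z * gamma * (gamma - 3.0) / 2.0
    a3 = m * z * gamma * (gamma - 1.0) / 2.0
    F = lambda V, C: C * (C**2 * (1 + m * z / (1 + V))
                          - a1 * (1 + V)**2 + a2 * (1 + V) - a3)
    G = lambda V, C: C**2 * ((m + 1) * V + 2 * m * z) - V * (1 + V) * (lam + V)
    return F, G

def negative_slope(F, G, Vs, Cs, h=1e-6):
    """Negative root c_1 of -G_C c^2 + (F_C - G_V) c + F_V = 0 at the sonic point,
    with the partial derivatives approximated by central differences."""
    FV = (F(Vs + h, Cs) - F(Vs - h, Cs)) / (2 * h)
    FC = (F(Vs, Cs + h) - F(Vs, Cs - h)) / (2 * h)
    GV = (G(Vs + h, Cs) - G(Vs - h, Cs)) / (2 * h)
    GC = (G(Vs, Cs + h) - G(Vs, Cs - h)) / (2 * h)
    return min(r.real for r in np.roots([-GC, FC - GV, FV]) if abs(r.imag) < 1e-10)

gamma, m = 1.4, 1
z = 0.12                                    # trial blow-up speed in [z_m, z_M] (ours)
w = sqrt(1 - 2 * (gamma + 2) * z + (gamma - 2)**2 * z**2)
V6 = (-1 + (gamma - 2) * z - w) / 2         # sonic point P_6, with C_6 = 1 + V_6
C6 = 1 + V6
F, G = fields(gamma, z, m)
c1 = negative_slope(F, G, V6, C6)

eps = 1e-4                                  # small step off the sonic point
V0, C0 = V6 - eps, C6 - eps * c1
V1 = -2 / (gamma + 1)
sol = solve_ivp(lambda V, y: [F(V, y[0]) / G(V, y[0])], (V0, V1), [C0],
                rtol=1e-10, atol=1e-12)
C1 = sqrt(2 * gamma * (gamma - 1)) / (gamma + 1)
print("C(V_1) =", sol.y[0, -1], "   target C_1 =", C1)
```

For a generic trial z the printed values do not agree; adjusting z until they do is the numerical analogue of the continuity argument developed below.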
Notice that the local analytic solution C(V) obtained in Theorem <ref> depends on γ and z. Since all the coefficients c_ℓ for any ℓ≥ 1 are continuous functions of (γ,) for γ∈ (1,3] and λ∈Λ by (<ref>) and (<ref>), by standard compactness and uniform convergence, we deduce that C=C(V;γ,) is a continuous function of (γ,) (equivalently continuous in (,z)) on its domain. The local analytic solution (<ref>), propagated by (<ref>) to the left of the sonic point P_*, except at the sonic point itself,remains strictly awayfrom the zeros of F, G, and D. In addition, the solution to the left of the sonic point P_* satisfies F>0, G<0, D<0 and dC/dV <0. The result is owing to ourchoice of the negative branch c_1 (see(<ref>)). The sign conditions then follow from F_C(V_*,C_*)>0 and G_C(V_*,C_*)<0 from(<ref>) and (<ref>). § SOLVING TO LEFT: BASIC SETUP AND CONSTRAINTS ON CONNECTIONS In this section, we introduce the basic setup for the continuity argument for the first trajectoryto the left of the sonic point and also show that the solutions of (<ref>) starting from the initial point P_1 can't connect to P_8 for γ∈ (1,] and can't connect to P_6 for γ∈ [2,3]. We will use z only (instead of ) in the following sections. The corresponding range of z for P_* is given by𝒵(γ;P_*)= [z_m(γ),z_M(γ)]whenP_*=P_6,(0,z_M(γ)] whenP_*=P_8.§.§ Connection from the sonic point to the initial point P_1By Theorem <ref>, the following problem dCdV = F(V,C;γ,z)G(V,C;γ,z), V∈[V_1,V_*],C(V_*) = C_*,dC/dV(V_*)= c_1,where c_1 is as defined in (<ref>), has a local analytic solution. To prove the existence of a trajectory connecting P_1 and P_*, we will first show that the local analytic solution obtained from Theorem <ref> extends smoothly as a strictly monotone decreasing solution to the left to C:[V_1,V_*]→_+. Secondly, we will show that for any γ∈(1,3], there exists a z_std∈𝒵(γ;P_*) such that the local analytic solution from either P_6 or P_8 can be extended smoothly to P_1 by using a continuity argument. Let P_*=(V_*,C_*) be either P_6 or P_8 and suppose C:[V_*-ϵ,V_*]→_+ is the local, analytic solution to (<ref>) guaranteed by Theorem <ref>. Then the solution extends smoothly to the left to C:[V_1,V_*]→_+.We argue by contradiction. Suppose that the maximal time of existence of the solution is (V_0,V_*] for some V_0≥ V_1. By Remark <ref>, C(V)>C(V_*) on (V_0,V_*). Moreover, as the right hand side of the ODE (<ref>) is locally Lipschitz away from zeros of G, we see that the only obstruction to continuation past V_0 is blow-up of C, i.e., lim sup_V→ V_0^+C(V)=∞.Now, from the explicit forms of F and G, we observe that there exists M=M(,z)>0 such that if C≥ M, for all V∈[V_1,V_*-ϵ], we have G(V,C;γ,z) = C^2[(m+1)V+2mz]-V(1+V)(λ+V)≤1/2 C^2[(m+1)V+2mz]<0,F(V,C;,z) = C{C^2[1+mz/(1+V)]- a_1(1+V)^2+a_2(1+V)-a_3}≤3/2 C^3[1+mz/(1+V)]and F>0. Thus, dC/dV=F(V,C;γ,z)/G(V,C;γ,z)≥ 3 C[1+mz/(1+V)]/[(m+1)V+2mz].Thus, as V is contained in a bounded set, we see that there exists a constant A>0 such that whenever C(V)≥ M, we haved log C/dV≥ -A,and hence C is necessarily bounded on (V_0,V_*), contradicting the assumption.By Theorem <ref> and Lemma <ref>, (<ref>) has a smooth solution on [V_1, V_*].We use C(V;γ,z,P_*) to denote the solution of (<ref>) at V∈[V_1,V_*]. 
By the fundamental theorem of calculus,C(V;γ,z,P_*) = ∫_V_*(γ,z)^VdC(V;γ,z)/dVdV +C_*(γ,z) = ∫_V_*(γ,z)^VF(V,C;γ,z)G(V,C;γ,z)dV +C_*(γ,z).It is clear from the expressions for (V_6,C_6) and (V_8,C_8) in (<ref>), (<ref>) as well as the continuous dependence of the local, analytic solution on , z (cf.Remark <ref>), and the continuity properties of F and G that this is a continuous function with respect to γ and z. Moreover, the initial value V_1 only depends on γ. Hence, for any fixed γ and a sonic point P_*∈{P_6,P_8}, if we can show that there exists a z∈𝒵(γ;P_*) such that C(V_1;γ,z,P_*)≤ C_1 and a z∈𝒵(γ;P_*) such that C(V_1;γ,z,P_*)≥ C_1, then we conclude that there exists a z_std such that z≤ z_std≤z and C(V_1;γ,z_std,P_*)= C_1. This motivates the introduction of upper and lower solutions: (Upper and lower solution). Let γ∈ (1,3] and z∈𝒵(γ;P_*). Let C(·;γ,z,P_*):[V_1,V_*]→_+ be the analytic solution obtained from Theorem <ref> and Lemma <ref>. We say that z(γ;P_*) gives an upper solution for P_* if C(V_1;γ,z(γ;P_*),P_*) > C_1. We say that z(γ;P_*) gives a lower solution for P_* if C(V_1;γ,z(γ;P_*),P_*) < C_1. The proof of the existence of an analytic solution connecting P_1 and either P_6 or P_8 proceeds as follows.We will first show that P_6 always admits a lower solution and P_8 always admits an upper solution. It will then follow that, depending on whether C(V;,z_M,P_6)=C(V;,z_M,P_8) gives a lower or an upper solution (or connects to P_1), at least one of P_6 and P_8 has both an upper and a lower solution, thus concluding the proof. First we show the existence of a lower solution for P_6. Let γ∈(1,3]. Then there exists z(γ;P_6)∈𝒵(γ;P_6) such thatC(V;γ,z(γ),P_6) is a lower solution for P_6. Thisfollows simply fromRemark <ref> and the monotonicity of C(V;γ,z,P_6). When z=z_m(γ), we have V_6(z_m(γ)) = V_1(γ) and C(V_1;γ,z_m(γ),P_6)=C_6(z_m(γ)) < C_1(γ) by (<ref>). Thus, C(V;γ,z_m(γ),P_6) gives a lower solution for P_6. Next we showthe existence of an upper solution for P_8. Let γ∈(1,3]. Then there exists z(γ;P_8)∈𝒵(γ;P_8) such thatC(V;γ,z(γ;P_8),P_8) is an upper solution for P_8. By Lemma <ref>, C_1(γ)≤ C_1(3)=√(3)/2<1 for each γ∈(1,3]. By Lemma <ref>, C_8(,z) is monotone decreasing with respect to z for 0<z≤ z_M. Since C_8(γ,0) = 1, there exists a sufficiently small z(γ;P_8)∈𝒵(γ;P_8) such that C_8(z(γ;P_8))>C_1. By the monotonicity of C(V;γ,z(γ;P_8),P_8) with respect to V, we conclude that C(V;γ,z(γ;P_8),P_8)> C_1(). Thus, C(V;γ,z(γ;P_8),P_8) is an upper solution for P_8. Wenow prove the main result of this section. Let γ∈(1,3]. Then there exists a P_*∈{P_6,P_8} and a corresponding z_std(γ;P_*)∈𝒵(γ;P_*) such that the local analytic solution obtainedfromTheorem <ref>extends smoothly from P_* to P_1. By Lemma <ref>, the domain of the local analytic solution extends smoothly (analytically) to V=V_1. It remains to show that for each γ∈ (1,3] there exist P_* and z_std(γ;P_*)∈𝒵(γ;P_*) such that C(V_1;γ,z_std(γ),P_*) = C_1. Recall that when z=z_M, P_6 coincides with P_8. Therefore, for P_*= P_6=P_8 with z=z_M, there are three possibilities: * If C(V_1;γ,z_M(γ),P_*) < C_1(γ), then z=z_M(γ) gives a lower solution for P_8. Then, by using the continuity argument and Lemma <ref>, there exists a z_std∈(z(γ;P_8),z_M(γ)) such that C(V_1;γ,z_std,P_8)=C_1. * If C(V_1;γ,z_M(γ),P_*) = C_1(γ), then z=z_M(γ) gives the solution. * If C(V_1;γ,z_M(γ),P_*) > C_1(γ), then z=z_M(γ) gives an upper solution for P_6. 
Then, by using the continuity argument and Lemma <ref>, there exists a z_std∈(z(γ;P_6),z_M(γ)) such that C(V_1;γ,z_std,P_6)=C_1. This concludes the proof. §.§ No connection to P_8 for γ∈ (1,]Now that we have established the existence of an analytic solution to (<ref>) connecting P_1 to either P_6 or P_8 for each γ∈(1,3], we seek to understand better the nature of the solutions in order to connect the solution through the triple point to the origin. The first step in showing this is to prove that, for ∈(1,_⋆], for _⋆ defined below in (<ref>), the connection must be to P_6. For notational convenience, we define a constantk(γ,z) = -(1+V_6(,z))^2/V_6(,z),and, for each ∈(1,2], z∈[z_m,z_M], we define a barrier function B_k(V)=√(-k(,z)V). For any γ∈ (1,2] and z∈[z_m,z_M], the curve C=B_k(V) is a lower barrier (in the sense of Definition <ref>) for the solution of dCdV = F(V,C;γ,z)G(V,C;γ,z), V∈[V_1,V],C(V)= C, where V∈(V_1,V_6) and C>B_k(V). In particular, for any z∈[z_g,z_M] the curveC=B_k(V) is a lower barrier for the solution of the problem (<ref>) with P_*=P_6. Recall that z_g∈𝒵(;P_6) is the value of z such that P_4=P_6, see (<ref>). We begin by showing that the second claim follows from the first one. Observe that, assuming the first part of the lemma is proved, it is sufficient to verify that there exists some interval [V,V_6) such that the solution C(V) to the problem (<ref>) satisfies C(V)>B_k(V) for V∈(V,V_6). This claim follows one we verify that the derivative at V_6 satisfies the inequality d C/d V(V_6)=c_1 < -1/2√(k(γ,z)-V_6) = 1+V_6/2V_6. The proof of this inequality for ∈(1,2] and z∈[z_g,z_M] is given in Appendix <ref>. We therefore focus on proving the first claim. Suppose that C(V) is a solution to the problem (<ref>). We will apply the barrier argument (cf. <ref>) to show that as the initial point C(V)>B_k(V), this inequality is propagated by the ODE. As the solution to the ODE (<ref>) remains monotone by Lemma <ref>, it is clear that it cannot meet a sonic point. Our goal is to show that for any γ∈ (1,2], z∈[z_g,z_M] and V∈[V_1,V_6), F(V,√(-k(γ,z)V);γ,z)G(V,√(-k(γ,z)V);γ,z)+1/2√(k(γ,z)-V)<0. Since G(V,√(-k(γ,z)V);γ,z)<0 for any γ∈ (1,2], z∈[z_m,z_M] and V∈[V_1,V_6) by Lemma <ref>, it is sufficient to show that F(V,√(-k(γ,z)V);γ,z)+1/2√(k(γ,z)-V)G(V,√(-k(γ,z)V);γ,z)>0. By direct computations, we obtain 2√(-k(γ,z)V)( F(V,√(-k(γ,z)V);γ,z)+1/2√(k(γ,z)-V)G(V,√(-k(γ,z)V);γ,z))=(m-1-mγ)V^2+[-2-m(γ-1)+m γ z(γ-2)+(m-1)k(γ,z)]V+2mk(γ,z)z1+V-1-mγ z =: _k(V,z,m). Since √(-k(γ,z)V)2>0, (<ref>) is equivalent to the positivity of _k(V,z,m): _k(V,z,m) >0 for any γ∈ (1,2], z∈[z_m,z_M] and V∈[V_1,V_6). As_k(V_6,z,m)=0 for any γ∈ (1,2] and z∈[z_m,z_M] (due to B_k(V_6)=C_6 and the vanishing of F and G at (V_6,C_6)), we will conclude that _k(V,z,m)>0 by demonstrating that for V∈[V_1,V_6), ∂_k(V,z,m)/∂ V<0. The V derivative of _k is given by ∂_k(V,z,m)/∂ V = 2(m-1-mγ)V-2-m(γ-1)+mγ z(γ-2)+(m-1)k(γ,z)-2mk(γ,z)z(1+V)^2. When m=1, for any V∈[V_1,V_6), ∂_k(V,z,1)/∂ V = -2γ V-1-γ+(γ-2)γ z+2(1+V_6)^2zV_6(1+V)^2<-2γ V_1-1-γ+(γ-2)γ z+2zV_6 =: I, where we have used -2γ V<-2γ V_1 and 2(1+V_6)^2zV_6(1+V)^2<2zV_6 for any V∈(V_1,V_6). Recalling (<ref>) and (<ref>), we deduce I = -(γ-1)^2/γ+1+[(γ-2)γ+2/V_6]z <-(γ-1)^2/γ+1+[(γ-2)γ+2/V_1]z =-(γ-1)^2/γ+1+[γ(γ-3)-1]z<0 for any γ∈(1,2] and z∈[z_m,z_M], which in turn leads to (<ref>) for m=1. 
When m=2, for any V∈[V_1,V_6), ∂_k(V,z,2)/∂ V = 2(1-2γ)V-2γ+2(γ-2)γ z-(1+V_6)^2/V_6+4(1+V_6)^2z/V_6(1+V)^2<2(1-2γ)V_1-2γ+2(γ-2)γ z-(1+V_6)^2/V_6+4z/V_6 =: II + III since (1-2γ) V<(1-2γ) V_1 and 2(1+V_6)^2zV_6(1+V)^2<2zV_6 for any V∈(V_1,V_6), where II and III denote II :=2(1-2γ)V_1-2γ+2(γ-2)γ zand III:=-(1+V_6)^2/V_6+4z/V_6. By (<ref>), II = -2(γ-1)(γ-2)/γ+1+2(γ-2)γ z. Using (<ref>) and (<ref>), we rewrite III as III =-1-(γ-2)^2z^2-w^2-2(γ-2)z+2w+2(γ-2)zw+16z/4V_6=-(1+(γ-2)^2z^2-2(γ+2)z)-w^2+2w+2(γ-2)zw+(12-4γ)z/4V_6=2(1-w)w+4(3-γ)z+2[(γ-2)w+2]z/4V_6.By Remark <ref> and the fact that -1<V_6<0for any γ∈(1,2] and z∈[z_m,z_M], we have III <-2(1-w)w+4(3-γ)z+2[(γ-2)w+2]z/4<-4(3-γ)+2[(γ-2)w+2]/4z <(γ-7/2)z. Then, II+III is bounded by II+III < -2(γ-1)(γ-2)/γ+1+2(γ-2)γ z+(γ-7/2)z< -2(γ-1)(γ-2)/γ+1+4(γ-2)γ z where we have used γ-7/2<2(γ-2)γ for any γ∈(1,2]. Since z_m = γ-1/(2γ-1)(γ+1) by (<ref>), II+III < -2(γ-1)(γ-2)/γ+1+4(γ-2)γ z ≤ -2(γ-1)(γ-2)/γ+1+4γ(γ-1)(γ-2)/(2γ-1)(γ+1)= 2(γ-1)(γ-2)/γ+1(2γ/2γ-1-1)≤ 0. Therefore,(<ref>) holds for m=2, γ∈ (1,2], z∈[z_m,z_M] and V∈[V_1,V_6), thereby completing the proof. Next, we establish a uniform upper barrier for the forward solution trajectory of (<ref>) with the initial value P_1 for a particular range of γ and z∈[z_m(γ),z_M(γ)] to demonstrate there is no connection from P_1 to P_8. By (<ref>) and (<ref>),k(γ, z_M) = -(1+V_6(z_M))^2/V_6(z_M) = γ/2+√(2γ). We defineto be the value such thatC_1()=√(-k(, z_M)V_1()). For any γ∈ (1,] and z∈[z_m,z_M], the curve B_k_M(V)=√(-k(γ,z_M)V) is an upper barrier for the solution of dCdV = F(V,C;γ,z)G(V,C;γ,z), V∈[V_1,V_6),C(V_1) = C_1.We begin by verifying that P_1=(V_1,C_1) lies on or below the curve defined by B_k_M(V). Note that C_1 - √(-k(γ, z_M)V_1) =√(2γ(γ-1))/γ+1-√(√(2)γ/(√(γ)+√(2))(γ+1)) =2γ(γ-1)/γ+1-√(2)γ/√(γ)+√(2)√(2γ(γ-1))+√(√(2)γ(γ+1)/√(γ)+√(2)) . Since d/dγ[2γ(γ-1)/γ+1-√(2)γ/√(γ)+√(2)]= 2(γ^2+2γ-1)/(γ+1)^2-√(2γ)+4/2(√(γ)+√(2))^2=-4/(γ+1)^2-1/√(2)(√(γ)+√(2))-1/(√(γ)+√(2))^2+2>0 because -4/(γ+1)^2>-1, -1/√(2)(√(γ)+√(2))>-1/3, and -1/(√(γ)+√(2))^2>-1/2 for any γ∈(1,], and sinceC_1()=√(-k(, z_M)V_1) by the definition of , we deduce that C_1 -√(-k(γ,z_M)V_1)≤ 0 when γ≤ with equality only when γ=. Hence, P_1 is located below the curve B_k_M(V) for γ<, and P_1 lies on the curveB_k_M(V) when γ=. We will now employ a barrier argument (cf.(<ref>)) to establish that the curve B_k_M(V) serves as an upper barrier for the solution of (<ref>). Specifically, we will show that for all γ∈ (1,], z∈[z_m,z_M], and V∈[V_1,V_6), F(V,√(-k(γ,z_M)V);γ,z)G(V,√(-k(γ,z_M)V);γ,z)+1/2√(k(γ,z_M)-V)<0. By Lemma <ref>, G(V,√(-k(γ,z_M)V);γ,z)<0 for any γ∈ (1,], z∈[z_m,z_M] and V∈[V_1,V_6). Hence, using the same procedure as outlined in Lemma <ref> and recalling (<ref>), it is enough to show that _k(z_M)(V,z,m) :=(m-1-mγ)V^2+[-2-m(γ-1)+m(γ-2)γ z+(m-1)k(γ,z_M)]V+2mk(γ,z_M)z1+V-1-mγ z>0. As-1<V_6<0 for any z∈[z_m,z_M],by Lemma <ref>, we have ∂ k(γ,z)/∂ z = (1/V_6^2-1)∂ V_6/∂ z>0. Thus, we have k(γ,z_M)≥ k(γ,z)for any z∈[z_m,z_M]. When m=1, recalling (<ref>) and using (<ref>) and (<ref>), we deduce that _k(z_M)(V,z,1) ≥_k(V,z,1) >0 for any z∈[z_m,z_M] and V∈[V_1,V_6).When m=2, by (<ref>), we have -1<V_6≤-√(2)/√(γ)+√(2)≤ -1/2 for any γ∈(1,2] and z∈[z_m,z_M]. Thus, k(γ,z_M)V + 4zk(γ,z_M)/1+V = k(γ,z_M)/1+V(V^2+V+4z) > k(γ,z_M)/1+V(V_6^2+V_6+4z) for any V∈[V_1,V_6). By direct computations, we obtain V_6^2+V_6+4z =2(γ-2)^2z^2+2(2-γ)zw+2(6-γ)z/4>0. 
Therefore, by (<ref>) and and (<ref>), we obtain _k(z_M)(V,z,2)≥_k(V,z,2)> 0, thereby completing the proof. For any γ∈(1,], the analytic solution to (<ref>) which connects P_1 to either P_6 or P_8, guaranteed by Theorem <ref> with the initial condition C(V_1)=C_1, can only connect to P_6. When z=z_M, we have P_6 = P_8, thus obviating the need for further discussion.If z∈(0,z_M), by Theorem <ref>, it is equivalent to demonstrating that the solution trajectory can not connect to P_8. We will discuss z∈(0,z_m] and z∈[z_m,z_M) separately.Let z∈(0,z_m] be given. We observe that when C_8(z)≥ C_1|_=2, the solution trajectory cannot connect to P_8, since the solution of (<ref>) with C(V_1)=C_1 is decreasing byLemma <ref>. We further note that z = 2/3(γ+4) leads to C_8(z) = C_1(2). By Lemma <ref> and Lemma <ref>,C_8(z)≥ C_1|_=2>C_1(γ) for any γ∈(1,] and z≤2/3(γ+4). On the other hand, it is easy to check z_m<2/3(γ+4): z_m-2/3(γ+4) = γ-1/(2γ-1)(γ+1)-2/3(γ+4) = (2-γ)(γ-5)/(γ-1/2)(γ+1)(γ+4) <0,and hence, the conclusion follows for z∈(0,z_m].When z∈[z_m,z_M), we have C(V_6;γ, z)<√(-k(γ,z_M) V_6(z)) by Lemma <ref>.Therefore, in order to show that this solution can not connect to P_8, it is sufficient to show that √(-k(γ,z_M)V_6(z)) < C_8(z), equvalently -k(γ,z_M)V_6(z)-C_8^2(z)<0. Since -k(γ,z_M)V_6(z_M)-C_8^2(z_M)=C_6(z_M)^2-C_8(z_M)^2=0 by (<ref>) and (<ref>), the proof will be complete upon showing that -k(γ,z_M)V_6(z)-C_8^2(z) is monotone increasing in z. Now, differentiating with respect to z (for any fixed γ∈ (1,)), and recalling dC_8/dz<0, d/dz(-k(γ,z_M)V_6(z)-C_8^2(z)) = -√(γ/2)C_8(z_M)dV_6/dz-2C_8(z)dC_8/dz >C_8(z_M)(-√(γ/2)dV_6/dz-2dC_8/dz). The inner bracket is -√(γ/2)dV_6/dz-2dC_8/dz = -√(γ/2)(γ-2/2+1/2(γ+2)-(γ-2)^2z/w)-(γ-2-(γ+2)-(γ-2)^2z/w)= (2-γ)(1/2√(γ/2)+1)+(γ+2)-(γ-2)^2z/w(1-1/2√(γ/2))>0 where we have used z<z_M=1/γ+2+2√(2γ)<1/4 and <2 to conclude the positivity. §.§ No connection to P_6 for γ∈ [2,3]In this subsection, we shall employ another barrier function B_s(V)to demonstrate that for γ∈[2,3], the solution trajectory originating at P_1 and propagatedby (<ref>) can only establish a connection with P_8. We defineB_s(V) = -√(γ/2)V.From(<ref>), we observe thatC_8(z_M)/V_8(z_M) = -√(γ/2).First we will show that the solution trajectory of (<ref>) starting from P_1remains above the curve B_s(V) forV∈[V_1,-√(2/γ)C_8(z)). For any γ∈ [2,3] and z∈ (0,z_M], the curve B_s(V)=-√(γ/2)V is a lower barrier for the solution of dCdV = F(V,C;γ,z)G(V,C;γ,z), V∈[V_1,-√(2/γ)C_8(z)),C(V_1) = C_1. To show that B_s(V) is a lower barrier of the solution of (<ref>), we first verify that the initial point P_1=(V_1,C_1) lies on or above the curve B_s(V). This follows from C_1+√(γ/2)V_1 = √(2γ(γ-1))/γ+1-√(γ/2)2/γ+1 = √(2γ)/γ+1(√(γ-1)-1)≥ 0 for any γ∈[2,3] where the equality holds when γ=2. Next, weemploy a barrier argument (cf.(<ref>)) to show that B_s(V) is a lower barrier for the solution trajectory of (<ref>). Specifically, we aim to prove that for any γ∈ [2,3], z∈(0,z_M], and V∈[V_1,-√(2/γ)C_8(z)), F(V,-√(γ/2)V;γ,z)G(V,-√(γ/2)V;γ,z)+√(γ/2) >0. By Lemma <ref>, G(V,-√(γ/2)V;γ,z)<0 for any γ∈ [2,3], z∈(0,z_M] and V∈[V_1,-√(2/γ)C_8(z)), and so it is sufficient to prove that F(V,-√(γ/2)V;γ,z)+√(γ/2) G(V,-√(γ/2)V;γ,z)<0. By (<ref>) and (<ref>), we have F(V,-√(γ/2)V;γ,z)+√(γ/2)G(V,-√(γ/2)V;γ,z) = -m√(γ/2)V^2[(-γ+1/2)V+zγ V/2(1+V)+(γ z-1)(γ-1)/2-zγ]. Since -m√(γ/2)V^2<0, it is sufficient to show that (V,z):=(-γ+1/2)V+zγ V/2(1+V)+(γ z-1)(γ-1)/2-zγ>0 for any γ∈ [2,3], z∈(0,z_M] and V∈[V_1,-√(2/γ)C_8(z)). 
By Lemma <ref>, C_8(z) > C_8(z_M) for any z∈(0,z_M). Thus, -√(2/γ)C_8(z) < -√(2/γ)C_8(z_M) = V_8(z_M). Therefore, if we can establish the validity of (<ref>) for all V∈[V_1,V_8(z_M)), it trivially holds for all V∈[V_1,-√(2/γ)C_8(z)). Notice that for any z∈(0,z_M] and V∈[V_1,V_8(z_M)), we have ∂(V,z)/∂ V = -γ+1/2+zγ/2(1+V)^2≤ -γ+1/2+γ z_M/2(1+V_1)^2 =((γ+1)^2/2(γ-1)^2z_M-1)γ+1/2. Given that z_M(γ) = 1/γ+2+2√(2γ) and γ+1/γ-1 = 1+2/γ-1 are both positive and monotonically decreasing functions in γ, it follows that (γ+1)^2/(γ-1)^2z_M(γ)-1 is also monotone decreasingin γ. Hence for all γ∈[2,3], we have (γ+1)^2/2(γ-1)^2z_M-1 ≤ -7/16 and ((γ+1)^2/2(γ-1)^2z_M-1)γ+1/2≤ -7/8+1/2 = -3/8<0, which implies that for any γ∈[2,3], z∈(0,z_M] and V∈[V_1,V_8(z_M)), (V,z) > (V_8(z_M),z). To finish the proof of (<ref>), it is now sufficient to show that (V_8(z_M),z)≥ 0. By (<ref>), V_8(z_M) is independent of z so that ∂(V_8(z_M),z)/∂ z = γ V_8(z_M) /2(1+V_8(z_M))+γ(γ-3)/2 <0. Hence, we obtain (V_8(z_M),z) ≥(V_8(z_M),z_M), where the equality holds when z=z_M. By (<ref>) and Lemma <ref>, (V_8(z_M),γ,z_M) = F(V_8(z_M),C_8(z_M),γ,z_M)+√(γ/2)G(V_8(z_M),C_8(z_M),γ,z_M)-m√(γ/2)V_8(z_M)^2 =0. In conclusion, we have shown that for γ∈[2,3], z∈(0,z_M] and V∈[V_1,V_8(z_M)), (V,z) > 0, thereby completing the proof. For any γ∈[2,3], the analytic solution to (<ref>) connecting P_1 to either P_6 or P_8, guaranteed by Theorem <ref>, can only connect to P_8. When z=z_M, the points P_6 and P_8 coincide, rendering any further discussion unnecessary. For z∈(0,z_M), by Theorem <ref>, it is equivalent to showing that the solution trajectory cannot connect to P_6. By (<ref>) and Lemma <ref>, for any γ∈[2,3] and z∈(0,z_M), it holds C_6(z)< C_8(z_M). Moreover, from (<ref>) and (<ref>), V_6(z)+√(2/γ)C_8(z) =√(2/γ)-1+(√(2/γ)+1)(γ-2)z+(√(2/γ)-1)w/2<√(2/γ)-1+(√(2/γ)+1)(γ-2)z_M/2=0 since z_M = 1/(√(γ)+√(2))^2, which shows that (V_1,V_6(z)) ⊂ (V_1,-√(2/γ)C_8(z)). Thus, P_6always lies below the curve {B_s(V) | V∈ (V_1,-√(2/γ)C_8(z))} for z∈(0,z_M). On the other hand, by Lemma <ref>, the solution trajectory of (<ref>) is always above B_s(V) on (V_1,-√(2/γ)C_8(z)). Therefore, we conclude that the solution cannot connect to P_6. § SOLVING TO LEFT: P_6In this section, we will refine our analysis and the existence result around P_6for γ∈(1,2] by deriving an appropriate upper bound for the backwards solution of (<ref>) starting from P_6 to determine a more precise sonic window for z_std(γ;P_6). In addition, we will prove that, for ∈(1,2], the value z_std(γ;P_6) is unique when it exists.§.§ Existence for γ∈ (1,2]Recall from Section <ref> that, for ∈(1,2], z_g is defined to be the value of z such that V_4(z_g)=V_6(z_g). In this subsection, we rigorously demonstrate that for γ∈ (1,2] and z ∈ [z_m,z_g], the analytic solution to (<ref>) backwards from P_6guaranteed by Theorem <ref> and defined on the domain [V_1,V_6] by Lemma <ref> is indeeda lower solution for P_6, which yields an improvement of the range of z_std(γ;P_6) to [z_g, z_M]. We remark thatz_m<z_gfor any γ∈(1,2] and m=1,2. A proof of this simple fact may be found inAppendix <ref>.In what follows, recalling the definitions (<ref>)–(<ref>),we use the notationG(V,C)= C^2g_1(V)-g_2(V), F(V,C)= C(C^2f_1(V)-f_2(V))where g_1(V)= (m+1)V+2mz, g_2(V)=V(1+V)(mγ z+1+V), f_1(V)= 1+mz/(1+V), f_2(V) = a_1(1+V)^2-a_2(1+V)+a_3.We rewrite dC/dV = F(V,C)/G(V,C) as dlogCdV=1/CdC/dV= C^2f_1(V)-f_2(V)C^2g_1(V)-g_2(V). 
For ∈(1,2], z_g=z_g(γ)is defined to be the value such thatV_4(γ,z_g)=V_6(γ,z_g).In fact, there exists _g∈(2,3) such that z_g defined in this way is well-defined for γ∈ (1,γ_g], while for ∈(_g,3], V_4 meets V_8 at z_g (defined in an equivalent manner). A detailed discussion ofγ_g and z_g is given in<cit.>. However, for our analysis, we require an understanding of z_g only in the range ∈(1,2]. The value z_g admits an explicit representation as z_g =√(γ^2+(γ-1)^2)-γγ(γ-1)whenm=1, √((2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3])-(2γ^2-γ+1)γ[4γ(γ-1)+8/3]whenm=2.We claim that for any γ∈(1,2], z∈[z_m,z_g] gives a lower solution for P_6. Recalling the definition of a lower solution, (<ref>),it is enough to show that log C(V_1;γ,z,P_6):=-∫_V_1^V_6(z)dlog C/dV dV + log C_6(z) <log C_1.Solving this inequality directly is not a trivial task, since the integral is implicit as the integrand involves not only V but also C (cf. (<ref>)). To simplify our approach and avoid the complications associated with this implicit integral, we will derivean explicit lower bound for dlog C/dV for any γ∈(1,2], z∈[z_m,z_g] and V∈[V_1,V_6(z)). For any γ∈(1,2] and z∈[z_m,z_g], the solution obtained from Theorem <ref> and Lemma <ref> satisfies -∫_V_1^V_6(z)dlog C/dV dV <-∫_V_1^V_6(z)f_1(V)/g_1(V) dV. By direct computations, we have dlogCdV - f_1(V)/g_1(V) = C^2f_1(V)-f_2(V)C^2g_1(V)-g_2(V)-f_1(V)/g_1(V) =-g_1(V)f_2(V)-f_1(V)g_2(V)/[C^2g_1(V)-g_2(V)]g_1(V). We will show this function is positive for any γ∈(1,2], z∈[z_m,z_g], and V∈[V_1,V_6(z)). By (<ref>) and the fact that for ∈(1,2], z<z_M<1/5, we have g_1(V)<0 for V∈[V_1,V_6). On the other hand, by Lemma <ref>, G(V,C)=C^2g_1(V)-g_2(V)<0 for any V∈[V_1,V_6). Therefore, it is sufficient to show that q(V):=g_1(V)f_2(V)-f_1(V)g_2(V) < 0. Note that q(V) is a cubic polynomial in V. Also, F(V,C)=G(V,C)=0 at P_4, P_6 and P_8 which implies C_k^2g_1(V_k)=g_2(V_k) and C_k^2f_1(V_k)=f_2(V_k) for k=4,6,8. Consequently, V_4, V_6 and V_8 are three roots of g_1f_2-g_2f_1=0. Thus, q(V) =m[(m + 1)(γ- 1) + 2]/2 (V-V_4)(V-V_6)(V-V_8). According to (<ref>), we have V_6≤ V_8 with the equality when z=z_M. Therefore, the sign of q(V) depends on the location of V_4. If we can show that V_4≥ V_6 for z∈[z_m,z_g],then q(V)<0 for any V∈[V_1,V_6). We claim that V_4(z)≥ V_6(z) for z∈[z_m,z_g] where the equality holds when z=z_g. By using (<ref>), we have dV_4(z)/dz = d/dz( -2mγ z-2/(m+1)γ+1-m)= -2mγ/(m+1)γ+1-m <0, which implies V_4 is a decreasing function in z. By Lemma <ref>, V_6(z) is an increasing function in z. From the definition of z_g (<ref>), V_4(z_g)=V_6(z_g) for any γ∈(1,2]. We have shown(<ref>), which leads to dlogCdV - f_1(V)/g_1(V) >0. This completes the proof of (<ref>). Motivated by Lemma <ref>, we define δ(V_1;z) := -∫_V_1^V_6(z)f_1(V;z)/g_1(V;z)dV + log C_6(z)where we have used the notations f_1(V;z) and g_1(V;z) for f_1(V) and g_1(V) to emphasize the dependence of f_1 and g_1 on z. By Lemma <ref>, we have for any γ∈(1,2] and z∈(z_m,z_g],log C(V_1;γ,z,P_6) < δ(V_1;z).Our next step is to show that for any γ∈(1,2], z=z_g gives a lower solution for P_6. For any γ∈ (1,2] and z=z_g, δ(V_1;z_g)<log C_1. 
We first evaluate δ(V_1;z) in (<ref>) by using (<ref>) and (<ref>)to calculate the integral explicitly asδ(V_1;z) = (m^2-m)z+(m+1)(2mz-m-1)(m+1)log(m+1)V_6+2mz/(m+1)V_1+2mz+mz2mz-m-1log1+V_1/1+V_6+log (1+V_6).By (<ref>) and (<ref>), 1+V_1= γ-1/γ+1 and log C_1 = 1/2log (1+V_1) + 1/2log2γ/γ+1.To study the remainder of the expression for δ(V_1;z), we treat m=1 and m=2 separately.When m=1, by (<ref>) and (<ref>),V_6(z_g) =V_4(z_g)= -γ z_g+1/γ = -z_g-1/γ, z_g= √(γ^2+(γ-1)^2)-γ/γ(γ-1) = γ-1/γ(√(γ^2+(γ-1)^2)+γ)<1/2.Together with (<ref>) and (<ref>), we then haveδ(V_1;z_g)-log C_1=1/2log(1+V_4)(γ+1)/2γ+1/2(z_g-1)log[(V_4+z_g)(1+V_1)/(V_1+z_g)(1+V_4)] =: 1/2(I+II).For I, we have 1+V_4<1 and γ+1/2γ<1 for any γ∈ (1,2]. Thus, I<0. From (<ref>), 1/z_g-1>-2. Moreover,I+II< log(1+V_4)(γ+1)/2γ-2log[(V_4+z_g)(1+V_1)/(V_1+z_g)(1+V_4)]=log [(γ+1)(γ-1)/2γ^2(2-z_g(γ+1))^2(1-1/(√(γ^2+(γ-1)^2)+γ))^3]<log(γ+1)(γ-1)/γ^2+log2-z_g(γ+1)/2+log[(2-z_g(γ+1))(1-1/√(5)+2)^3]<log2(√(5)+1)^3/(√(5)+2)^3<0where we have used that 2-z_g(γ+1)<2 and 1(√(γ^2+(γ-1)^2)+γ) is a decreasing function. This concludes the proof in the case m=1. When m=2, for any γ∈(1,2], by (<ref>), (<ref>), (<ref>) and (<ref>),V_4 = -4γ z_g+2/3γ-1, z_g=2(γ-1)/√((2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3])+(2γ^2-γ+1).We first claim that z_g<1/8. By direct computations, for any γ∈(1,2],(16(γ-1)-(2γ^2-γ+1))^2-(2γ^2-γ+1)^2 -2γ(γ-1)[4γ(γ-1)+8/3] = 8(γ-1)(-γ^3-7γ^2+106/3γ-36)<0where the cubic polynomial is negative for any γ∈(1,2] as shown in Proposition <ref>. This then implies 16(γ-1)-(2γ^2-γ+1) < √((2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3]), and hence 2(γ-1)/√((2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3])+(2γ^2-γ+1) < 1/8,that is,z_g<1/8, as claimed. By(<ref>), (<ref>), (<ref>) and (<ref>), we haveδ(V_1;z_g)-log C_1 =2z_g+3/2(12z_g-9)log [(3V_4+4z_g)^2/(3V_1+4z_g)^22γ/γ-1]+6z_g-9/4(12z_g-9)log(1+V_4)^4/(1+V_1)^4 +7z_g-3/12z_g-9logγ-1/2γ<2z_g+3/2(12z_g-9)log [(3V_4+4z_g)^2/(3V_1+4z_g)^22γ/γ-1]+6z_g-9/4(12z_g-9)log(γ-1)(1+V_4)^4/2γ(1+V_1)^4where we have used (<ref>) in the inequality. Now, weshow that both terms are negative. We compute(3V_4+4z_g)^2/(3V_1+4z_g)^22γ/γ-1 =(-12γ z_g-6/3γ-1+4z_g-6/γ+1+4z_g)^22γ/γ-1=(-6-4z_g)^2/(-6+4z_g(γ+1))^22γ(γ+1)^2/(3γ-1)^2(γ-1).Notethat 2γ(γ+1)^2/(3γ-1)^2(γ-1)>1 for γ∈(1,2]. Also, z_g>0 implies(-6-4z_g)^2(γ+1)^2/(-6+4z_g(γ+1))^2>1.Hence, 2z_g+3/2(12z_g-9)log [(3V_4+4z_g)^2/(3V_1+4z_g)^22γ/γ-1]<0because 12z_g-9<0. As for the second term, we first note that (γ-1)(1+V_4)^4/2γ(1+V_1)^4 = (γ+1)^4/2γ(3(γ-1)-4γ z_g)^4(γ-1)^3(3γ-1)^4=81(γ+1)^4/(3γ-1)^4γ-1/2γ(1-8/3(√((2γ^2-γ+1/γ)^2 +8[(γ-1)^2+2(γ-1)/3γ])+2γ^2-γ+1/γ)^4. Moreover, for any γ∈(1,2], (2γ^2-γ+1/γ )'= 2-1/γ^2>0and ((γ-1)^2+2(γ-1)/3γ)' = 2/3γ^2+2γ-2>0,which implies (1-8/3(√((2γ^2-γ+1/γ)^2 +8[(γ-1)^2+2(γ-1)/3γ])+2γ^2-γ+1/γ)^4< (1-8/3(√((7/2)^2 +8[1+1/3])+7/2)^4 < 1/4.Therefore, for any γ∈ (1,2], we deduce that(γ-1)(1+V_4)^4/2γ(1+V_1)^4 <81(γ-1)(γ+1)^4/8γ(3γ-1)^4<1where the last inequality is shown in Proposition <ref>. We then have6z_g-9/4(12z_g-9)log(γ-1)(1+V_4)^4/2γ(1+V_1)^4 <0,thereby completing the proof. For any γ∈(1,2], z∈[z_m,z_g] gives a lower solution for P_6. We first show δ(V_1;z) is an increasing function. From (<ref>) and (<ref>) d δ(V_1;z)/d z =-∫_V_1^V_6(z)-2m+m(m-1)V/(1+V)((m+1)V+2mz)^2 dV -dV_6(z)/dzf_1(V_6(z);z)/g_1(V_6(z);z) +1/C_6(z)dC_6(z)/dz>0 where we have used dV_6(z)/dz=dC_6(z)/dz>0 by Lemma <ref> and f_1(V_6(z);z)/g_1(V_6(z);z)<0 by (<ref>). ByLemma <ref> and Lemma <ref>, we then have log C(V_1;γ,z,P_6) <δ(V_1;z) ≤δ(V_1;z_g) < log C_1 for any γ∈(1,2] and z∈[z_m,z_g]. 
This finishes the proof. §.§ Uniqueness of z_std for P_6 when γ∈ (1,2] Recall z_std(γ;P_6) is the value of z such that the solution C(V;γ,z_std(γ;P_6),P_6 ) (cf. (<ref>)) satisfies d Cd V = F(V,C;γ,z_std)G(V,C;γ,z_std), C(V_1) = C_1, C(V_6) = C_6,d C/d V(V_6,C_6)=c_1. By Proposition <ref>, we know that for γ∈ [2,3] the solution can only connect to P_8 and therefore, we focus onγ∈(1,2] for further analysis of z_std(γ;P_6).Within this range of γ, we demonstratethe uniqueness of z_std for P_6.This is achieved by showingthat for any fixed γ∈(1,2], the solution trajectories C(V;γ,z,P_6) of (<ref>) starting from P_6do not intersect for different values of z∈ [z_g,z_M]. In particular, at most one such trajectory can connect to P_1. For any γ∈(1,2] fixed, the solution trajectoriesC(V;γ,z,P_6) do not intersect for different z∈[z_g,z_M] in the interval [V_1,V_6). We argue by contradiction.For any fixed γ∈(1,2], we write C'(V,C,z)= dC(V,C,z)/dV. We suppose that there exist z_s,z_t∈[z_g,z_M] and z_s<z_t such thatC(V;γ,z_s,P_6) andC(V;γ,z_t,P_6) intersect at a point (V_0,C_0) where V_1≤ V_0<V_6(z_s). By the continuity of the solution curves with respect to both V and z (see Remark <ref>), we may assume without loss of generality that (V_0, C_0) is the first such intersection point to the left of P_6(z_s) and P_6(z_t). In particular,there are no other intersection points within the triangular region enclosed by the curves {(V,C(V;γ,z_s,P_6)) |V∈[V_0,V_6(z_s)]}, {(V,C(V;γ,z_t,P_6)) |V∈[V_0,V_6(z_t)]} and {(V_6(z),C_6(z)) |z∈[z_s,z_t]}. Then, we have C(V_0,z) = C_0 for all z∈[z_s,z_t] so that for all z∈(z_s,z_t), ∂ C/∂ z(V_0,C_0,z)=0 and C'(V_0,C_0,z_s) ≤ C'(V_0,C_0,z_t). By the Mean Value Theorem, there exists a z̃∈(z_s,z_t) such that ∂ C'/∂ z (V_0, C_0,z̃)≥ 0. We will show ∂ C'/∂ z(C_0,V_0,z)< 0 for all z∈(z_s,z_t) to reach the contradiction. By direct computations from the explicit forms of F and G from (<ref>)–(<ref>) and using(<ref>), we have, for anyz∈(z_s,z_t) ∂ C'/∂ z(C_0,V_0,z) =C_0(mC_0^2/1+V_0+mγ(γ-3)/2(1+V_0)-mγ(γ-1)/2)G(V_0,C_0,z)-(2mC_0^2-mγ V_0(1+V_0))F(V_0,C_0,z)G^2(V_0,C_0,z). Since z_s,z_t∈[z_g,z_M], it is sufficient to show that ∂ C'/∂ z(C_0,V_0,z) <0 for any z∈(z_g,z_M). Substituting (<ref>) and (<ref>) into the above formula and simplifying the expression, we arrive at ∂ C'/∂ z(C_0,V_0,z) = mC_0(C_0^2-(1+V_0)^2)[((m-1)V_0-2)C_0^2+(m+1)γ-1/2γ V_0^2 (1+V_0)] G^2(V_0,C_0,z)(1+V_0). Notice that, for any V_0∈[V_1,V_6) and z∈ (z_g,z_M) C_0>0 , C_0^2-(1+V_0)^2>0, G^2(V_0,C_0,z)(1+V_0)>0. Thus in order to show ∂ C'/∂ z(C_0,V_0,z)<0, it is enough to show h(V_0,C_0,z):= ((m-1)V_0-2)C_0^2+(m+1)γ-1/2γ V_0^2 (1+V_0)<0. By Lemma <ref>, C=√((1+V_6)^2/V_6V) is a lower barrier of C(V;γ, z, P_6) with z∈ (z_g,z_M). Hence, h(V_0,C_0,z)≤(1+V_6)^2/V_6((m-1)V_0-2)V_0+(m+1)γ-1/2γ V_0^2 (1+V_0)=V_0[(1+V_6)^2/V_6((m-1)V_6-2) +(m+1)γ-1/2γ V_0 (1+V_0) ]< V_0(1+V_6)/V_6[(1+V_6)((m-1)V_6-2) +(m+1)γ-1/2γ V_6^2 ] because V_0<V_6< V_6(z_M|_γ=2)=-1/2 by Lemma <ref> and m=1, 2. Denote q(V_6) :=(1+V_6)((m-1)V_6-2) +(m+1)γ-1/2γ V_6^2 . Our goal is to show q(V_6)<0. Again, by Lemma <ref>, we have -1/2=V_6(z_M|_γ=2)> V_6> V_6(z_g) = V_4(z_g) for any z∈ (z_g,z_M). Thus, if q(V_4(z_g))≤ 0 and q(-1/2)≤ 0, then q(V_6)<0 for all z∈ (z_g,z_M) since the coefficient of V_6^2 is positive. We first note that q(-1/2)=-1-m-1/4+(m+1)(γ-1)γ/8 <0 for any γ∈(1,2] and m=1,2. For q(V_4(z_g)), we claim that V_4(z_g) is a negative zero of q. 
To this end, by using V_4(z_g)= V_6(z_g) and C_4(z_g)= C_6(z_g), we rewrite q(V_4(z_g)) as q(V_4(z_g)) =1/1+V_4(z_g)[ (1+V_6(z_g))^2 ((m-1) V_4(z_g) -2) + (m+1)γ-12γ (1+V_4(z_g)) V_4(z_g)^2] =1/1+V_4(z_g)[ C_4(z_g)^2( (m-1) V_4(z_g) -2) + (m+1)γ-12γ (1+V_4(z_g)) V_4(z_g)^2] . Using (<ref>), we replace C_4(z_g) by H( V_4(z_g)) to obtain q(V_4(z_g))=V_4(z_g)/ (m+1) V_4(z_g) + 2mzq̃ (V_4(z_g)), where q̃ (V_4(z_g))= (V_4(z_g)+1+mγ z_g) ( (m-1) V_4(z_g) -2) + (m+1)γ-12γ V_4(z_g) ((m+1) V_4(z_g) + 2mz_g) = 2γ (γ-1)V_4(z_g)^2 +2 (γ(γ-1)z_g -1) V_4(z_g) - 2(1+γ z_g), m=1, (92γ(γ-1)+1) V_4(z_g)^2 + (2γ(3γ-2) z_g -1) V_4(z_g) - 2(1+2γ z_g), m=2. It is routine to check that q̃ has two roots -z_g -1/γ, 1/γ-1 when m=1 and - 4γ z_g+2/3γ-1 and 2/3γ-2 when m=2. Therefore by (<ref>) and (<ref>), q̃ (V_4(z_g))=0 and q (V_4(z_g))=0.This finishes the proof of q(V_6)<0 and∂ C'/∂ z(C_0,V_0,z)<0 for all z∈ (z_g, z_M), which contradicts our assumption.Therefore we conclude that C(V;γ,z_s,P_6) can not intersect C(V;γ,z_t,P_6) if z_s ≠ z_t. § SOLVING TO LEFT: P_8In this section, we again employ suitable barrier functions to delineate a more precise range for z in which z_std(γ;P_8) resides. By Proposition <ref>, we may focus on γ∈ (,3] for further analysis of z_std(γ;P_8). §.§ Conditional existence for γ∈(, ]As described in Section <ref>, we defineto be the value such that C_1 = √(-V_1). A simple calculation then establishes that= 1+√(2). By Lemma <ref>, P_1 is below B_1(V)=√(-V) when γ≤. We will show that the solution trajectory, originating at P_1 and propagated by (<ref>), remains below C=B_1(V) within a specific range of V, the exact bounds of which will be established subsequently, when γ≤. For each , we define z_1 to be the value such that C_8(z_1) = √(-V_8(z_1)). Then z_1 = √(5)-1/2(1+√(5)+γ).Since V_8(z_1)=C_8(z_1)-1 and 1-C_8^2(z_1)=C_8(z_1), V_8(z_1) = √(5)-3/2, C_8(z_1) = √(5)-1/2.For any fixed γ∈(1,3], we write C_8=C_8(z), C'_8(z) = d C_8(z)/dz and C”_8(z) = d^2 C_8(z)/dz^2. We first show the concavity of C_8^2 with respect to z, which will be crucial for subsequent arguments. For any fixed γ∈(1,3], and any z∈(0,z_M), C_8C”_8+(C'_8)^2<0. By direct computations, we have C”_8 = -4γ/w^3<0. For any γ∈ (1,3] and z∈(0,z_M), recalling (<ref>), we have 1+(γ-2)z+w = 2C_6+2w>2 w . Hence, 4C_8C_8”+4(C_8')^2= (1+(γ-2)z+w)-8γ/w^3 + 4(C_8')^2 ≤(2C'_8)^2-16γ/w^2 = (2C'_8-4√(γ)/w)(2C'_8+4√(γ)/w). By Lemma <ref>, C_8'<0 and thus 2C'_8-4√(γ)/w<0. We will show 2C'_8+4√(γ)/w>0. By using 0<w<1 (cf. Remark <ref>), we see that (2C'_8+4√(γ)/w)w= 4√(γ)-γ-2+(γ-2)w+(γ-2)^2 z>4√(γ)-γ-3= (3-√(γ))(√(γ)-1)>0 for any γ∈ (1,3], thereby completing the proof. For any γ∈ (,], z∈(0,z_1(γ)] gives a upper solution for P_8. In view of Lemma <ref>, we may determine the existence of z(γ;P_8) such that for all z≤z(γ;P_8), it holds that C_8(γ,z)≥ C_1(γ) and each z∈ (0, z(γ;P_8)] gives rise to an upper solution for P_8. However, the expression for such z(γ;P_8), derived from the equation C_8(γ, z(γ;P_8))=C_1(γ), is intricate and inconvenient. For that reason, weuse another value of z, which has a closed-form representation. Specifically, we define z̃_m(γ) to be the value satisfying C_8(γ,z̃_m(γ)) = C_1()=√(2-√(2)) so that z̃_m(γ)=-2^1/4√(1+√(2))+2^3/4√(1+√(2))/2√(2)+ 2^5/4√(1+√(2))+γ. It is easy to check that for any γ∈ (,], z̃_m(γ) ≥z̃_m()>2/25. By Lemma <ref> and Lemma <ref>, for any γ∈ (,] and z≤z̃_m(γ), we have C_8(γ,z̃_m(γ)) = C_1() ≥ C_1(γ) = C_8(γ,z(γ;P_8)). 
It then follows that z̃_m(γ)≤z(γ;P_8) and hence, each z∈(0,z̃_m(γ)] serves as an upper solution for P_8. To complete the proof, we must demonstrate that each z∈(z̃_m(γ),z_1(γ)] serves as an upper solution for P_8. To achieve this, we employ the barrier function B_1(V)=√(-V). It suffices to establish that,for any γ∈ (,] and z∈(z̃_m(γ),z_1(γ)], B_1(V)=√(-V) is a lower barrier for the solution C(V) of (<ref>) with P_*=P_8. By Lemma <ref>, d C/dV<0 to the left of the triple point P_8. Therefore,C(V)> C_8=B_1(-C_8^2)≥ B_1(V) forV∈[-C_8^2,V_8) andC(V_8)>B_1(V_8) for z∈(z̃_m(γ),z_1(γ)),C(V_8)=B_1(V_8) for z=z_1(γ),andd C/d V|_V=V_8(z_1),C=C_8(z_1),z=z_1<- 1/2√(-V_8(z_1)),whichguarantees the existence of V<V_8 sufficiently close to V_8so that also for z=z_1, the solution C(V) to (<ref>) enjoysC(V)>B_1(V) for V∈[V,V_8). The proof of(<ref>) is given in Lemma <ref>. Hence, by the barrier argument (<ref>), we want to show F(V,√(-V);γ,z)/G(V,√(-V);γ,z) + 1/2√(-V)<0. Since this inequality is nothing but (<ref>) with k(γ,z) replaced by 1, and G(V,√(-V);γ,z)< 0 for any V∈[V_1,-C_8^2(z)) by Lemma <ref>, our goal is to show the positivity of the following function (cf. (<ref>)): _1(V,z,m):= (m-1-mγ)V^2+(-3+2m-mγ+m(γ-2)γ z)V-mγ z-1+2mz/1+V for each γ∈(,], z∈(z̃_m(γ),z_1(γ)], V∈[V_1,-C_8^2(z)) and m=1,2. Our strategy is the following: * For m=1,2, we will show that it is enough to check the sign of (1-C^2_8(z))_1(-C^2_8(z),z,m). * For m=1,2, we will show that d^2/dz^2[(1-C^2_8(z))_1(-C^2_8(z),z,m)]<0 so that (1-C^2_8(z))_1(-C^2_8(z),z,m) is a concave function. Since _1(-C^2_8(z_1),z_1,m)=0 by the definition of z_1,it is sufficient to check the sign of (1-C^2_8(z̃_m(γ)))_1(-C^2_8(z̃_m(γ),z̃_m(γ),m)>0. For any fixed γ∈(,], we write C_8=C_8(z), C'_8(z) = d C_8(z)/dz and C”_8(z) = d^2 C_8(z)/dz^2. Step 1: When m=1, for each z∈(z̃_m(γ),z_1(γ)] and V∈[V_1,-C_8^2(z)), using -γ(2V+1) ≤ -γ(2V_1+1) and -2z/(1+V)^2 < -2 z, we derive ∂_1(V,z,1)/∂ V <-γ(2V_1+1)-1+[(γ-2)γ-2]z =-(γ-1)^2/γ+1+[(γ-2)γ-2]z<0 where we have used (γ-2)γ-2<0 for any γ∈(,]. Since _1(V,z,1) is a decreasing function in V and 1-C^2_8(γ,z)>0, it is sufficient to check the sign of (1-C^2_8(z))_1(-C^2_8(z),z,1). When m=2, we compute (1+V)_1(V,z,2) to obtain (1+V)_1(V,z,2) = (1-2γ)V^3+2(1-2γ+(γ-2)γ z)V^2+(-2γ+2(γ-3)γ z)V-2γ z-1+4z. We next show that ∂/∂ V[(1+V)_1(V,z,2)]<0. Note that ∂/∂ V[(1+V)_1(V,z,2)] = 3(1-2γ)V^2+4(1-2γ+(γ-2)γ z)V+(-2γ+2(γ-3)γ z), which is a quadratic polynomial of V. If the discriminant of the polynomial is negative for any γ∈ (,], z∈(z̃_m(γ),z_1(γ)], then d/dV[(1+V)_1(V,z)] will always be negative because 3(1-2γ)<0. The discriminant of the polynomial is given by Δ = 16(1-2γ+(γ-2)γ z)^2-12(1-2γ)(-2γ+2(γ-3)γ z)=: 8 p(z)where p(z):=2(γ-2)^2γ^2z^2+(1-2γ)γ(γ+1) z+(1-2γ)(2-γ). When γ=2, it is clear that p(z)<0. When γ≠ 2, p(z) is a quadratic polynomial in z. It has a local minimum at z = (2γ-1)(γ+1)/4(γ-2)^2γ > 1/4>z_1. Thus, by (<ref>), to verify the negativity of Δ, it is sufficient to check the negativity of p(2/25). This condition is checked in Proposition <ref>. Therefore, we have shown ∂/∂ V[(1+V)_1(V,z,2)] <0, and hence, to show that _1(V,z,2)>0, it is enough to check the sign of (1-C^2_8(z))_1(-C^2_8(z),z,2).Step 2: Our next goal is to show (1-C^2_8(z))_1(-C^2_8(z),z,m)> 0 for any z∈(z̃_m(γ),z_1(γ)], V∈[V_1,-C_8^2(z)), and m=1,2. We will first show (1-C^2_8(z))_1(-C^2_8(z),z,m) is a concave function in z. For notational convenience, we will write (z) := -C^2_8(z). 
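Concavity reduces Step 2 to the endpoints of the z-interval: a function concave in z lies above the chord joining its endpoint values, so strict positivity at z=z̃_m(γ), together with the vanishing at z=z_1 guaranteed by the definition of z_1, yields nonnegativity on the whole interval and strict positivity for z<z_1; combined with the strict monotonicity in V established in Step 1, this gives the required strict inequality for V∈[V_1,-C_8^2(z)).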
By using Lemma <ref>, (<ref>) and (<ref>), we obtain √(2)-2=(z̃_m(γ))< (z)≤(z_1) = -3-√(5)/2. Note that by using Lemma <ref> and Lemma <ref>, we obtain '(z)= -C_8C'_8(z)>0, ”(z) = -[C_8C”_8(z)+(C'_8)^2]>0. We rewrite (1-C^2_8(z))_1(-C^2_8(z),z,m) as (1+(z))_1((z),z,m) =(m-1-mγ)^3(z)+(3m-4-2mγ+m(γ-2)γ z)^2(z)+(2m-4-mγ+m(γ-3)γ z)(z)+m(2-γ) z-1, and compute the second z derivative to obtain d^2/dz^2[(1+(z))_1((z),z,m)] := ”(z) A(γ,z,m)+'(z) B(γ,z,m), where A(γ,z,m) =3(m-1-mγ)^2(z)+2[3m-4-2mγ+m(γ-2)γ z](z)+2m-4-mγ+m(γ-3)γ z, B(γ,z,m) =6(m-1-mγ)(z)'(z)+2[3m-4-2mγ+m(γ-2)γ z]'(z)+4m(γ-2)γ(z)+2m(γ-3)γ. We claim A(γ,z,m) and B(γ,z,m) are negative. We first check B(γ,z,m)<0. We decompose B(γ,z,m) into two parts B(γ,z,m) = '(z)B_1(γ,z,m) + B_2(γ,z,m), where B_1(γ,z,m)=6(m-1-mγ)(z)+6m-8-4mγ+2m(γ-2)γ z, B_2(γ,z,m) = 2mγ[ ( 2(γ-2) (z)+(γ-3)]. For B_1(γ,z,m), by using m-1-mγ<0, (<ref>), |(γ-2)γ|≤ 1, and z<z_M<1/5, we obtain B_1(γ,z,m)< 6(m-1-mγ)(z̃_m(γ))+6m-8-4mγ+2m/5= 4 - 6 √(2) - m(28-30 √(2)) /5 + m(8- 6 √(2) )γ <0 for any m=1,2 and γ∈(,]. For B_2(γ,z,m), when γ≥ 2, it is clear that B_2(γ,z,m)<0. When γ∈(,2), by using (<ref>), B_2(γ,z,m) < 2mγ (2(γ-2)(z̃_m(γ))+γ-3) = 2mγ[(2√(2)-3)γ+5-4√(2)]<0. Hence, we have shown that B_2(γ,z,m)<0 for any m=1,2, γ∈(,] and z∈(z̃_m(γ),z_1(γ)]. Regarding A(γ,z,m), we observe that d A(γ,z,m)/dz = '(z)B_1(γ,z,m)+1/2 B_2(γ,z,m) <0. Therefore, in order to show A(γ,z,m)<0, it is enough to verify A(γ,z̃_m(γ),m)<0. When m=1, by using (<ref>) and B_2(γ,z,1)<0, we have A(γ,z̃_m(γ),1)= -3γ^2(z̃_m(γ))+2(-1-2γ)(z̃_m(γ))-2-γ+z̃_m(γ)/2B_2(γ,z̃_m(γ),1)<(8√(2)-11)γ+2(1-√(2))<0. When m=2, by using (<ref>), (<ref>) and (2√(2)-3)γ+5-4√(2)<0, we have A(γ,z̃_m(γ),2)= 3(1-2γ)^2(z̃_m(γ))+2(2-4γ)(z̃_m(γ))-2γ+2γ[2(γ-2)(z̃_m(γ))+γ-3]z̃_m(γ)<(16√(2)-22)γ+10-8√(2)+4γ[(2√(2)-3)γ+5-4√(2)]/25=2/25[(4√(2)-6)γ^2+(192√(2)-265)γ+125-100√(2)]=:p(γ). Since p(γ) has a global maximum at γ = 192√(2)-265/12-8√(2)>3> and p()=484-346√(2)/25<0, p(γ)<0 for any γ∈(,]. We conclude that A(γ,z,m)<0. Hence, (1+(z))_1((z),z,m) is a concave function in z. It is then enough to check the sign for the function at two ends points of z. By definition of z_1 (cf. (<ref>)), (1+(z_1))_1((z_1),z_1,m)=0. At z= z̃_m(γ), since 1+(z̃_m(γ)= √(2) -1 >0, we only need to show _1((z̃_m(γ)),z̃_m(γ),m)>0. By direct computations,_1((z̃_m(γ)),z̃_m(γ),m)=m(3√(2)-4)γ+(1-√(2))(2m-1) +m[(√(2)-2)γ^2+(3-2√(2))γ+2√(2)+2]z̃_m(γ)>m(3√(2)-4)γ+(1-√(2))(2m-1) +2m/25[-(2-√(2))γ^2+(3-2√(2))γ+2√(2)+2]>0,where we have used (√(2)-2)γ^2+(3-2√(2))γ+2√(2)+2>0 for each γ∈(,] and (<ref>) in the second line, while the positive sign of the last inequality is shown in Proposition <ref> and Proposition <ref> for m=1 and m=2 respectively.§.§ Conditional existence for γ∈(, 3]In this subsection, we will employ the barrier function B_3/2(V)=√(-3/2V) to delineate a narrower and more precise range for the potential location of z_std(γ;P_8). Withthe choice of the barrier function, we define z_2(γ) to be the value such that P_8 lies on the curve C=B_3/2(V): z_2 = √(33)-3/6+2√(33)+4γ.Since V_8(z_2)=C_8(z_2)-1 and 1-2/3C_8^2(z_2)=C_8(z_2),V_8(z_2) = √(33)-7/4, C_8(z_2) = √(33)-3/4. For any γ∈ (,3], any z∈(0,z_2(γ)] gives an upper solution for P_8. Similar to our approach in Lemma <ref>, we define z(γ;P_8) asthe solution to C_8(γ, z(γ;P_8))=C_1(γ) and we further introduce ẑ_m(γ), which satisfies the equation: C_8(, ẑ_m())= C_1(3)= √(3)/2 so that ẑ_m(γ)=√(3)/12+8√(3)+2γ. It is easy to check that for any γ∈(,3], 1/8>z_M(2)>z_M(γ)>ẑ_m(γ) ≥ẑ_m(3)>1/20. 
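Indeed, ẑ_m(γ)=√(3)/(12+8√(3)+2γ) is decreasing in γ, with ẑ_m(3)=√(3)/(18+8√(3))≈0.0544>1/20, and z_M(γ)=1/(γ+2+2√(2γ)) is decreasing as well; moreover z_M(γ)≥ z_M(3)=1/(5+2√(6))≈0.101, which already exceeds the value ≈0.056 of ẑ_m at the left endpoint γ=1+√(2) of the range, so that z_M(γ)>ẑ_m(γ) throughout.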
By Lemma <ref> and Lemma <ref>, for any γ∈ (,3] and z≤ẑ_m(γ), we have C_8(γ,ẑ_m(γ)) = C_1(3) ≥ C_1(γ) = C_8(γ, z(γ;P_8)), and it follows that ẑ_m(γ)≤z(γ;P_8). Consequently, each z∈(0,ẑ_m(γ)] serves as an upper solution for P_8. To conclude the proof, we need to establish that each z∈(ẑ_m(γ),z_2(γ)] gives an upper solution for P_8. To this end, we will employ the barrier function B_3/2(V). Hence, it suffices to establish that,for any γ∈ (,3] and z∈(ẑ_m(γ),z_2(γ)], B_3/2(V) is a lower barrier for the solution C(V) of (<ref>) with P_*=P_8.By Lemma <ref>, d C/dV<0 to the left of the triple point P_8. Therefore, C(V)> B_3/2(V) for V∈[-2/3C_8^2,V_8] and z∈(ẑ_m(γ),z_2(γ)), while C(V_8)= B_3/2 (V_8) for z=z_2(γ) and it satisfiesd C/d V|_V=V_8(z_2),C=C_8(z_2),z=z_2<- 1/2√(-V_8(z_2)),so thatC(V)>B_3/2(V) for V∈[V,V_8) for some V<V_8. The proof of(<ref>) is given in Lemma <ref>. Now, by using the barrier argument (cf.(<ref>)), it is sufficient to show that F(V,√(-3/2V);γ,z)/G(V,√(-3/2V);γ,z)+1/2√(-3/2V)<0. We observe that this inequality is (<ref>) with k(γ,z) replaced by 3/2. As G(V,√(-3/2V);γ,z)< 0 for any V∈[V_1,-2/3C_8^2(z)) by Lemma <ref>, it suffices to show that for any γ∈(,3], z∈(ẑ_m(γ),z_2(γ)], V∈[V_1,-2/3C_8^2(z)), and m=1,2, the following function (cf. (<ref>)) is positive. _3/2(V,z,m):= (m-1-mγ)V^2+(-4+2m+m+1/2-mγ+m(γ-2)γ z)V-m γ z-1+3mz/1+V.The strategy is the following: * For m=1,2, we will show that it is enough to check the sign of (1-2/3C^2_8(z))_3/2(-2/3C^2_8(z),z,m). * For m=1,2, we will show that d^2/dz^2[(1-2/3C^2_8(z))_3/2(-2/3C^2_8(z),z,m)]<0 so that (1-2/3C^2_8(z))_3/2(-2/3C^2_8(z),z,m) is a concave function. Since _3/2(-2/3C^2_8(z_2),z_2,m)=0 by the definition of z_2, it is sufficient to check the sign of (1-2/3C^2_8(ẑ_m(γ)))_3/2(-2/3C^2_8(ẑ_m(γ)),γ,ẑ_m(γ),m)>0. Step 1: First of all, by Lemma <ref>, V_1(γ) ≥ V_1() = √(2)-2. When m=1, by using (<ref>) and 0<1+V<1, we have ∂_3/2(V,z,1)/∂ V < -2γ V_1-1-γ+[(γ-2)γ -3]z<(3-2√(2))γ-1+[(γ-2)γ -3]z<0. Therefore, to establish the positivity of _3/2(V,z,1), it suffices to show (1-2/3C^2_8(z)) _3/2(-2/3C^2_8(z),z,1)≥ 0. As for m=2, we compute (1+V)_3/2(V,z,2) = (1-2γ)V^3+(5/2-4γ+2(γ-2)γ z)V^2+(1/2-2γ+2(γ-3)γ z)V+2(3-γ) z-1.For any γ∈(,3], z∈(ẑ_m(γ),z_2(γ)] and V∈[V_1,-2/3C_8^2(z)), by using (<ref>), ∂^2/∂ V^2[(1+V)_3/2(V,z,2)]= 6(1-2γ)V+5-8γ+4(γ-2)γ z <12(2γ-1)/γ+1+5-8γ+(γ-2)γ/2=γ^3-17γ^2+40γ-14/2(γ+1)<0 where the negative sign of the cubic polynomial is shown in Proposition <ref>.Thus, (1+V)_3/2(V,z,2) is a concave function in V. It is then enough to check the signs of _3/2(V_1,z,2) and _3/2(-2/3C^2_8(z),z,2). We now compute _3/2(V_1,z,2), _3/2(V_1,z,2) =3γ (γ-3)/(γ+1)^2 + 6(γ^2(3-γ)+γ+1)/γ^2-1z> 3γ (γ-3)/(γ+1)^2 + 6(γ^2(3-γ)+γ+1)/20(γ^2-1)=3(-γ^4+12γ^3-36γ^2+32γ+1)/10(γ-1)(γ+1)^2>0 where we have used (<ref>) in the second line, while the positive sign of the last inequality is shown in Proposition <ref>. Thus, in order to show (1+V)_3/2(V,z,2)>0 for any γ∈(,3], z∈(ẑ_m(γ),z_2(γ)] and V∈[V_1,-2/3C_8^2(z)), it is sufficient to show that (1-2/3C^2_8(z))_3/2(-2/3C^2_8(z),z,2) ≥ 0. Step 2: Our goal is to show (1-2/3C^2_8(z))_3/2(-2/3C^2_8(z),γ,z,m)≥ 0 for anyγ∈(,3], z∈(ẑ_m(γ),z_2(γ)], V∈[V_1,-2/3C_8^2(z)), and m=1,2. We will first show (1-2/3C^2_8(z))_3/2(-2/3C^2_8(z),γ,z,m) is a concave function in z. For notational convenience, we will denote (z) := -2/3C^2_8(z). By using Lemma <ref>, (<ref>) and (<ref>), we obtain -1/2=(ẑ_m(γ))< (z)≤(z_2) = -7-√(33)/4. 
Note that by using Lemma <ref> and Lemma <ref>, we obtain '(z)= -4/3C_8C'_8(z)>0, ”(z) = -4/3[C_8C”_8(z)+(C'_8)^2]>0. We rewrite (1-2/3C^2_8(z))_3/2(-2/3C^2_8(z),γ,z,m) as (1+(z))_3/2((z),z,m) =(m-1-mγ)^3(z)+(3m+m+1/2-5-2mγ+m(γ-2)γ z)^2(z)+(2m+m+1/2-5-mγ+m(γ-3)γ z)(z)+m(3-γ) z-1, andcompute the second z derivative to obtain d^2/dz^2[(1+(z))_3/2((z),z,m)] := ”(z) A(γ,z,m)+'(z) B(γ,z,m), where A(γ,z,m) = 3(m-1-mγ)^2(z)+2[3m+m+1/2-5-2mγ+m(γ-2)γ z](z)+2m+m+1/2-5-mγ+m(γ-3)γ z, B(γ,z,m) = 6(m-1-mγ)(z)'(z)+2[3m+m+1/2-5-2mγ+m(γ-2)γ z]'(z)+4m(γ-2)γ(z)+2m(γ-3)γ. We claim A(γ,z,m) and B(γ,z,m) are negative. We will first show B(γ,z,m)<0. We decompose B(γ,z,m) into two parts B(γ,z,m) ='(z) B_1(γ,z,m) + B_2(γ,z,m), where B_1(γ,z,m)=6(m-1-mγ)(z)+7m-9-4mγ+2m(γ-2)γ z, B_2(γ,z,m)=4m(γ-2)γ(z)+2m(γ-3)γ . Clearly, B_2(γ,z,m)<0. Regarding B_1(γ,z,m), by using (<ref>) and (<ref>), we obtain B_1(γ,z,m)< 6(m-1-mγ)(ẑ_m(γ))+7m-9-4mγ+2mγ z_M= 4m-6-3mγ/4<0 for any m=1,2 and γ∈(,3]. For A(γ,z,m), we notice that d A(γ,z,m)/dz = '(z) B_1(γ,z,m)+1/2 B_2(γ,z,m) <0. Thus, to show A(γ,z,m)<0, it is enough to verifyA(γ,ẑ_m(γ),m)<0. Using (<ref>) and (<ref>), A(γ,ẑ_m(γ),m) = mγ/4-m+3/4-mγẑ_m(γ) < mγ/5-m+3/4 < 0 form=1,2 and γ∈(,3].Hence, (1+(z))_3/2((z),z,m) is a concave function in z. It is enough to check the signat two ends points of z. By the definition (<ref>) of z_2, (1+(z_2))_3/2((z_2),γ,z_2,m)=0. For (1+(ẑ_m(γ)))_3/2((ẑ_m(γ)),ẑ_m(γ),m), we evaluate (1+(ẑ_m(γ)))_3/2((ẑ_m(γ))),ẑ_m(γ)),m) =γ-2/8+(12-γ^2)ẑ_m(γ)/4 when m=1,γ-3/4+(12-γ^2)ẑ_m(γ)/2 when m=2. Obviously, (1+(ẑ_m(γ)))_3/2((ẑ_m(γ))),ẑ_m(γ)),1)>0. For (1+(ẑ_m(γ)))_3/2((ẑ_m(γ))),ẑ_m(γ)),2), by using (<ref>), wehave (1+(ẑ_m(γ)))_3/2((ẑ_m(γ))),ẑ_m(γ)),2) > γ-3/4+12-γ^2/40 = -γ^2+10γ-18/40>0 for any γ∈(,3]. This completes the proof.§ SOLVING TO RIGHT In the previous sections, for each ∈(1,3], we established the existence of z_std(γ;P_*) and a range of z which z_std(γ;P_*) must belong to. This z_std allows the solution C(V;γ,z_std(γ;P_*),P_*) of (<ref>) to pass smoothly from P_1 through thetriple point P_*, where P_* is either P_6 or P_8.The remaining goalis to extend this smooth solution from the triple point P_* to the origin P_0 while ensuring that it remains within the second quadrant in the phase plane (that is, that we retain both V<0 and C>0 up to time the flow meets the origin). To prove this property for the solution associated to z_std, we will in fact prove the stronger property that the local solution to the right of the sonic P_* always extends to P_0 within the second quadrant for all z within the range containing z_std(γ;P_*). For notational convenience, we define unified notation for the various possible ranges of z containing z_std from the results of Sections <ref>–<ref>.𝒵(γ;P_*) =(z_g(γ),z_M(γ)] for γ∈(1,2]at P_6,(z_1(γ),z_M(γ)] for γ∈(,]at P_8,(z_2(γ),z_M(γ)] for γ∈(,3]at P_8.In this section, we will show that for any γ∈(1,3] and z∈𝒵(γ;P_*), the local analytic solutions around P_* constructed in Theorem <ref> continue to the origin P_0 in the second quadrant. From the phase portrait analysis, three possibilities arise for the extension of the local analytic solution. * The trajectory intersects the negative V-axis before reaching V=0. * The trajectory intersects the positive C-axis when V=0. * The trajectory converges to P_0 within the second quadrant. To rule out the first two possibilities, we will use suitable barrier functionsto establish an invariant region for the solutions ensuring convergence to P_0 within the second quadrant. 
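Before turning to the barrier constructions, we record an informal numerical companion to the estimates of this and the preceding sections; it is only an illustrative sanity check under stated assumptions and plays no role in the proofs. The short Python sketch below samples, for m=1, the sign of the expression (m-1-mγ)V^2+(-3+2m-mγ+m(γ-2)γ z)V-mγ z-1+2mz/(1+V) (denoted H1 in the code, a label introduced only here) over γ∈(1,2], z between z_g and z_M, and V∈(-C_6^2,0), the region relevant for the connection of P_6 to the origin below. The closed-form expressions used for z_M and z_g are those recalled above, while the formula taken for C_6, namely C_6=(1+(γ-2)z-w)/2 with w=√(1-2(γ+2)z+(γ-2)^2z^2), is an assumption consistent with the relations 2C_6=1+(γ-2)z-w and w=2C_8-1-(γ-2)z appearing in the text.

import numpy as np

def z_M(g):
    return 1.0 / (g + 2.0 + 2.0 * np.sqrt(2.0 * g))

def z_g(g):  # m = 1 formula recalled in the existence section
    return (np.sqrt(g**2 + (g - 1.0)**2) - g) / (g * (g - 1.0))

def w(g, z):
    d = 1.0 - 2.0 * (g + 2.0) * z + (g - 2.0)**2 * z**2
    return np.sqrt(max(d, 0.0))  # d vanishes at z = z_M; clip rounding error

def C6(g, z):
    return 0.5 * (1.0 + (g - 2.0) * z - w(g, z))

def H1(V, g, z, m):  # the quantity whose negativity is proved below
    return ((m - 1 - m * g) * V**2
            + (-3 + 2 * m - m * g + m * (g - 2.0) * g * z) * V
            - m * g * z - 1.0 + 2.0 * m * z / (1.0 + V))

worst = -np.inf
for g in np.linspace(1.05, 2.0, 40):           # gamma in (1, 2]
    for z in np.linspace(z_g(g), z_M(g), 40):  # z between z_g and z_M
        c6 = C6(g, z)
        V = np.linspace(-c6**2 + 1e-9, -1e-9, 200)  # V in (-C_6^2, 0)
        worst = max(worst, float(H1(V, g, z, 1).max()))
print("max of H1 over the sampled region:", worst)  # expected to be negative

A negative maximum is expected, in agreement with the sign information established rigorously in the proofs below; the m=2 case can be probed in the same way using the corresponding formula for z_g.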
We begin withthe extension for the local analytic solution to the right of the triple point. Let γ∈(1,3] and z∈𝒵 be given andlet P_*=(V_*,C_*) be either P_6 or P_8. Consider the local, analytic solution C:[V_*,V_*+ϵ]→_+, guaranteed by Theorem <ref>. This solutionextends smoothly to the right within the second quadrant onto the domainC:[V_*,V_0)→_+, where V_0 = min{C^-1(0),0}. Furthermore, except at the triple point, the solution enjoysdC/dV<0, F<0, G>0, D<0. The result follows from our choice of c_1 (cf.(<ref>)), F_C(V_*,C_*)>0 (cf.(<ref>)), G_C(V_*,C_*)<0 (cf.(<ref>)). Next, we will eliminate the first possibility. For any γ∈(1,3] and z∈𝒵, the solution constructed in Lemma <ref> does not intersect the negative V-axis (i.e., in the notation of that Lemma, V_0=0). We argue by contradiction. Suppose the solution intersects the negative V-axis before reaching V=0, and let (V,0) denote thepoint of intersection of the solution trajectory with the V-axis.Consider the initial value problem:dC(V)dV = F(V,C;γ,z)G(V,C;γ,z), C(V)=0.In a small rectangular region around (V, 0), it is evident that F(V,C;γ,z)/G(V,C;γ,z) is continuously differentiable. Bythe standard theorem for existence and uniqueness of solutions to ODEs with locally Lipschitz right hand side (e.g. <cit.>), this initial value problem possesses a unique solution on the interval (V-ϵ, V+ϵ) for sufficiently small ϵ. However,we see trivially that C(V) ≡ 0 solves this problem, and so must be the unique solution, leading to a contradiction. We remark that Lemma <ref> yields V_0=0 in Lemma <ref>.The rest of this section is devoted to ruling out the second possibility by the barrier argument.§.§ Connecting P_6 to the origin In this subsection, we prove that the local analytic solutions around P_6 constructed in Theorem <ref> for any γ∈ (1,2] and z∈(z_g(γ),z_M(γ)] continue to the origin and stay below the barrier curve B_1(V)=√(-V). In particular, this implies that the solution trajectories to the right of the sonic point will stay between C=0 and B_1(V) and must therefore strictly decrease to the origin. For any γ∈ (1,2] and z∈(z_g(γ),z_M(γ)], the solution constructed in Lemma <ref> with P_*=P_6, always lies below the curve B_1(V)= √(-V) for V∈[V_6,0). According to (<ref>), P_6 consistently remains below the curve C=B_1(V) for any γ∈ (1,2]. In other words, C_6<B_1(V_6). Additionally, as per Lemma <ref>, dC/dV<0 holds for V∈ [V_6,0). Observing that -C_6^2>V_6, we therefore see that the inequality C(V)<B_1(V) holds trivially for V∈[V_6,-C_6^2]. Consequently, our analysis can be confined to V∈(-C_6^2,0). Thus, employing the barrier argument (<ref>), we verify the validity of the following inequality for any γ∈(1,2], z∈(z_g(γ),z_M(γ)], and V∈(-C_6^2,0): F(V,√(-V);γ,z)/G(V,√(-V);γ,z)+1/2√(-V) <0. Observe that the inequality is (<ref>) with k(γ,z) replaced by 1. So, by Lemma <ref> which assures that G(V,√(-V);γ,z)> 0 for any V∈(-C_6^2,0), our task reduces to proving _1(V,z,m)=2F(V,√(-V);γ,z)/√(-V)+G(V,√(-V);γ,z)/-V<0, for m=1,2, ∈(1,2], z∈(z_g(),z_M()], V∈(-C_6^2,0),where we recall from (<ref>) that_1(V,z,m) =(m-1-mγ)V^2+(-3+2m-mγ+m(γ-2)γ z)V-mγ z-1+2mz/1+V. We note first that ∂_1(V,z,m)/∂ V = (m-1-mγ)(1+2V)+m-2+m(γ-2)γ z-2mz/(1+V)^2<(m-1-mγ)(1-2C_6^2)<0 for V∈[-C_6^2,0), since C_6 ≤√(γ)/√(γ)+√(2)≤1/2 by using (<ref>). Thus, it is sufficient to check the negativity of _1(-C^2_6(γ,z),z, m). For any fixed γ∈(1,2], we write C_6=C_6(z) and C_6'(z) = dC_6/dz. 
By (<ref>), Lemma <ref> and the chain rule, we check the sign of the z-derivative of _1(-C^2_6(z),z,m): d _1(-C^2_6(z),z,m)/d z =∂_1 /∂ V(-C^2_6(z),z,m)(-2C_6(z)C_6'(z)) + ∂_1 /∂ z(-C^2_6(z),z,m)>0, where we also used ∂_1 /∂ z(-C^2_6(z),z,m) =-m(γ-2)γ C^2_6(z) + m(2/1-2C^2_6(z)-γ)>0 for any γ∈(1,2]. As a consequence, it is now enough to show that _1(-C^2_6(γ,z_M),z_M)<0 to accomplish (<ref>). By (<ref>) and (<ref>), we have C_6(γ,z_M) = √(γ z_M) and hence, we obtain _1(-C^2_6(γ,z_M),z_M,m)= (m-1-mγ)γ^2z^2_M-(-3+2m-mγ+m(γ-2)γ z_M)γ z_M-mγ z_M-1+2mz_M/1-γ z_M= (3m-1)γ^2z^2_M -2mγ^3z^2_M+3(1-m)γ z_M+mγ^2z_M-1+2mz_M/1-γ z_M. When m=1, we have _1(-C^2_6(γ,z_M),z_M,1) =2(1-γ) γ^2z^2_M+γ^2z_M-γ^3z^2_M-1+(γ+2) z_M/1-γ z_M. With γ>1, the first term in the equation above is negative. To find an explicit expression for the second term, first, by (<ref>), we see that 1-γ z_M = 1-γ/γ+2+2√(2γ)>0and γ^2z_M-γ^3z^2_M-1+(γ+2) z_M= γ^2(√(γ)+√(2))^2-γ^3-(√(γ)+√(2))^4+(γ+2)(√(γ)+√(2))^2/(√(γ)+√(2))^4=(γ^2-2√(2γ))(√(γ)+√(2))^2-γ^3/(√(γ)+√(2))^4<0, where we have used γ≤ 2. Therefore, from (<ref>), we deduce that _1(-C^2_6(γ,z_M),z_M,1)<0 and hence _1(V,z,1)<0 for all V∈(-C_6^2,0), z∈(z_g(),z_M()]. When m=2, we have _1(-C^2_6(γ,z_M),z_M,2) =4(1-γ)γ^2z^2_M+4γ^2 z^2_M-2γ z_M+2γ^2z_M-γ^3z^3_M-2γ^3z^2_M-1+4z_M/1-γ z_M. Since γ>1, the first term is again negative. For the second term, we again apply (<ref>) to rearrange the numerator as 4γ^2 z^2_M-2γ z_M+2γ^2z_M-γ^3z^3_M-2γ^3z^2_M-1+4z_M= (4γ z_M-1)γ z_M-γ^3z^3_M-2γ^3z_M^2+2(γ^2+1-√(2γ)-γ)z_M=3γ-2-2√(2γ)/γ+2+2√(2γ)γ z_M-γ^3z^3_M+2(γ^2+1-√(2γ)-γ-γ^3z_M)z_M. The first two terms are negative because γ∈(1,2]. As for the last term, when γ∈(1,√(2)], we have γ^2-√(2γ)<0. Thus, _1(-C^2_6(γ,z_M),z_M,2)<0 when γ∈(1,√(2)]. When γ∈(√(2),2], by using z_M≥1/8, we obtain γ^2+1-√(2γ)-γ-γ^3z_M< γ^2+1-√(2γ)-γ-γ^3/8. This upper bound is increasing as a function ofas d/dγ(γ^2+1-√(2γ)-γ-γ^3/8) = -3γ^2/8+2γ-1-1/√(2γ)≥ -3γ^2/8+2γ-1-1/√(2√(2))>0. Hence, for any γ∈(√(2),2], we have γ^2+1-√(2γ)-γ-γ^3/8≤ 0, which, combined with (<ref>), (<ref>) and (<ref>), leads to _1(-C^2_6(γ,z_M),z_M,2)<0 in the remaining range ∈[√(2),2] and hence we have established _1(V,z,2)<0 for any V∈ [-C^2_6,0), z∈(z_g(),z_M()]. This concludes the proof. §.§ Connecting P_8 to the origin In this subsection we prove analogous results around P_8 for γ∈ (, 3] to those of the previous subsection, that is, we show that the local analytic solutions around P_8 for z∈𝒵(γ;P_8) converge to the origin P_0 within the second quadrant by employing a barrier argument. We split into two cases: γ∈(,] and γ∈ (, 3]. §.§.§ P_8 for γ∈(,] For any γ∈ (,] and z∈(z_1(γ),z_M(γ)], the solution constructed in Lemma <ref> with P_*=P_8, always lies below the curve B_1(V)= √(-V) for V∈(V_8,0). By the definition of z_1(γ) (cf.(<ref>)) and Lemma <ref>, C_8≤ B_1(V_8) for any γ∈ (,] and z∈(z_1(γ),z_M(γ)]. Since dC/dV<0 by Lemma <ref>, the solution stays below the curve B_1(V) for V∈(V_8,-C_8^2]. Thus, it suffices to show the claim for V∈(-C_8^2,0). By employing the barrier argument(<ref>), we will establish the following inequality for any γ∈(,], z∈(z_1(γ),z_M(γ)], and V∈(-C_8^2,0): F(V,√(-V);γ,z)/G(V,√(-V);γ,z)+1/2√(-V) <0. By Lemma <ref>, G(V,√(-V);γ,z)>0 for all V∈(-C_8^2,0) and hence, as in Lemma <ref> (cf. (<ref>)), we again see that it is sufficient to prove that _1(V,z,m)=(m-1-mγ)V^2+(-3+2m-mγ+m(γ-2)γ z)V-mγ z-1+2mz/1+V<0. 
We first derive an upper bound of ∂_1/∂ V: ∂_1(V,z,m)/∂ V = 2(m-1-mγ)V-3+2m-mγ+m(γ-2)γ z-2mz/(1+V)^2<(m-1-mγ)(2V+1)+m-2+[(γ-2)γ-2] mz<(m-1-mγ)(2V+1) because (γ-2)γ-2<0 for γ∈(,]. By (<ref>), we observe that for any γ∈(,], 9/20<√()/√()+√(2)<√(γ)/√(γ)+√(2)≤ C_8 ≤ C_8(z_1) = √(5)-1/2. Thus, 2V+1>-2C_8^2+1>0, whichimplies ∂_1(V,z,m)/∂ V<0. It is therefore sufficient to show that _1(-C^2_8(γ,z),z,m)<0. We use the same notation as in Lemma <ref>, denoting (z) = -C^2_8(γ,z). For any fixed γ∈(,], we write '(z) = d(z)/dz and ”(z) = d^2(z)/dz^2. We then derive the z derivative of _1((z),z,m) as ∂_1((z),z,m)/∂ z =∂_1/∂ V ((z),z,m) '(z) + ∂_1/∂ z((z),z,m) =: A(γ,z,m) '(z)+mB(γ,z), where A(γ,z,m) : = 2(m-1-mγ)(z)-3+2m-mγ+m(γ-2)γ z,B(γ, z):=2/1+(z)-γ+(γ-2)γ(z)-2z'(z)/(1+(z))^2. We claim that both A(γ,z,m) and B(γ,z) are negative. ForA(γ,z,m), in the case m=1, we have A(γ,z,1) = -2γ(z)-1-γ+(γ-2)γ z < -γ(1+2(z))-1+|γ-2|γ z_M < 0, where we have used |γ-2|γ z_M < 1 in the last inequality. When m=2, by (<ref>) and z<z_M(γ)< z_M()<1/5 for any γ∈(,] and z∈(z_1(γ),z_M(γ)], we see A(γ,z,2) = (1-2γ)(1+2(z))+2(γ-2)γ z < 1-2γ+2|γ-2|γ/5<0 for any γ∈(,γ_1]. Hence, A(γ,z,m)<0. For B(γ,z), we first observe that when γ∈(,], by (<ref>), -γ+(γ-2)γ(z) = -γ[1-(γ-2)(z)] < 0 . For the remaining two terms,by Lemma <ref> and Lemma <ref>, we have'(z)>0 and ”(z)>0. Therefore, for any z∈(z_1(γ),z_M(γ)], d/dz[1+(z)-z'(z)] =-z ”(z) <0 . Hence, using also (<ref>), B(γ, z)<2/1+(z) -2z'(z)/(1+(z))^2 = 2/(1+(z))^2 (1+(z)-z'(z)) < 2/(1+(z))^2 (1+(z_1)-z_1'(z_1)) . Recalling (<ref>) and (<ref>), we have C_8(z_1) = √(5)-1/2 andw(z_1) = √(5)-2-(γ-2)z_1>0, where the positivity of w(z_1) is due to Remark <ref>. By direct computation, using (<ref>), (<ref>), and(<ref>), we obtain 1+(z_1)-z_1'(z_1)= 1- C_8^2(z_1)+2z_1C_8(z_1)C_8'(z_1)=C_8(z_1)/w(z_1) [√(5)-2+[(√(5)-4)γ-2(√(5)-2)] z_1]=C_8(z_1)/w(z_1)(5-3√(5))γ+4√(5)-8/2(√(5)+1+γ)<0 for any γ∈(,], and therefore we obtain B(,z)<0. Therefore, we have obtained ∂/∂ z_1((z),z,m)<0. Hence, we conclude that for any fixed γ∈ (,], m=1,2, z∈(z_1(γ),z_M(γ)], and V∈ (-C^2_8,0), applying also(<ref>), we have _1(V,z,m)<_1((z),z,m)<_1((z_1),z_1,m)=0 by the definition of z_1 (cf.(<ref>)). §.§.§ P_8 for γ∈(,3] For any γ∈ (,3] and z∈(z_2(γ),z_M(γ)], the solution constructed in Lemma <ref> with P_*=P_8, always lies below the curve B_3/2(V)= √(-3/2V) for V∈(V_8,0). Using a similar argument as in Lemma <ref>, it suffices to verify the following inequality: F(V,√(-3/2V);γ,z)/G(V,√(-3/2V);γ,z)+1/2√(-3/2V)<0 for any m=1,2, γ∈ (,3], z∈(z_2(γ),z_M(γ)] and V∈(-2/3C_8^2,0). Given that G(V,√(-3/2V),γ,z)>0 for any V∈(-2/3C_8^2,0) as shown in Lemma <ref>, and by the same calculations as in Lemma <ref> (cf. (<ref>)), it is sufficient to demonstrate that _3/2(V,z,m)=(m-1-mγ)V^2+(-4+2m+m+1/2-mγ+m(γ-2)γ z)V-m zγ-1+3mz/1+V<0 for any m=1,2, γ∈ (,3], z∈(z_2(γ),z_M(γ)] and V∈ (-2/3C^2_8,0). The V derivative of _3/2 is given by ∂_3/2(V,z,m)/∂ V =2(m-1-mγ)V+(-4+2m+m+1/2-mγ+m(γ-2)γ z)-3mz/(1+V)^2. When m=1, the same argument as in (<ref>) implies ∂_3/2(V,z,1)/∂ V<0. When m=2, we have ∂_3/2(V,z,2)/∂ V =2(1-2γ)V+3/2-2γ+2[(γ-2)γ -3/(1+V)^2]z. Note that (γ-2)γ -3/(1+V)^2<0 for γ∈ (,3] and V∈ (-2/3C_8^2,0). Moreover, 2(1-2γ)V+3/2-2γ<0, since for any fixed γ∈ (,3], 2(1-2γ)V+3/2-2γ < -2(1-2γ)2/3C_8^2+3/2-2γ =√(33)-4/2+(5-√(33))γ <0 . Hence, ∂_3/2(V,z,m)/∂ V<0. It is therefore sufficient to show that _3/2(-2/3C^2_8(z),z)<0. Let (z) = -2/3C^2_8(z) and, for any fixed γ, we write '(z) = d (z)/dz and ”(z) = d^2 (z)/dz^2. 
By Lemma <ref> and Lemma <ref>, '(z)>0 and ”(z)>0. The z derivative of _3/2((z),z,m) can therefore be written as d _3/2((z),z,m)/d z = ∂_3/2/∂ V ((z),z,m)'(z)+ ∂_3/2/∂ z((z),z,m) =:A(γ,z,m)'(z)+mB(γ,z), where A(γ,z,m):=2(m-1-mγ)(z)-4+2m+m+1/2-mγ+mγ(γ-2)z,B(γ,z):=(γ-2)γ(z)-γ+3(1+(z))-3z'(z) /(1+(z))^2 . We claim that both A(γ,z,m) and B(γ,z) are negative. We first check A(γ,z,m). When m=1, A(γ,z,1)=-2γ(z)-1-γ+(γ-2)γ z = -γ(1+2(z)) - 1+(γ-2)γ z <0, where we have used (γ-2)γ z≤ (γ-2)γ z_M≤γ/γ+2+2√(2γ)≤3/5+2√(6) and 1+2(z)> 1+2(z_2)= √(33)-5/2 in the last inequality. When m=2, using 1+2(z)> 1+2(z_2)= √(33)-5/2>3/5+2√(6)≥γ z_M≥γ z, A(γ,z,2) =2(1-2γ)(z)+3/2-2γ+2(γ-2)γ z = 2(2-γ)[1+2(z)-γ z]-3(1+2(z))+1/2≤ -3(1+2(z))+1/2≤ -3 (√(33)-5/2) + 1/2= 16-3√(33)/2<0. For B(γ,z), the first term is trivially negative for any γ∈(, 3] since (z)<0. We will show that the remaining term is negative as well. By using (<ref>), we obtain that for any z∈(z_2(γ),z_M(γ)], 0<C_8(z_2)=1-2/3C_8^2(z_2) < 1+(z)≤ 1-2/3C_8^2(z_M) = 1-2γ z_M(γ)/3.Since d/dz(-z'(z)+1+(z)) = -z”(z)<0, 3(-z'(z)+1+(z))/(1+(z))^2 < 3|-z_2'(z_2)+1+(z_2)|/(1+(z_2))^2. Recalling (<ref>), Remark <ref> and (<ref>), we have C_8(z_2) =√(33)-3/4 andw(z_2) =√(33)-5/2-(γ-2)z_2>0. Thus, by adirect computation, we obtain 3|-z_2'(z_2)+1+(z_2)|/(1+(z_2))^2 = 3/C_8^2(z_2)|1-2/3C^2_8(z_2)+4/3z_2C_8(z_2)C_8'(z_2)|=1/C_8(z_2)|(33-7√(33))γ+12√(33)-48|/(√(33)-7)γ+12=4/√(33)-3|√(33)-48/(√(33)-7)γ+12|≤4/√(33)-3(√(33)-48/(√(33)-7)+12)<γ for any γ∈(,3]. Therefore, combining (<ref>)–(<ref>), we have found d _3/2((z),z,m)/d z =A(γ,z,m)'(z)+mB(γ,z) < m((γ-2)γ(z)-γ+3(1+(z))-3z'(z) /(1+(z))^2) < m( - + 3|-z_2'(z_2)+1+(z_2)|/(1+(z_2))^2)<0. Hence, for any fixed γ∈ (,3], m=1,2, z∈(z_2(γ),z_M(γ)], and V∈ (-2/3C^2_8,0), we have _3/2(V,z,m)<_3/2((z),z,m)<_3/2(-2/3C_8^2(z_2),z_2,m)=0 by the definition of z_2 (cf.(<ref>)). § PROOF OF THE MAIN THEOREMWe now prove Theorem <ref>. (i) follows by (ii), (iii), and (iv).By Theorem <ref>, there exists a z_std(γ) such that the local real analytic solution C(V;γ,z,P_*) around P_* for either P_*=P_6 or P_*=P_8 given by Theorem <ref>extends on the left to P_1.(ii). Let γ∈(1,] be fixed.By Lemma <ref>, any such z_std() must connect P_1 to P_6 and, by Proposition <ref>, z_std(γ) ∈ (z_g(γ),z_M(γ)], that is, z_std(γ) ∈𝒵(γ;P_6). Then, Lemma <ref> gives that in factz_std(γ) isunique.By Lemma <ref>, this unique solutionextends to P_0in the second quadrant to give a unique connection from P_1 to P_0 which passes through P_6 and is monotone.(iii). Let γ∈(,2) be fixed. Given such a z_std(), if P_1 is connected analytically to P_6, by Proposition <ref>, we must have z_std(γ) ∈ (z_g(γ),z_M(γ)] and so, by Lemma <ref>, the solution extends to the right of P_6 to connect to P_0 within the second quadrant.On the other hand, if the solution connects P_1 to P_8 analytically, then by Lemma <ref>, z_std(γ)∈(z_1(γ),z_M(γ)], and, applying Lemma <ref>, the solution again extends inside the second quadrant to connect to P_0. Thus, in either case, we have z_std(γ) ∈𝒵(γ;P_*) and have obtained a monotone analytic solution connecting P_1 to P_0 through a single triple point.(iv). Let γ∈[2,3] be fixed. By Proposition <ref>, the solution for z=z_std() must connect P_1 to P_8 analytically and, by Lemmas <ref> and <ref>, z_std(γ) ∈𝒵(γ;P_8). Therefore, byLemma <ref> and Lemma <ref>, in each case, the solutionextends to the right in the second quadrant to connect P_8 to P_0 and we again have obtained an analytic, monotone solution connecting P_1 to P_0.Acknowledgements. 
JJ and JL are supported in part by the NSF grants DMS-2009458 and DMS-2306910. MS is supported by the EPSRC Post-doctoral Research Fellowship EP/W001888/1. § CALCULATION FOR TAYLOR EXPANSIONThe purpose of this Appendix is to establish the proof of Lemma <ref>. To this end, we begin from (<ref>) in the form(1+V)F(C,V)- (1+V)C'(V)G(C,V)=0.We write the left hand side of this equation as a power series in v=V-V_* as(1+V)F(C,V)- (1+V)C'(V)G(C,V)=∑_ℓ=0^∞𝒞_ℓ v^ℓ.Substituting in (<ref>), (<ref>), (<ref>), and (<ref>), the first term of this identity expands as (1+v+V_*)F(V,C) =(1+v+V_*)C(V){C^2(V)[1+mz/(1+v+V_*)]- a_1(1+v+V_*)^2+a_2(1+v+V_*)-a_3}=vC^3(V)+(1+V_*+mz)C^3(V)+[-a_1v^3+[-3a_1(1+V_*)+a_2]v^2+[-3a_1(1+V_*)^2+2a_2(1+V_*)-a_3]v+[-a_1(1+V_*)^3+a_2(1+V_*)^2-a_3(1+V_*)]]C(V) = ∑_ℓ =0^∞(c^3)_ℓv^ℓ+1+(1+V_*+mz)∑_ℓ =0^∞(c^3)_ℓv^ℓ-a_1∑_ℓ =0^∞c_ℓv^ℓ+3+[-3a_1(1+V_*)+a_2]∑_ℓ =0^∞c_ℓv^ℓ+2+[-3a_1(1+V_*)^2+2a_2(1+V_*)-a_3]∑_ℓ =0^∞c_ℓv^ℓ+1+[-a_1(1+V_*)^3+a_2(1+V_*)^2-a_3(1+V_*)]∑_ℓ =0^∞c_ℓv^ℓ.Expanding also the second term, we obtain(1+v+V_*)C'(V)G(V,C) =(1+v+V_*)C'(V){C^2(V)[(m+1)(v+V_*)+2mz]-(v+V_*)(1+v+V_*)(λ+v+V_*)}=(m+1)v^2C'(V)C^2(V)+[(m+1)(1+2V_*)+2mz]vC'(V)C^2(V)+(1+V_*)[(m+1)V_*+2mz]C'(V)C^2(V)-[v^4+(λ +2+4V_*)v^3 +[6V_*^2+(3λ + 6)V_*+2λ +1]v^2+[4V_*^3+(3λ+6)V_*^2+(4λ+2)V_*+λ]v+V_*(1+V_*)^2(λ+V_*)]C'(V)=m+1/3∑_ℓ =1^∞ℓ(c^3)_ℓv^ℓ+1+(m+1)(1+2V_*)+2mz/3∑_ℓ =1^∞ℓ(c^3)_ℓv^ℓ + (1+V_*)[(m+1)V_*+2mz]/3∑_ℓ =1^∞ℓ(c^3)_ℓv^ℓ-1-∑_ℓ =1^∞ℓ c_ℓv^ℓ+3-(λ +2+4V_*)∑_1^∞ℓ c_ℓv^ℓ+2-[6V_*^2+(3λ + 6)V_*+2λ +1]∑_ℓ =1^∞ℓ c_ℓv^ℓ+1-[4V_*^3+(3λ+6)V_*^2+(4λ+2)V_*+λ]∑_ℓ =1^∞ℓ c_ℓv^ℓ-V_*(1+V_*)^2(λ+V_*)∑_ℓ =1^∞ℓ c_ℓv^ℓ-1.We now proceed to study the difference of (<ref>) and (<ref>) and to group terms at each order in v to simplify the resulting identity for 𝒞_ℓ. First, at order zero, we have𝒞_0=(1+V_*+mz)c_0^3+[-a_1(1+V_*)^3+a_2(1+V_*)^2-a_3(1+V_*)]c_0-(1+V_*)[(m+1)V_*+2mz]/3(3c_0^2c_1)+V_*(1+V_*)^2(λ+V_*)c_1=[c_0^2(1+V_*+mz)-a_1(1+V_*)^3+a_2(1+V_*)^2-a_3(1+V_*)]c_0-[c_0^2[(m+1)V_*+2mz]-V_*(1+V_*)(λ+V_*)]](1+V_*)c_1=F(V_*,C_*)(1+V_*)c_0-G(V_*,C_*)(1+V_*)c_1=0.Next, the first order coefficient in v simplifies as 𝒞_1=c_0^3+(1+V_*+mz)3c_0^2c_1+[-3a_1(1+V_*)^2+2a_2(1+V_*)-a_3]c_0+[-a_1(1+V_*)^3+a_2(1+V_*)^2-a_3(1+V_*)]c_1-(m+1)(1+2V_*)+2mz/33c_0^2c_1-(1+V_*)[(m+1)V_*+2mz]/32(3c_0^2c_2+3c_0c_1^2)+[4V_*^3+(3λ+6)V_*^2+(4λ+2)V_*+λ]c_1+V_*(1+V_*)^2(λ+V_*)2c_2.In order to simplify this identity, we recall that as G(V_*,C_*)=0, we have c_0^2((m+1)+2mz)=V_*(1+V_*)(+V_*) and thus, recalling also (<ref>), we have the auxiliary identity-[(m+1)(1+2V_*)+2mz]c_0^2+[4V_*^3+(3λ+6)V_*^2+(4λ+2)V_*+λ]=-G_V(V_*,C_*)(1+V_*).Substituting this along with the other identities in (<ref>) into (<ref>) and grouping terms, we find 𝒞_1 simplifies to𝒞_1=2{V_*(1+V_*)^2(λ+V_*)-(1+V_*)[(m+1)V_*+2mz]c_0^2}c_2+{-2(1+V_*)[(m+1)V_*+2mz]c_0}c_1^2+{3(1+V_*+mz)c_0^2+[-a_1(1+V_*)^3+a_2(1+V_*)^2-a_3(1+V_*)]-[(m+1)(1+2V_*)+2mz]c_0^2+[4V_*^3+(3λ+6)V_*^2+(4λ+2)V_*+λ]}c_1+c_0^3+[-3a_1(1+V_*)^2+2a_2(1+V_*)-a_3]c_0 = -2G(V_*,C_*)(1+V_*)c_2-G_C(V_*,C_*)(1+V_*)c_1^2+[F_C(V_*,C_*)-G_V(V_*,C_*)](1+V_*)c_1+F_V(V_*,C_*)c_0 = -G_C(V_*,C_*)(1+V_*)c_1^2+[F_C(V_*,C_*)-G_V(V_*,C_*)](1+V_*)c_1+F_V(V_*,C_*)c_0.Finally, we group coefficients at order ℓ≥ 2, recalling the convention that c_k=0 if k<0, to obtain the coefficient𝒞_ℓ=(c^3)_ℓ-1+(1+V_*+mz)(c^3)_ℓ-a_1c_ℓ-3+(-3a_1(1+V_*)+a_2)c_ℓ-2+(-3a_1(1+V_*)^2+2a_2(1+V_*)-a_3)c_ℓ-1+(-a_1(1+V_*)^3+a_2(1+V_*)^2-a_3(1+V_*))c_ℓ-m+1/3(ℓ-1)(c^3)_ℓ-1-(m+1)(1+2V_*)+2mz/3ℓ(c^3)_ℓ 
-(1+V_*)((m+1)V_*+2mz)/3(ℓ+1)(c^3)_ℓ+1+(ℓ-3)c_ℓ-3+(+2+4V_*)(ℓ-2)c_ℓ-2+(6V_*^2+(3+6)V_*+2+1)(ℓ-1)c_ℓ-1+(4V_*^3+(3+6)V_*^2+(4+2)V_*+)ℓ c_ℓ +V_*(1+V_*)^2(+V_*)(ℓ+1)c_ℓ+1.Recalling that(c^3)_ℓ=∑_i+j+k=ℓc_ic_jc_k=3c_0^2c_ℓ +∑_i+j+k=ℓi,j,k≤ℓ-1c_ic_jc_k(c^3)_ℓ+1=∑_i+j+k=ℓ+1c_ic_jc_k=3c_0^2c_ℓ+1+6c_0c_1c_ℓ +∑_i+j+k=ℓ+1i,j,k≤ℓ-1c_ic_jc_k,we isolate the highest order terms in c_ℓ and c_ℓ+1 from (<ref>) asc_ℓ+1(ℓ+1)(3c_0^2(-(1+V_*)((m+1)V_*+2mz)/3+V_*(1+V_*)^2(+V_*))=-c_ℓ+1(ℓ+1)(V_*+1)G(V_*,C_*)=0,c_ℓ(3(1+V_*+mz)c_0^2-a_1(1+V_*)^3+a_2(1+V_*)^2-a_3(1+V_*)-((m+1)(1+2V_*)+2mz)ℓ c_0^2-2(1+V_*)((m+1)V_*+2mz)(ℓ+1)c_0c_1+(4V_*^3+(3+6)V_*^2+(4+2)V_*+)ℓ)=c_ℓ(1+V_*)(F_C-ℓ G_V-(ℓ+1)c_1 G_C)=A_ℓ c_ℓ,where we have again applied (<ref>) and (<ref>) and where A_ℓ is as defined in Lemma <ref>. Thus, substituting these into (<ref>) and grouping terms by order of c_k, we have obtained that𝒞_ℓ=A_ℓ c_ℓ - (1+V_*)[(m+1)V_*+2mz]/3(ℓ +1)∑_i+j+k = ℓ+1i,j,k≤ℓ-1c_ic_jc_k+[(1+V_*+mz)-(m+1)(1+2V_*)+2mz/3ℓ]∑_i+j+k = ℓi,j,k≤ℓ-1c_ic_jc_k+[1-m+1/3(ℓ -1)]∑_i+j+k = ℓ-1i,j,k≥ 0c_ic_jc_k+[[6V_*^2+(3λ + 6)V_*+2λ +1](ℓ -1)-3a_1(1+V_*)^2+2a_2(1+V_*)-a_3]c_ℓ-1+[(λ +2+4V_*)(ℓ -2)-3a_1(1+V_*)+a_2]c_ℓ-2+[ℓ-3-a_1]c_ℓ-3,which, recalling the definition of B_ℓ from Lemma <ref>, concludes the proof. § Z_M<Z_G FOR Γ∈(1,2]As the expression for z_g depends on m, we will prove the inequality first in the case m=1 and then for m=2. Recall first that z_m is defined by (<ref>),z_m = (γ-1)(2γ-1)(γ+1) = (γ-1)2γ^2+γ-1.On the other hand, for z_g, when m=1, we havez_g = γ-1/γ(√(γ^2+(γ-1)^2)+γ).Therefore, it is sufficient to check γ^2+γ-1 > γ√(γ^2+(γ-1)^2) for γ∈(1,2]. Since(γ^2+γ-1)^2-γ^2(γ^2+(γ-1)^2) = -(γ-1)(γ^2(γ-3)+1-γ)>0for any γ∈(1,2], we concludez_m < z_gfor any γ∈(1,2] as desired.When m=2, we havez_g=2(γ-1)/√((2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3])+(2γ^2-γ+1).Therefore, in order to show z_g>z_m, it is enough to check 2γ^2+3γ-3>√((2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3]). Since(2γ^2+3γ-3)^2 - (2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3] = 8/3(γ-1)(3-γ)(3γ^2-1)>0for any γ∈(1,2], we again conclude the claimed inequality. § PROOF OF (<REF>) Let ∈(1,2] and z∈[z_g,z_M], where we recall that z_g is defined as in (<ref>). In this section, for notational convenience, we will use G_C, G_V, F_C, F_V, R to represent their evaluations at P_6=(V_6,C_6) where G_C, G_V, F_C, F_V, R are given in (<ref>)–(<ref>) and (<ref>) respectively. Recall from (<ref>) that the slope of the curve C=C(V) at P_6=(V_6,C_6) is given by d C/d V|_V=V_6,C=C_6 =c_1 = F_C-G_V+ R/2G_C.Thus(<ref>) holds if and only if F_C-G_V+ R/2G_C - 1+V_6/2V_6=V_6F_C-V_6G_V+ V_6R-(1+V_6)G_C/2V_6G_C<0.Note that G_C<0 by (<ref>) and hence V_6G_C>0.Therefore, showing (<ref>) is equivalent to proving that the numerator is negative for each γ∈ (1,2] and z∈[z_g,z_M].Using (<ref>),it is then sufficient to show V_6^2[(F_C-G_V)^2+4F_VG_C]-[V_6F_C-V_6G_V-(1+V_6)G_C]^2>0,which is equivalent to 𝒬:= 4V_6^2F_VG_C-(1+V_6)^2G_C^2+2(1+V_6)V_6(F_C-G_V)G_C>0 .The rest of this section is devoted to the proof of (<ref>). Wefirst rewrite 𝒬 by using various identities satisfied by V_6, C_6, F_C, F_V, G_C, G_V: 𝒬=G_C[2V_6^2(m(γ-1)wC_6-2F_C)-C_6^2G_C+2C_6V_6(F_C+mwC_6+G_C)] =G_CV_6C_6[-2V_6^2+2m[γ w+(γ-2)z]V_6+2m([2-γ]z+w)+2]where we have used2F_V= m(γ-1) wC_6- 2F_C and-G_V= mwC_6 + G_C from (<ref>) and (<ref>) as well as C_6= 1+V_6 in the first line and used (<ref>) and (<ref>) in the second line. 
Recalling the formula for V_6 in (<ref>) and using V_6^2= ((γ-2)z-1) V_6- 2z, we next rearrange the bracket as a linear function in w whose coefficients are polynomials in γ,z so that 𝒬 =G_CV_6C_6[A(γ,z,m)w+B(γ,z,m)]whereA(γ,z,m)= 2m-1-mγ+(γ-2+2m-3mγ+mγ^2)z, B(γ,z,m)=1-mγ+(6m+(2+m)γ+2mγ^2)z+(4m-4+4(1-2m)γ+(5m-1)γ^2-mγ^3)z^2.Since G_CV_6C_6 >0, our aim is to show A(γ,z,m)w+B(γ,z,m) >0 for each γ∈ (1,2],z∈[z_g,z_M] and m=1, 2. We will treat m=1 and m=2 separately.Case 1: m=1. When m=1, we have A(γ,z,1) =1-γ+γ(γ-2)z<0, B(γ,z,1) =1-γ+(2γ^2+3γ+6)z-γ(γ-2)^2z^2for all γ∈(1,2]. On the other hand, for any fixed γ∈(1,2], d/d z(A(γ,z,1)w+B(γ,z,1)) = 2γ^2+3γ+6-2γ(γ-2)^2z+A(γ,z,1)d w/dz+γ(γ-2) w=γ(2γ+(γ-2)w)+2γ(1-(γ-2)^2z)+6+γ-A(γ,z,1)(γ+2)-(γ-2)^2z/w>0,where the last inequality follows from -1<γ-2≤ 0, 0≤ w<1 and z<1 for any γ∈(1,2] and z∈[z_m,z_M]. Here we extend the domain of z into [z_m, z_M] to facilitate computations. Since A(γ,z,1)w+B(γ,z,1) is increasing in z,it is then enough to check thatA(γ,z_m,1)w(z_m)+B(γ,z_m,1) >0.From (<ref>), we obtain the following relation between w(z_m) and z_m: V_6(γ,z_m) = -2/γ+1 ⟺ -1+(γ-2)z_m-w(z_m)/2 = -2/γ+1 ⟺ w(z_m) = 3-γ/γ+1+(γ-2)z_m.Hence, using (<ref>) again we get for any γ∈(1,2], A(γ,z_m,1)w(z_m)+B(γ,z_m,1) = (1-γ+γ(γ-2)z_m)(3-γ/γ+1+(γ-2)z_m)+1-γ+(2γ^2+3γ+6)z_m-γ(γ-2)^2z_m^2=4(γ-1)(γ^2+2)/(γ+1)^2(2γ-1)>0.Therefore, we deduce thatA(γ,z,1)w+B(γ,z,1)>0 for any γ∈(1,2] and z∈[z_m,z_M], which in turn implies(<ref>).Case 2: m=2. When m=2, the sign of A(γ,z,2) = 3-2γ+(2-5γ+2γ^2)z changes for γ∈(1,2] and z∈[z_g,z_M] and the argument for m=1 is not applicable. We will employ another approach.We first decompose 𝒬 in (<ref>) into two parts𝒬 =: I + 2 IIwhere I:=2V_6^2F_VG_C-(1+V_6)^2G_C^2, II:= V_6^2F_VG_C+(1+V_6)V_6(F_C-G_V)G_C.We claim that I and II are both positive. For I, we first rewrite it by again using the identities (<ref>), (<ref>), (<ref>), and (<ref>), as well as C_6= 1+V_6 to get I= G_C[2V_6^2F_V-(1+V_6)^2G_C]=G_C[V_6^2(2(γ-1)wC_6-4C_6(C_6+2z))-2V_6C_6^2(C_6+2γ z)]=G_C[2(γ-1)wV_6^2C_6-4V_6^2C_6^2-8zV_6^2C_6-2V_6C_6^3-4γ zV_6C_6^2]=G_CV_6C_6[2(-V_6-C_6)C_6 +4z(-2V_6-γ C_6) + (2(γ-1)w-2C_6)V_6 ].The goal is to show the bracket is positive. For any γ∈(1,2] and z∈[z_g,z_M], by Lemma <ref>, we have V_6 ≤ V_6(z_M(2)) = -1/2. Thus, -V_6-C_6≥ 0 since 1+V_6 = C_6. Next, weshow that the remainder of the bracket is also positive. By using C_6 = 1+V_6, we have 4z(-2V_6-γ C_6) + (2(γ-1)w-2C_6)V_6 = -2V_6^2-(2+(8+4γ)z-2(γ-1)w)V_6-4γ z =:p_1(V_6).We observe that p_1(V_6) is a quadratic polynomial of V_6. Since the coefficient of V_6^2 is negative and -1<V_6≤ -1/2 for γ∈(1,2] and z∈[z_g,z_M], to show p_1(V_6)>0, it is sufficient to check that p_1(-1)>0 and p_1(-1/2)>0. Note that p_1(-1)= -2+2+(8+4γ)z-2(γ-1)w-4γ z = 8z-2(γ-1)w > 8z_m-2(γ-1)w(z_m)for any γ∈(1,2] and z∈[z_g,z_M] because dw(z)/dz<0. By (<ref>) and (<ref>), we deduce that p_1(-1)>8z_m-2(γ-1)w(z_m)= 8(γ-1)/(2γ-1)(γ+1) - 2(γ-1)[3-γ/γ+1+(γ-2)(γ-1)/(2γ-1)(γ+1)]= 2(γ-1)(γ^2-4γ+5)/(2γ-1)(γ+1)>0.For p_1(-1/2), observe that p_1(-1/2) = -1/2+1+(4+2γ)z-(γ-1)w-4γ z = 1/2+(4-2γ)z-(γ-1)w > 1/2+(4-2γ)z_m-(γ-1)w(z_m)for any γ∈(1,2] and z∈[z_g,z_M] because dw(z)/dz<0. By (<ref>) and (<ref>), we have p_1(-1/2)>1/2+(4-2γ)z_m-(γ-1)w(z_m)= 1/2 + (4-2γ)(γ-1)/(2γ-1)(γ+1) -(γ-1)[3-γ/γ+1+(γ-2)(γ-1)/(2γ-1)(γ+1)]= 2γ^3-12γ^2+23γ-11/2(2γ-1)(γ+1)>0where the positive sign is shown in Proposition <ref>. Therefore, we conclude that I>0 since G_C V_6 C_6>0. 
For II, by using (<ref>), (<ref>), (<ref>) and (<ref>), we have the following: II= G_C V_6[V_6(F_V+F_C)+F_C+C_6(2wC_6+G_C) ]= G_C V_6[2(γ-1)w/2V_6C_6+2C_6(C_6+2z)+C_6(2wC_6+2V_6(C_6+2γ z)) ]= G_C V_6 C_6[2w(γ-1/2V_6+C_6)+4z(1+γ V_6)+2(1+V_6)^2]=G_C V_6 C_6[Ã(γ,z)w+B̃(γ,z)],whereÃ(γ,z)= 1-γ/2+γ^2-7γ+2/2z<0, B̃(γ,z)=-γ-1/2+(γ^2+γ+2)z+(2-γ)(γ^2-7γ+2)/2z^2for γ∈(1,2]. We will first show B̃(γ,z)>0 for any γ∈(1,2] and z∈[z_m,z_M]. When γ=2, B̃(2,z)= - 1/2 + 8 z ≥- 1/2 + 8 z_m =- 1/2 + 8/15 >0.For any fixed γ∈(1,2), B̃(γ,z) is a quadratic polynomial of z. Since (2-γ)(γ^2-7γ+2)/2<0, B̃(γ,z) has global maximum at z = (γ^2+γ+2)/(2-γ)(γ^2-7γ+2)>1/2 since 2(γ^2+γ+2)>γ^2-7γ+2 and z≤ z_M =1/γ+2+2√(2γ)<1/2 for any γ∈(1,2). Hence, to show B̃(γ,z)>0, it is sufficient to check the sign of B̃(γ,z_m) for any γ∈(1,2). Direct computations show that B̃(γ,z_m)= -γ-1/2+(γ^2+γ+2)(γ-1)/(2γ-1)(γ+1)+(2-γ)(γ^2-7γ+2)(γ-1)^2/2(2γ-1)^2(γ+1)^2=(γ-1)(-γ^4+12γ^3-14γ^2+24γ-9)/2(2γ-1)(γ+1)>0,where the positive sign is verified in Proposition <ref>.Now since B̃(γ,z)>0, to show Ã(γ,z)w+B̃(γ,z)>0, it is sufficient to check the sign of B̃^2(γ,z)-Ã^2(γ,z)w^2>0. By direct computations, B̃^2(γ,z)-Ã^2(γ,z)w^2=2z[-2γ^2+γ+1+(4γ^3+4γ^2-6γ+4)z+(-2γ^4+13γ^3+5γ^2-16γ+4)z^2]=2z[-2γ^2+γ+1+2(γ+2)(2γ^2-2γ+1)z-(γ^2 - 7 γ + 2) (2 γ^2 + γ - 2)z^2]=:2zp_2(z).Clearly p_2(z) is a quadratic polynomial in z.We notice that -(γ^2 - 7 γ + 2) (2 γ^2 + γ - 2)>0, and B̃(γ,z) has global minimum at z = (γ+2)(2γ^2-2γ+1)/(γ^2 - 7 γ + 2) (2 γ^2 + γ - 2)<0 for any γ∈(1,2]. Hence, in order to prove p_2(z)>0 on [z_g,z_M], it is enough to check the sign of p_2(z_g) for each γ∈(1,2]. From (<ref>), we write z_g as z_g=2(γ-1)/√((2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3])+(2γ^2-γ+1)=:2(γ-1)/q(γ)+(2γ^2-γ+1).We then compute p_2(z_g)[q(γ)+(2γ^2-γ+1)]^2 to obtain p_2(z_g)[q(γ)+(2γ^2-γ+1)]^2 =(-2γ^2+γ+1)[q(γ)+(2γ^2-γ+1)]^2+4(γ+2)(2γ^2-2γ+1)(γ-1)[q(γ)+(2γ^2-γ+1)]-4(γ-1)^2(γ^2 - 7 γ + 2) (2 γ^2 + γ - 2) = (-2γ^2+γ+1)[2(2γ^2-γ+1)^2 +2γ(γ-1)[4γ(γ-1)+8/3]]+4(γ+2)(2γ^2-2γ+1)(γ-1)(2γ^2-γ+1)-4(γ-1)^2(γ^2 - 7 γ + 2) (2 γ^2 + γ - 2)+2q(γ)[(-2γ^2+γ+1)(2γ^2-γ+1)+2(γ+2)(2γ^2-2γ+1)(γ-1)] = 2(γ-1)^2(-36γ^4+114γ^3-4γ^2-83γ+15)/3+2(γ-1)^2(4γ-3)q(γ)>0,where the positive sign of the quartic polynomialis shown in Proposition <ref>. Therefore, we deduce that Ã(γ,z)w+B̃(γ,z)>0 and hence II>0. This completes the proof of (<ref>) for m=2.§ PROOF OF (<REF>) AND (<REF>) In this section, we consider specific values of the parameters: z=z_1(γ) or z_2(γ), and κ=1 or 3/2. For notational convenience, we use G_C, G_V, F_C, F_V, R to represent their evaluations at P_8=(V_8,C_8) where G_C, G_V, F_C, F_V, R are given in (<ref>)–(<ref>) and (<ref>) respectively. Recalling (<ref>), the two inequalities (<ref>) and (<ref>) can be written asF_C-G_V+R/2G_C < -1/2√(κ/-V_8) = -κ/2C_8,where κ=1 corresponds to (<ref>) and κ=3/2 is equivalent to (<ref>). From (<ref>), we see that G_C<0. Moreover, R>0, and hence it is equivalent to prove thatR^2 > (-G_C√(κ/-V_8)-F_C+G_V)^2.Expanding R using the definition in (<ref>), we find that this is equivalent to-κ G_C/V_8+2κ/C_8(F_C-G_V)-4F_V>0. 
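This inequality will be verified for the two relevant parameter pairs, (κ,z)=(1,z_1) and (κ,z)=(3/2,z_2), in the two results proved below; in both cases the strategy is first to rewrite 4F_V and the remaining terms as explicit expressions in C_8, γ and z, and then to check the sign directly.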
By using (<ref>), (<ref>), (<ref>) and w = 2C_8-1-(γ-2)z which is given by (<ref>), we compute4F_V= 4(-γ-1/2mwC_8 - 2C_8(C_8+mz)) = -2m(γ-1)wC_8-8C_8^2-8mzC_8=-[4m(γ-1)+8] C_8^2+2m(γ-1)C_8+2m(γ^2-2γ-3)zC_8.Thanks to(<ref>), (<ref>), (<ref>) , C_8^2 = -κ V_8,w = 2C_8-1-(γ-2)z and C_8=1+V_8, we obtain-κ G_C/V_8+2κ/C_8(F_C-G_V)= 2κ( -mγ z -1+V_8+2C_8+2mz-mw ) - 4mγ z C_8=2κ[ m-2+(3-2m)C_8] - 4mγ z C_8.Now, we are ready to show (<ref>) and (<ref>) hold.For any γ∈(, ], z=z_1 and κ=1, (<ref>) holds,and therefore so does (<ref>). To show that (<ref>) holds, by using (<ref>),it is enough to check the positivity of -κ G_C/V_8+2κ/C_8(F_C-G_V)-4F_V.When m=1, by using (<ref>), (<ref>), V_8(z_1)=-C_8^2(z_1), we obtain that, for any γ∈(,], - G_C/V_8+2/C_8(F_C-G_V)-4F_V=2C_8-2+4(γ+1) C_8^2 -2(γ-1)C_8+2(3-γ^2) z_1C_8=-2γ(2 V_8 +C_8)+2[(3-γ^2)z_1C_8+1] > (7-3√(5)) + 2[(3-^2)z_1()C_8+1] >0, where we have used (<ref>) and (<ref>) in the last two inequalities. When m=2, by using (<ref>), (<ref>) and V_8(z_1)=-C_8^2(z_1), we have,for anyγ∈(, ], - G_C/V_8+2/C_8(F_C-G_V)-4F_V= 8γ C_8^2-(4γ-2)C_8+4(3-γ^2)z_1C_8=-4γ(2V_8+C_8)+2C_8+4(3-γ^2) z_1 C_8>2(7-3√(5))γ +2[2(3-^2)z_1()+1]C_8>0, where we have used (<ref>) and (<ref>) in the last two inequalities.For any γ∈(,3], z=z_2 and κ=3/2, (<ref>) holds and therefore so does (<ref>). To show that (<ref>) holds, by using (<ref>),it is enough to check the positivity of -κ G_C/V_8+2κ/C_8(F_C-G_V)-4F_V. When m=1, by using (<ref>), (<ref>) and C_8^2 = -3/2V_8, we have, for any γ∈(,3], -3/2 G_C/V_8+3/C_8(F_C-G_V)-4F_V= 3C_8-3+4(γ+1) C_8^2 -2(γ-1)C_8+2(3-γ^2)z_2C_8=-2γ(3V_8+C_8)-V_8+2[(3-γ^2)z_2C_8+1]>2(6-√(3)3)γ-V_8+2(-6z_2()C_8+1)>0, where we have used (<ref>) and (<ref>) in the last two equalities. When m=2, by using (<ref>), (<ref>) and C_8^2 = -3/2V_8, we obtain, for any γ∈(,3], -3/2 G_C/V_8+3/C_8(F_C-G_V)-4F_V= 8γ C_8^2-(4γ-1)C_8+4(3-γ^2)z_2C_8=-4γ(3V_8+C_8)+[4(3-γ^2)z_2+1]C_8>-4γ(3V_8+C_8)+[-24z_2(2)+1]C_8=4(6-√(3)3)γ+16√(33)-93>0, where we have used (<ref>) and (<ref>) in the last two inequalities. § CALCULATION OF POLYNOMIALS Consider the cubic polynomial p(x)=ax^3+bx^2+cx+d where a≠ 0, b, c and d are all real numbers. Define the discriminant Δ of p(x) as Δ = b^2c^2-4ac^3-4b^3d-27a^2d^2+18abcd. Then, * If Δ = 0, then p(x) has a multiple root and all its roots are real; * If Δ >0, then p(x) has three distinct real roots; * If Δ<0, then p(x) has one real root and two complex conjugate roots. Consider the quartic polynomial p(x)=ax^4+bx^3+cx^2+dx+e where a≠ 0, b, c, d and e are all real numbers. Define the discriminant Δ of p(x) as Δ = 18abcd^3+18b^3cde-80abc^2de-6ab^2d^2e+144a^2cd^2e+144ab^2ce^2-128a^2c^2e^2-192a^2bde^2+b^2c^2d^2-4b^3d^3-4c^3d^3-4b^2c^3e+16ac^4e-27a^2d^4-27b^4e^2+256a^3e^3. Then, * If Δ = 0, then p(x) has a multiple root; * If Δ >0, then the roots of p(x) are either all real or all complex; * If Δ<0, then p(x) has two real roots and two complex conjugate roots. Take any polynomial p(x), and let p_0(x), … p_m(x) denote the Sturm chain corresponding to p(x). Take any interval (a, b) such that p_i(a), p_i(b) ≠ 0 , for any i∈{0,1,…,m}. For any constant c, let σ(c) denote the number of changes in sign in the sequence p_0(c), … p_m(c). Then p(x) has σ(a)-σ(b) distinct roots in the interval (a, b). For any γ∈(1,2], the cubic polynomial p(γ)=-γ^3-7γ^2+106/3γ-36<0. Since the discriminant of p(x) is Δ = -189956/27 <0, p(x) has only one real root by Lemma <ref>. Moreover, p(-∞)>0 and p(0)=-36<0, which implies that the real root must be negative. 
Thus, p(γ)<0 for γ∈(1,2]. For any γ∈(1,2], the quintic polynomial p(γ)=81(γ-1)(γ+1)^4-8γ(3γ-1)^4<0.By direct computations, the Sturm chain of p(γ) is given byp_0(γ) =-81 - 251 γ - 66 γ^2 - 270 γ^3 + 1107 γ^4 - 567 γ^5, p_1(γ) =-251 - 132 γ - 810 γ^2 + 4428 γ^3 - 2835 γ^4, p_2(γ) =52816/525 + 36944 γ/175 + 720 γ^2/7 - 41616 γ^3/175, p_3(γ) =-92164800/83521-126201600 γ/83521+162187200 γ^2/83521, p_4(γ) =-14651587904/271832505-5446905536 γ/453054175, p_5(γ) =-3876577982771200/86724949081. Therefore, σ(-∞)=3 and σ(∞)=2. Thus, p(γ) only has one real root. Since p(-∞)>0 and p(1)<0, p(γ)<0 for any γ∈(1,2]. For any γ∈(,], the quartic polynomial p(γ)=8(γ-2)^2γ^2/625+ 2(1-2γ)γ(γ+1)/25 +(1-2γ)(2-γ)<0. By (<ref>), the discriminant of p(γ) is Δ = -1586137650624/3814697265625<0. By Lemma <ref>, p(x) has two real roots. Since p(-∞) >0, p(1)=-717/625<0, p(5/2)=-39/50<0, and p(+∞) >0, p(γ)<0 for any γ∈(,]. For any γ∈(,], the quadratic polynomial p(γ)=(3√(2)-4)γ+1-√(2)+2/25[-(2-√(2))γ^2+(3-2√(2))γ+2√(2)+2]>0. Since p(-∞)<0, p(1)=58√(2)-77/25>0, p(3)=264√(2)-361/25>0, and p(∞)<0. So, p(γ)>0 for any γ∈(,]. For any γ∈(,], the quadratic polynomial p(γ)=2(3√(2)-4)γ+3-3√(2)+4/25[-(2-√(2))γ^2+(3-2√(2))γ+2√(2)+2]>0. Since p(-∞)<0, p(3/2)=182 √(2)-253/25>0, p(3)=503√(2)-697/25>0, and p(∞)<0. So, p(γ)>0 for any γ∈(,]. For any γ∈(,3], the cubic polynomial p(γ) = γ^3-17γ^2+40γ-14<0. Since p(-∞)<0, p(∞)>0, p(1)=10>0, p()=11 √(2)-18<0, p(3) = -20<0. Thus, p(x)<0 for any γ∈(,3]. For any γ∈(,3], the quartic polynomial p(γ)=-γ^4+12γ^3-36γ^2+32γ+1>0. Sincethe discriminant of p(γ) is Δ = -495616<0, by Lemma <ref>, p(x) has two real roots. Thus, p(-∞)<0, p(1) =8 >0, p(3)=16, and p(∞)<0implies that p(x)>0 for any γ∈(,3]. For any γ∈(1,2], the cubic polynomial p(γ) = 2γ^3-12γ^2+23γ-11>0. Sincethe discriminant of p(γ) is Δ = -964<0, by Lemma <ref>, p(x) has only one real root. Thus, as p(-∞)<0 and p(1)=2, we must have that p(x)>0 for any γ∈(1,2]. For any γ∈(1,2], the quartic polynomial p(γ) = -γ^4+12γ^3-14γ^2+24γ-9>0. Sincethe discriminant of p(γ) is Δ = -30235392<0, by Lemma <ref>, p(x) has two real roots. Thus, as p(-∞)<0, p(∞)<0, p(1) = 12>0 and p(2)=63>0, we have thatp(x)>0 for any γ∈(1,2]. For any γ∈(1,2], the quartic polynomial p(γ) = -36γ^4+114γ^3-4γ^2-83γ+15>0. Note thatthe first derivative of p(γ) is d p(γ)/d γ = -144γ^3+342γ^2-8γ-83. Since d p/d γ(-∞)>0, d p/d γ(∞)<0, d p/d γ(0)=-83<0, d p/d γ(1) = 107>0, and d p/d γ(2) = 117>0, d p(γ)/d γ>0 for any γ∈(1,2]. Thus, p(1)=6 implies that p(x)>0 for any γ∈(1,2].99Axford81 Axford, R. A. and Holm, D. D., Converging finite-strength shocks, Physica D 2 (1981), 194–202 Abbrescia22 Abbrescia, L., Speck, J., The emergence of the singular boundary from the crease in 3D compressible Euler flow, arXiv preprint, arXiv:2207.07107 (2022)Biasi21 Biasi, A., Self-similar solutions to the compressible Euler equations and their instabilities,Commun. Nonlinear Sci. Numer. Simul., 103 (2021), Paper No. 106014Bilbao96 Bilbao, L. E. and Gratton, J., Spherical and cylindrical convergent shocks, Il Nuovo Cimento D 18 (1996), 1041–1060 BCG22 Buckmaster, T., Cao-Labora,G.,Gomez-Serrano, J., Smooth imploding solutions for 3D compressible fluids, arXiv preprint,arXiv:2208.09445 (2022)Buckmaster22b Buckmaster, T., Drivas, T., Shkoller, S., Vicol, V., Simultaneous development of shocks and cusps for 2D Euler with azimuthal symmetry from smooth data, Ann. PDE 8 (2022), Paper No. 26Buckmaster23 Buckmaster, T., Shkoller, S., Vicol, V., Shock formation and vorticity creation for 3D Euler, Comm. Pure Appl. 
Math. 76 (2023), 1965–2072CGSS23 Cao-Labora,G.,Gomez-Serrano, J., Shi, J., Staffilani, G., Non-radial implosion for compressible Euler and Navier-Stokes in 𝕋^3 and ℝ^3, arXiv preprint, arXiv: 2310.05325 (2023)Chen97 Chen, G.-Q., Remarks on spherically symmetric solutions of the compressible Euler equations, Proc. Roy. Soc. Edinburgh Sect. A 127 (1997), 243–259Chen15Chen, G.-Q.,Perepelitsa, M., Vanishing viscosity solutions of the compressible Euler equations with spherical symmetry and large initial data, Comm. Math. Phys. 338 (2015), 771–800 Chen18 Chen, G.-Q., Schrecker, M. R. I., Vanishing viscosity approach to the compressible Euler equations for transonic nozzle and spherically symmetric flows, Arch. Ration. Mech. Anal. 229 (2018), 1239–1279 Chen22 Chen, G.-Q., Wang, Y., Global Solutions of the Compressible Euler Equations with Large Initial Data of Spherical Symmetry and Positive Far-Field Density, Arch. Ration. Mech. Anal. 243 (2022), 1699–1771Chisnell98 Chisnell, R. F.,An analytic description of converging shock waves,Journal of Fluid Mechanics, 354 (1998), 357–375Christodoulou07 Christodoulou, D., The formation of shocks in 3-dimensional fluids, EMS Monogr. Math. European Mathematical Society (EMS), Zürich, 2007Christodoulou16 Christodoulou, D., Lisibach, A., Shock development in spherical symmetry, Ann. PDE 2 (2016), Paper No. 3ChrisMiao14 Christodoulou, D., Miao, S., Compressible Flow and Euler's Equations, Surveys in Modern Mathematics Volume 9, International Press, 2014Courant48 Courant, R., Friedrichs, K.O., Supersonic Flow and Shock Waves, Springer, New York, 1948Dafermos16 Dafermos, C., Hyperbolic conservation laws in continuum physics, Grundlehren Math. Wiss., 325Springer-Verlag, Berlin, 2016Giron23 Giron, I., Balberg, S., Krief, M., Solutions of the converging and diverging shock problem in a medium with varying density, Physics of Fluids 35 (2023), 066112 Guderley42 Guderley, G., Starke kugelige und zylindrische verdichtungsstösse in der nähe des kugelmittelpunktes bzw. der zylinderachse, Luftfahrtforschung 19 (1942), 302–311GHJ21 Guo, Y., Hadžić, M., Jang, J., Larson-Penston self-similar gravitational collapse, Comm. Math. Phys. 386 (2021), 1551–1601 GHJ23 Guo, Y., Hadžić, M., Jang, J., Naked singularities in the Einstein-Euler system, Ann. PDE 9 (2023), Paper No. 4GHJS22 Guo, Y., Hadžić, M., Jang, J., Schrecker, M.,Gravitational Collapse for Polytropic Gaseous Stars: Self-similar Solutions, Arch. Ration. Mech. Anal. 246 (2022), 957–1066Irving04 Irving, R.S.,Integers, polynomials, and rings: A course in Algebra, Springer, New York 2004 Jenssen18 Jenssen, H.K.,Tsikkou, C.,On similarity flows for the compressible Euler system,J. Math. Phys. 59 (2018), 121507 Jenssen23 Jenssen, H.K.,Tsikkou, C., Radially symmetric non-isentropic Euler flows: Continuous blowup with positive pressure, Physics of Fluids 35(2023), 016117Landau87 Landau, L.D., Lifshitz, E.M.: Fluid Mechanics, Course of Theoretical Physics, Vol. 6, 2nd edition, Elsevier, London, 1987Lazarus81 Lazarus, R. B.,Self-similar solutions for converging shocks and collapsing cavities, SIAM J. Numer. Anal. 18 (1981), 316–371 Luk18 Luk, J., Speck, J., Shock formation in solutions to the 2D compressible Euler equations in the presence of non-zero vorticity, Invent. Math. 214 (2018), 1–169Makino92 Makino, T., Mizohata, K., Ukai, S., Global weak solutions of the compressible Euler equations with spherical symmetry I, Jpn. J. Ind. Appl. Math. 
9 (1992), 431–449Makino94 Makino, T., Mizohata, K., Ukai, S., Global weak solutions of the compressible Euler equations with spherical symmetry II, Jpn. J. Ind. Appl. Math. 11 (1994), 417–426Merle22a Merle, F., Raphaël, P.,Rodnianski, I., Szeftel, J., On the implosion of a compressible fluid i: smooth self-similar inviscid profile, Ann. of Math. (2), 196 (2022), 567–778 Merle22b Merle, F., Raphaël, P.,Rodnianski, I., Szeftel, J., On the implosion of a compressible fluid ii: singularity formation, Ann. of Math. (2) 196 (2022), 779–889Morawetz51 Morawetz, C. L., Contracting spherical shocks treated by a perturbation method, PhD Thesis, New York University (1951)Ponchaut06 Ponchaut, N. F., Hornung, H. G., Pullin, D. I., and Mouton, C. A., On imploding cylindrical and spherical shock waves in a perfect gas, J. Fluid Mech. 560 (2006), 103–122Ramsey12 Ramsey, S. D., Kamm, J. R., and Bolstad, J. H., The Guderley problem revisited, Int. J. Comput. Fluid Dyn. 26 (2012), 79–99Schrecker20 Schrecker, Matthew R. I., Spherically symmetric solutions of the multidimensional, compressible, isentropic Euler equations, Trans. Amer. Math. Soc. 373 (2020), 727–746Sed Sedov, L. I.: Propagation of strong shock waves, Journal of Applied Mathematics and Mechanics 10 (1946), 241–250 Sedov Sedov, L. I. :Similarity and dimensional methods in mechanics, "Mir", Moscow, 1982. Translated from the Russian by V. I. Kisin. SV23 Shkoller, S., Vicol, V.,The geometry of maximal development for the Euler equations, arXiv preprint,arXiv: 2310.08564 (2023)Sideris85 Sideris, T. C., Formation of singularities in three-dimensional compressible fluids, Comm. Math. Phys. 101 (1985), 475–485Stanyukovich Stanyukovich, K. P. Unsteady motion of continuous media, Pergamon Press, 1960Sturm09 Sturm, Par C.,Mémoire sur la Résolution des équations numériques, Collected Works of Charles Francois Sturm, Birkhäuser Basel,(2009), 345–390Teschl12 Teschl, G., Ordinary differential equations and dynamical systems, American Mathematical Soc. 140 (2012)Welsh67 Welsh, R. L., Imploding shocks and detonations, J. Fluid Mech. 29 (1967), 61–79Yin04 Yin, H., Formation and construction of a shock wave for 3-D compressible Euler equations with the spherical initial data, Nagoya Math. J. 175 (2004), 125–164ZRY67 Zel’dovich, Ya. B; Razier, Yu. P.,Physics of shock waves and high temperature hydrodynamic phenomena, Volume II, Academic Press, New York and London, 1967 | http://arxiv.org/abs/2310.18483v1 | {
"authors": [
"Juhi Jang",
"Jiaqi Liu",
"Matthew Schrecker"
],
"categories": [
"math.AP"
],
"primary_category": "math.AP",
"published": "20231027205539",
"title": "On self-similar converging shock waves"
} |
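As a quick numerical cross-check of the Sturm-chain argument used in the proof appendix above (the proposition that 81(γ-1)(γ+1)^4 - 8γ(3γ-1)^4 < 0 on (1,2]), the following minimal Python sketch counts the real roots of that quintic in (1, 2] by sign changes of a Sturm sequence, using exact rational arithmetic. The helper routines are illustrative and not taken from the paper; the polynomial is entered in its expanded form, and the evaluation points 1 and 2 are assumed not to be roots of any chain entry (zero values are dropped, the usual convention).

```python
# Count real roots of p(x) = 81(x-1)(x+1)^4 - 8x(3x-1)^4 in (1, 2] via a Sturm chain.
# Polynomials are coefficient lists, highest degree first, with exact Fractions.
from fractions import Fraction

def poly_rem(num, den):
    """Remainder of polynomial long division num / den (leading zeros stripped)."""
    num = list(num)
    while len(num) >= len(den):
        c = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)
    while num and num[0] == 0:
        num.pop(0)
    return num

def deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def sturm_chain(p):
    chain = [p, deriv(p)]
    while len(chain[-1]) > 1:
        r = poly_rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    return chain

def evalp(p, x):
    v = Fraction(0)
    for c in p:
        v = v * x + c
    return v

def sign_changes(chain, x):
    vals = [evalp(p, x) for p in chain]
    vals = [v for v in vals if v != 0]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# expanded form: p(x) = -567x^5 + 1107x^4 - 270x^3 - 66x^2 - 251x - 81
p = [Fraction(c) for c in (-567, 1107, -270, -66, -251, -81)]
chain = sturm_chain(p)
print("roots in (1, 2]:", sign_changes(chain, 1) - sign_changes(chain, 2))  # expect 0
print("p(1) =", evalp(p, 1), "  p(2) =", evalp(p, 2))                       # both negative
```

With no root in (1, 2] and p(1) < 0, the polynomial keeps a negative sign on the whole interval, consistent with the proposition.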
figure | http://arxiv.org/abs/2310.17943v1 | {
"authors": [
"Yufei Chen",
"Lei Liu",
"Yuhao Chi",
"Ying Li",
"Zhaoyang Zhang"
],
"categories": [
"cs.IT",
"math.IT"
],
"primary_category": "cs.IT",
"published": "20231027073849",
"title": "Low-Complexity and Information-Theoretic Optimal Memory AMP for Coded Generalized MIMO"
} |
Integer Sequences: Irregular Arraysand Intra-Block Permutations Boris Putievskiy October 27, 2023 ===================================================================This article investigates integer sequences that partition the sequence into blocks of various lengths - irregular arrays. The main result of the article is explicitformulas for numbering of irregular arrays. A generalization of Cantor diagonal method is proposed. We also define and describe intra-block permutations of natural numbers. Generalizations of reluctant sequences are introduced, namely generalized reluctant sequences and generalized reverse reluctant sequences. Explicit formulas are presented for these sequences. The article provides numerous examples to illustrate all statements.§ INTRODUCTIONDenote the set of integers by ℤ, the set of nonnegative integers by ℤ^*, the set of positive integers by ℤ^+, the set of positive real numbers by ℝ^+.Denote the set of integer sequences by 𝒜 and the set of positive integer sequences by 𝒜^+.A pairingfunction is a function that reversibly maps ℤ^+ x ℤ^+ → ℤ^+. A permutation of natural numbersis bijective map ℤ^+ → ℤ^+.A block (or segment) of a sequence is any set of the consecutive terms of the form (a_k+1,a_k+2,a_k+3,... a_k+m), where k∈ℤ^*, m ∈ℤ^+, and mis the length of the block.Throughout this paper, we will refer to sequences by their Annnnn numbers, as found in the Online Encyclopedia of Integer Sequences [1]. Denote the sequence of natural numbers (1,2,3,...)A000027byξ.§ PARTITIONS OF THE SET OF POSITIVE INTEGERSDefinition 2.1. Let a sequences α: a_1, a_2, a_3,...∈𝒜 and β: b_1, b_2, b_3,... ∈ 𝒜^+.The sequence β partitions the sequence α of into blocks of lengths b_1, b_2, b_3,.... The sequence α is written as irregular array read by rows: a_1, a_2, ... a_b_1,a_b_1+1, a_b_1+2, ... a_b_1+b_2, a_b_1+ b_2+1,a_b_1+ b_2+2, ... a_b_1+b_2+b_3, . .. The sequence β is called partitioning sequence. We use two parameters to number the terms of an irregular array L(n) and R(n). Where L(n) represents the block number, and R(n)indicates the position within the block from left to right. Thus (1,1),(1,2), ... (1,b_1),(2,1),(2,2), ... (2,b_2),(3,1),(3,2), ... (3,b_3), . .. Denote by B(s)=b_1+b_2+...+b_s partial sums β, B(s)=0,B(s-1)+b_s=B(s). Let L(0)=0, forn ≥ 1 we get: R(n)=n - B(L(n)-1). Denote by R^'(n) the position within the block from right to left. Then R^'(n) = B(L(n))+1-n,R(n)+R^'(n)=b_L(n)+1. Using (2), we can derive a formula for the inverse problem: how to calculate the number of terms if the values of the functions L and R are known. n=B(L-1)+R.Let the sequences β = ξ, then a sequence α is written as regular array read by rows: a_1,a_1, a_2, a_1,a_2,a_3, . .. These formulas are commonly knownA003056, A002260, A004736. Row numbering of a regular array starts from 0: t=⌊√(8n-7)-12⌋.Then L(n)=t+1,R(n)= n-t(t+1)2,R^'(n)= (t+1)(t+2)2+1-n, R(n)+R^'(n)=t+2. Let x(n): ℤ^+→ℝ^+ and x(n) is the largestroot of the equation B(x)=n. Then L(n)=⌈ x(n) ⌉. By definition B(0) = 0, B(1) =b_1, B(2) =b_1+b_2, ... The function B(n) is strictly increasing. Therefore 0< x(1) < x(2) < ... < x(b_1) = 1, 1< x(b_1 + 1) < x(b_1 + 2) < ... < x(b_1 + b_2) = 2, 2< x(b_1 + b_2 + 1) < x(b_1 + b_2 + 2) < ... < x(b_1 + b_2 + b_3) = 3, ⋯ We obtain ⌈ x(1) ⌉=1, ⌈ x(2) ⌉=1, ... , ⌈ x(b_1) ⌉=1, ⌈ x(b_1+1) ⌉=2, ⌈ x(b_1+2) ⌉=2, ... , ⌈ x(b_1+b_2) ⌉=2,⌈ x(b_1+b_2+1) ⌉=3, ⌈ x(b_1+b_2+2) ⌉=3, ... ,⌈ x(b_1+b_2+b_3) ⌉=3, ⋯Let L(n)be the number of block of the sequence β:b_1, b_2, b_3,... ∈𝒜^+ and m ∈ℤ^+, m>1. 
The following properties hold.(P2.1.) The number of the block of the sequence :mb_1, mb_2, mb_3,... is L(u), where u=⌊n-1/m⌋ +1.(P2.2.) Let the sequence β:b_s=0mod m for s ≥ 1.The number of the block of the sequence b_1m, b_2m, b_3m,... is L(mn).(P2.3.) Let a sequences β be the union of m rows of the sequence β:b_1=b_1+b_2+... b_m,b_2=b_m+1+b_m+2+... b_2m,b_3=b_2m+1+b_2m+2+... b_3m, ...Then L(n)=⌊L(n)+m-1m⌋. Let's examine some special cases of the sequenceβ.Example 2.0. Let p_0∈ℤ^+, b_s=p_0. Using (1), (2) and (3) we getB(s) = p_0s,x(n)=np_0,L(n)=⌈np_0⌉, R(n)= n- p_0(L(n)-1).Example 2.1. Let the partitioning sequence β is linear functionb_s=p_1s + p_0, where p_0∈ℤ,p_1∈ℤ^+. Using (1), (2) and (3) we getB(s) = p_1s(s+1)2+p_0s.L(n)=⌈-2p_0-p_1+√(8 n p_1+(2p_0+p_1)^2)2p_1⌉. R(n)= n - p_1(L(n)-1)L(n)2+p_0(L(n)-1),Let p_1=5 and p_0=2, thenL(n)=⌈√(n+9)-3⌉:1,1,1,1,1,1,1, 2,2,2,2,2,2,2,2,2, 3,3,3,3,3,3,3,3,3,3,3,. .. Example 2.1.1. This is a special case of the previous example p_0=0, b_s=p_1s.B(s) = p_1s(s+1)2, L(n)=⌈-p_1+√(8 n p_1+p_1^2)2p_1⌉. For p_1=1 weobtainthe regular array and popular formula A002024L(n)=⌈-1+√(8 n +1)2⌉. For p_1=2 we get irregular array and the formula A000194L(n)=⌈-1+√(4 n +1)2⌉. We can also solve this problem by using (P2.1). L(n)=⌈-1+√(8u +1)2⌉, where u=⌊n-1/p_1⌋ +1. Article [4] presents an alternative method L(n)=⌈√(⌈2n/p_1⌉)+12⌉ - 1.Example 2.1.2. Cantor's diagonalization is a well-known for numbering infinite arrays. In this example, we propose a generalization of the Cantor numbering method for two adjacent diagonals. A pair of neighboring diagonals are combined into one block.The sequences α = ξ. The partitioning sequence β is b_s= 4s-1, p_1=4,p_0=-1,A004767: 3, 7, 11, 15, 19,... ThenL(n) = ⌈-1 + √(8n + 1)/4⌉.The partial sums B(s)=s(2s+1) is the sequence of second hexagonal numbers [3] , A014105.Example 2.1.3. Let d ∈ℤ^+, d>1. We shall combine d diagonals into one block, starting with the first diagonal. The sequence α = ξ. Then b_s= d^2s -d(d-1)2, B(s)=ds(ds+1)2.Using (4) for p_1=d^2 and p_0=-d(d-1)2 we getL(n)=⌈-1+√(8n-7)/2d⌉,A second way to solve this problem is to use (P2.3.). L(n)=⌊t+dd⌋,wheret=⌊√(8n-7)-12⌋ .For d=3 b_s=9s-3 is the sequences A017233 and we obtain 1,1,1,1,1,1, 2,2,2,2,2,2,2,2,2,2,2,2,2,2,2, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3, . .. Example 2.1.4. Now we shall change example 2.1.3. by forming blocks of d adjacent diagonals, starting from the second diagonal, α = ξ. Then b_1=1, b_s= d^2(s-1) -d(d-3)2 fors>1,The partitioning sequence β is not linear function. B(0)=0, B(s)=(d(s-1)+1)(d(s-1)+2)2 fors>1. Using [3] we obtainL(n)=⌈2d-3+√(8n+1)/2d⌉.We can solve this problem using a modified version of (P2.3.). Let a sequences β: b_1=b_1, b_2=b_2+... b_m+1,b_3=b_m+2+b_m+3+... b_2m+1, .... ThenL(n)=⌊t+d-1d⌋+1, wheret=⌊√(8n-7)-12⌋ . For d=3 the sequence βis b_1=1, b_s=9(s-1),fors>1 and we get 1, 2,2,2,2,2,2,2,2,2, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3, . ..Example 2.2. Let the partitioning sequence β is quadratic function b_s=p_2s^2 + p_1s + p_0, where p_0,p_1∈ℤ, p_2∈ℤ^+. B(s)=p_2s(s+1)(2s+1)/6+p_1s(s+1)/2+p_0s.For the cubic equation2p_2x^3 + (3p_2+3p_1)x^2 + (p_2+3p_1+6p_0)x - 6n = 0 we use Cardano's formula [5].L(n) =⌈-p_1+p_2/2p_2 - U/3·2^2/3· p_2√(V+√(4U^3+V^2)) +1/6 · 2^1/3· p_2√(V+√(4U^3+V^2))⌉, whereU= 3(- 3p_1^2 + 12p_0p_2 - p_2^2), V = 54(- p_1^3 + 6p_0p_1 + 12np_2^2 + 6p_0p_2^2 + p_1p_2). R(n)=n- p_2(L(n)-1)L(n)(2(L(n)-1)+16-p_1(L(n)-1)L(n)2 - p_0(L(n)-1).Example 2.2.1. 
This is a special case of the previous example p_2=1, p_1=0,p_0≥ 0, b_s=s^2+p_0. Using (1) and (5) we getB(s) =s(s+1)(2s+1)/6+ p_0s, U=36p_0-3,V=648n+324p_0. thenThe discriminant Δ=-(4(36p_0-3)^3+(648n+324p_0)^2)<0 and so the cubic equation has one real root and two non-real complex conjugate roots.L(n) =⌈-1/2 - 36p_0-3/3·2^2/3√(648n+324p_0+√(4(36p_0-3)^3+(648n+324p_0)^2)) +1/6 · 2^1/3√(648n+324p_0+√(4(36p_0-3)^3+(648n+324p_0)^2))⌉, For p_0=0 we obtain the formula for A074279: L(n)= ⌈1/2( -1 + 1/3^1/3W + W/3^2/3) ⌉,where W=(108 n+√(3)√(-1+3888 n^2))^1/3. For p_0=1 we obtain L(n): 1,1, 2,2,2, 2,2, 3,3,3, 3,3,3, 3,3,3, 3, . .. Example 2.2.2. Let the sequence β is quadratic function with the coefficients p_2=m-22,p_1=-m-42,p_0=0,m ∈ℤ^+, m ≥ 3. Thenb_s=(m-2)s^2-(m-4)s2form the sequence of polygonal numbers [3]. Using (1) and (5) we getB(s)=(m-2)s(s+1)(2s+1)/12 - (m-4)s(s+1)/4.The cubic equation takes the form (2m-4)x^3 + 6x^2-(2m-10)x -12n=0. ThenU=-156+84m-12m^2, V=-2592+1512 m-216 m^2+5184 n-5184 m n+1296 m^2 n, L(n) = ⌈-1/m-2 - U/3·2^2/3· (m-2) √(V+√(4U^3+V^2)) +1/6 · 2^1/3· (m-2)√(V+√(4U^3+V^2))⌉, For m > 19, the cubic polynomial is in casus irreducibilis, with three distinct real roots. Therefore, we must use a trigonometric solution to find the roots.For m=5 the sequenceb_s is the sequence of pentagonal numbers A000326.Then L(n): 1, 2,2,2,2,2, 3,3,3,3,3,3,3,3,3,3,3,3, . .. Example 2.2.3. Let the sequence β is quadratic function with the coefficients p_2=m2,p_1=-m2,p_0=1,m ∈ℤ^+. Thenb_s=ms^2-s2+1form the sequence of centered polygonal numbers [3]. Using (1) and (5) we getB(s)=ms(s+1)(2s+1)/12 - ms(s+1)/4 + s.The cubic equation takes the form mx^3 + (6-m)x-6n=0. ThenL(n) =⌈-2^1/3 (6-m)/√(162 m^2 n+√(108 (6-m)^3 m^3+26244 m^4 n^2))+ √(162 m^2 n+√(108 (6-m)^3 m^3+26244 m^4 n^2))/3· 2^1/3· m⌉ Form > 24, the cubic polynomial is in casus irreducibilis, with three distinct real roots. Consequently, we employ trigonometric solution to find the roots.For m=5 the sequence b_s is the sequence of centered pentagonal numbers A005891. Then L(n): 1, 2,2,2,2,2,2, 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3, . ..Example 2.3. Let the partitioning sequence β is cubic function b_s=p_3s^3 + p_2s^2 + p_1s + p_0, where p_0, p_1, p_2∈ℤ,p_3∈ℤ^+. B(s)=p_3s^2(s+1)^2/4+p_2s(s+1)(2s+1)/6+p_1s(s+1)/2+p_0s.There are formulas [5],[6] for solving the 4th degree equation3p_3x^4 + (6p_3+4p_2)x^3 + (3p_3+6p_2+6p_1)x^2 + (12p_0+6p_1+2p_2)x - 12n = 0.A different approach is to use numerical solutions of equation. Example 2.3.1. This is a special case of the previous example p_3=1, p_1=0,p_0≥ 1, b_s=s^3+p_0. For p_0=1 we get the equationx^2(x+1)^2+4x-4n=0. Using (3) we obtain L(n): 1, 1, 2,2,2, 2,2,2, 2,2,2, 3,3,3, 3,3,3, 3,3,3, 3,3,3, 3,3,3, 3,3,3, 3,3,3, 3,3,3, 3,3,3, 3, . ..Example 2.3.2. Let the sequence β is cubic function with the coefficients p_3=m-26,p_2=12,p_1=-m-56,p_0=0,m ∈ℤ^+, m ≥ 3.Thenb_s=16s(s+1)((m-2)s-(m-5))form the sequence of pyramidal numbers [3]. For m=5 we get the sequence of pentagonal pyramidal numbers A002411 and the equation x(x^3 + 4x^2+ 4x+1) -12n =0. Using (3) we obtain L(n):1, 2,2,2,2,2,2, 3,3,3,3,3,3, 3,3,3,3,3,3, 3,3,3,3,3,3 . ..Example 2.4. Let the partitioning sequence β is b_s=(m-1)m^s-1 form > 1ands≥ 1. Using (1), (2) and (3) we getB(s) = m^s - 1,L(n)=⌈log_m(n+1) ⌉, R(n)= n - m^⌈log_m(n+1) ⌉-1+1. The sequences A029837 and A081604 are examples of sequences generated by m=2 and m=3, respectively.§ INTRA-BLOCK PERMUTATION OF INTEGER POSITIVE NUMBERSLet β is the partitioning sequence. 
The sequence ξ is written as irregular array read by rows: B(0)+1, B(0)+2, ... , B(0)+b_1, B(1)+1,B(1)+2, ..., B(1)+b_3, . .. B(k)+1, B(k)+2, ..., B(k)+b_k+1. . ..Definition 3.1. A sequence α∈𝒜^+ is called an intra-block permutation of integer positive numbers if it maps each block (B(k)+1, B(k)+2, ..., B(k)+b_k+1) to itself.This means that each block of the sequences α a_B(k)+1, a_B(k)+2, ..., a_B(k)+b_k+1 is a permutation of the numbersB(k)+1, B(k)+2, ..., B(k)+b_k+1. Denote by π(n) a permutation of the first n natural numbers (p_1,p_2,p_3,...p_n).The group of all permutationsπ(n) is denoted by S_n, is called the symmetric group of degree n.The order of a permutation π, denoted by o(π), is defined as the smallest positive integer m such that π^m = id [5].The set of numbers a_B(k)+1-B(k), a_B(k)+2-B(k), ..., a_B(k)+b_k+1-B(k)is a permutation π(b_k+1). The sequence α is determined by the sequence β and the set of permutationsπ(b_1),π(b_2),π(b_3)... Let α∘α is self-composition α(α) of the sequence α [7]. This operation is equivalent to multiplying permutations.π(b_1) ∘π(b_1), π(b_2) ∘π(b_2),π(b_3) ∘π(b_3),... The sequence ξ consists of identity permutations. Definition 3.2. The order of a sequence α, denoted by o(α), is the smallest positive integer m such that m times self-composition α^m = ξ. The following properties hold.(P3.1.) The sequence α is permutation of the natural numbers.(P3.2.) The order of αo(α)=LCM (o(π(b_1),o(π(b_2),o(π(b_3),...).(P3.3.) The sequences α, α^2, α^3, ... form a cyclic group. Let a sequence γ: g_1, g_2, g_3,...∈ 𝒜^+ such that g_1 + g_2 + ... + + g_m_1 =b_1, g_m_1+1 + g_m_1+2 + ... + g_m_1+m_2 =b_2, g_m_1+m_2+1 + g_m_1+m_2+2 + ... + + g_m_1+m_2+m_3 =b_3, . .. Thus, the sequence γ partitions the sequence ξ into blocks, such that each block of the sequence β is a collection of disjoint blocks of γ, whose union is the block β. We denote by γ≤β. Let a sequence μ∈𝒜^+ is an intra-block permutation of integer positive numbers for partitioning sequence γ.(P3.4.) The sequences μ is intra-block permutation for the partitioning sequence β.(P3.5.) The set of sequences 𝒜^+,equipped with a binary relation ≤, form partially ordered set [8]. The minimal element is the sequence (1,1,1,...) A000012.(P3.6.) The sequences α∘μ is intra-block permutation for the partitioning sequence β. In all examples in this section, we shall use the partitioning sequence from example 2.1.2. β: b_s= 4s-1 for s ≥ 1. All permutationsπ(b_1),π(b_2),π(b_3)... have odd length. The sequence α = ξ. Example 3.1. Terms of the π(n):p_i=R^'(i). The order of permutations o(π(b_s))=2 for s ≥ 1. So o(α)=2 and thesequence α is self-inverse permutation of the natural numbers. The sequence as irregular array begins 3,2,1, 10,9,8,7,6,5,4, 21,20,19,18,17,16,15,14,13,12,11, . .. Example 3.2. The formula for terms of the π:p(i)= R^'(i), if R^'(i)≥ R(i)+1,R(i)-⌊R(i)+R^'(i)-12⌋,if R^'(i) < R(i)+1.The order of permutations o(π(b_1))=3,o(π(b_s))=12 for s ≥ 2.Thus o(α)=12. The sequence begins 3,1,2, 10,9,8,4,5,6,7, 21,20,19,18,17,11,12,13,14,15,16, . .. Example 3.3. The formula for terms of the π:p_i =⌊ 4L(i)-12⌋+R(i)+1, if R(i) > R^'(i), R(i) - ⌊4L(i)-12⌋, if R(i) ≤ R^'(i).The order of permutations o(π(b_s))=b_sfor s ≥ 1. So the sequence α has infinite order. Another formula: a(n)= (i+j-1)^2+i-j+3 +2(i + j - 1)(-1)^i + j2,wherei=n-t(t+1)2,j= t^2+3t+42-n,t=⌊√(8n-7)-12⌋ . The start of the sequence α: 3,1,2, 8,9,10,4,5,6,7, 17,18,19,20,21,11,12,13,14,15,16 . ..§ GENERALIZED RELUCTANT SEQUENCESDefinition 4.1. 
Let sequences α ∈ 𝒜 and ω ∈ 𝒜^+. The sequence ω is called the reluctant sequence of sequence α, if ω is the triangle array read by rows, with row number k coinciding with the first k elements of the sequence α [2].Formula for a reluctant sequence is:ω(n)=a_m, wherem=n-t(t+1)2,t=⌊√(8n-7)-12⌋. Definition 4.2. Let sequences α ∈ 𝒜 and ω ∈ 𝒜^+. The sequence ω is called the reverse reluctant sequence of sequence α, if ω is the triangle array read by rows, with row number k coinciding with the first k elements of the sequence α in reverse order [2].Formula for a reverse reluctant sequence is:ω(n)=a_m,wherem=t^2+3t+42-n, t=⌊√(8n-7)-12⌋.Let q ∈ℤ^+. Denote by (a_k+1,a_k+2,a_k+3,... a_k+m)^qq times concatenation of the block a_k+1,a_k+2,a_k+2,...a_k+m:a_k+1,a_k+2,a_k+3,...a_k+m,a_k+1,a_k+2,a_k+3,...a_k+m,... a_k+1,a_k+2,a_k+3,... a_k+m_q timesDefinition 4.3. Let a sequence α: a_1, a_2, a_3,... ∈𝒜. A sequences β: b_1, b_2, b_3,... ∈ 𝒜^+andq ∈ℤ^+. The sequence ω is called the generalized reluctant sequence of sequences α if ω is irregular array read by rows:(a_1, a_2, …a_b_1)^q,(a_1, a_2, …a_b_1, a_b_1+1, …a_b_1+b_2)^q,(a_1, a_2, …a_b_1, a_b_1+1, …a_b_1+b_2, a_b_1+b_2+1,a_b_1+b_2+2,…a_b_1+b_2+b_3)^q,… Definition 4.4. Let a sequence α: a_1, a_2, a_3,... ∈𝒜. A sequences β: b_1, b_2, b_3,... ∈ 𝒜^+andq ∈ℤ^+. The sequence ω^' is called the generalized reverse reluctant sequence of sequences α if ω^' is irregular array read by rows: (a_b_1, a_b_1-1, …, a_1)^q,(a_b_1+b_2, a_b_1+b_2-1, …a_b_1, a_b_1-1, …a_1)^q,(a_b_1+b_2+b_3, a_b_1+b_2+b_3-1, …a_b_1+b_2, a_b_1+b_2-1, …a_b_1, a_b_1-1, …a_1)^q,… As an illustration, consider the following examples. Let α = ξ, a partitioning sequences γ is increasing g_s < g_s+1, for s ≥ 1. ThenThe sequence R(n):1,2, ...g_1,1,2, ...g_1, ...g_2,1,2, ... g_1, ...g_2, ...g_3,. .. is generalized reluctant sequence of sequences ξ for q=1. Similarly, for the sequence R^'(n): g_1, g_1-1, ...1,g_2, g_2-1, ...g_1, g_1-1, ...1,g_3, g_3-1, ...g_2, g_2-1, ...g_1, g_1-1, ...1,. .. is generalized reverse reluctant sequence of sequences ξ for same γ and q=1.If the sequence β = A000012 and q=1 generalized reluctant sequence becomes reluctant sequence A002260.Similarly, for the same sequence β and q generalized reverse reluctant sequence becomes reverse reluctant sequenceA004736. There are some examples generalized reluctant sequence for q=1 andβ: b_1=1,b_s=2 for s ≥ 2 A071797,b_1=1,b_s=2s-1 for s ≥ 2 A064866,b_1=1,b_s=2^s-2for s ≥ 2 A062050,for q=2 andβ: b_s=1,s ≥ 1 A122197.The example of generalized reverse reluctant sequence is A080883 for q=1 andβ:b_1=1,b_s=2 for s ≥ 2. Let's create a formula to calculate L. Denote by ζ: c_1, c_2, c_3,... the partitioning sequence for the array (6), where c_s=qB(s).Denote byC(s) partial sums ζ:C(s)=0,C(s)= c_1+c_2+...+c_s.Using (2) and (3) we get L(n)=⌈ y(n) ⌉,where y(n) is the largestroot of the equation C(x)=n, R(n) = n- C(L(n)-1),R^'(n) = C(L(n))+1-n.Let's develop a formula for finding the term of the array (6). The term ω(n) is located in the row L(n) at the place R(n). The row L(n) contains the block of terms a_1, a_2, ... a_B(L(n))repeated q times. This row is numbered R(n) from 1 to c_L(n)=qB(L(n)). Then generalized reluctant sequence of sequences ω(n) =a(m), where m=1+ (R(n)-1) modB(L(n)). Similarly, the generalized reverse reluctant sequence of sequences ω^'(n) =a(m^'), where m^'=1+(R^'(n)-1) modB(L(n)). Example 4.0. 
Let p, q ∈ℤ^+, the sequencesβ: b_s=p for s≥ 1.Then B(s)=ps,c_s=pqs,C(s)=pqs(s+1)2,L(n)=⌈-pq+√(8npq+p^2q^2)2pq⌉, R(n)=n-pq(L(n)-1)L(n)2, R^'(n)=pqL(n)(L(n)+1)2 +1 - n . We get for generalized reluctant sequence ω and reverse reluctant sequence ω^':m=1+ (R(n)-1) modpL(n),m^'=1+ (R^'(n)-1) modpL(n).Let the sequence α = ξ,β: b_s=2 fors≥ 1and q=3. Then generalizedreluctant sequenceω: 1,2, 1,2, 1,2,1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4,5,6,1,2,3,4,5,6, 1,2,3,4,5,6 . .. Generalizedreverse reluctant sequenceω^': 2,1 2,1 2,1,4,3,2,1,4,3,2,1 4,3,2,1,6,5,4,3,2,16,5,4,3,2,1, 6,5,4,3,2,1, . ..Example 4.1. Let p_1, q ∈ℤ^+, the sequencesβ: b_s=p_1sfor s≥ 1and q ≥ 1.Then B(s)=p_1s(s+1)2,c_s=p_1qs(s+1)2,C(s)=p_1qs(s+1)(s+2)6, Using Cardano’s formula [5] we getL(n)=⌈-1 + p_1q√(3)U+U√(3^2)p_1q⌉, where U= (27np_1^2q^2 + √(3)√(243n^2p_1^4q^4 - p_1^6q^6))^1/3. R(n)=n-p_1q(L(n)-1)L(n)(L(n)+1)6, R^'(n)=p_1qL(n)(L(n)+1)(L(n)+2)6+1-n We get for generalized reluctant sequence ω and reverse reluctant sequence ω^':m=1+ (R(n)-1) modp_1L(n)(L(n)+1)2,m^'=1+ (R^'(n)-1) modp_1L(n)(L(n)+1)2.If the sequence α = ξ,β: b_s=2s fors≥ 1and q=3. Then generalizedreluctant sequenceω: 1,2, 1,2,1,2,1,2,3,4,5,6,1,2,3,4,5,6, 1,2,3,4,5,61,2,3,4,5,6,7,8,9,10,11,12, 1,2,3,4,5,6,7,8,9,10,11,12, 1,2,3,4,5,6,7,8,9,10,11,12 . .. Generalizedreverse reluctant sequenceω^': 2,1,2,1, 2,1,6,5,4,3,2,1,6,5,4,3,2,1,6,5,4,3,2,1,12,11,10,9,8,7,6,5,4,3,2,1,12,11,10,9,8,7,6,5,4,3,2,1,12,11,10,9,8,7,6,5,4,3,2,1 . ..Example 4.2. Let p, q ∈ℤ^+,p ≥2,q ≥ 1, the sequencesβ: b_1=p,b_s=p^s-p^s-1 for s≥ 2. ThenB(s)=p^s,c_s=qp^s,C(s)=pq(p^s-1)p-1,L(n)=⌈log_p (n(p-1)pq+1) ⌉,R(n)=n-pqp^L(n)-1-1p-1, R^'(n)=pqp^L(n)-1p-1+1 - n . We get for generalized reluctant sequence ω and reverse reluctant sequence ω^':m=1+ (R(n)-1) modp^L(n),m^'=1+ (R^'(n)-1) modp^L(n) .Let the sequence α = ξ,β: b_1=2,b_s=2^s-2^s-1 fors≥ 2and q=3. Then generalizedreluctant sequenceω: 1,2,1,2,1,2,1,2,3,4,1,2,3,4, 1,2,3,4,1,2,3,4,5,6,7,8,1,2,3,4,5,6,7,8,1,2,3,4,5,6,7,8, . .. Generalizedreverse reluctant sequenceω^': 2,1, 2,1,2,1,4,3,2,1,4,3,2,1,4,3,2,1,8,7,6,5,4,3,2,1,8,7,6,5,4,3,2,1,8,7,6,5,4,3,2,1, . .. 6 OEIS Online Encyclopedia of Integer Sequences (OEIS). http://oeis.org/ Putievskiy B.Putievskiy, Transformations [of] Integer Sequences And Pairing Functions, arXiv:1212.2732v1 [math.CO] Deza E. Deza and M. M. Deza, Figurate numbers. –World Scientific Publishing, 2012 Nyblom M. A. Nyblom, Some curious sequences involving floor and ceiling functions, Am. Math. Monthly 109 (2002) 559-564. Rotman Joseph J. Rotman, An introduction to the theory of groups, 4th ed., Graduate Texts in Mathematics, vol. 148, Springer-Verlag, New York, 1995 Planetmath Planetmath https://planetmath.org/QuarticFormula Khovanova T. Khovanova, How to Create a New Integer Sequence, arXiv:0712.2244 [math.CO]Davey B. A. Davey and A. Priestley, Introduction to Lattices and Order 2nd ed., University of Oxford, 2002E-mail:[email protected] | http://arxiv.org/abs/2310.18466v1 | {
"authors": [
"Boris Putievskiy"
],
"categories": [
"math.CO"
],
"primary_category": "math.CO",
"published": "20231027202145",
"title": "Integer Sequences: Irregular Arrays and Intra-Block Permutations"
} |
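The closed-form block index of Example 2.1 above can be checked numerically. The sketch below is illustrative only: it implements L(n) and R(n) for a linear partitioning sequence b_s = p_1 s + p_0 (with p_1 = 5, p_0 = 2 as stated in the example) and verifies them against a brute-force scan of the partial sums B(s) = p_1 s(s+1)/2 + p_0 s.

```python
# Check the closed-form L(n) (ceiling of the largest root of B(x) = n) and the
# within-block position R(n) = n - B(L(n)-1) against a brute-force scan.
import math

p1, p0 = 5, 2
print([p1 * s + p0 for s in range(1, 5)])       # block lengths b_1, b_2, ... = 7, 12, 17, 22

def B(s):
    return p1 * s * (s + 1) // 2 + p0 * s       # partial sums of the block lengths

def L_closed(n):                                # ceiling of the largest root of B(x) = n
    return math.ceil((-2 * p0 - p1 + math.sqrt(8 * n * p1 + (2 * p0 + p1) ** 2)) / (2 * p1))

def L_brute(n):                                 # smallest s with B(s) >= n
    s = 1
    while B(s) < n:
        s += 1
    return s

for n in range(1, 500):
    L = L_closed(n)
    assert L == L_brute(n)
    R = n - B(L - 1)                            # position in block L, from the left
    R_rev = B(L) + 1 - n                        # position from the right
    assert 1 <= R <= p1 * L + p0 and R + R_rev == p1 * L + p0 + 1
print("closed form agrees with brute force for n = 1..499")
```

The same recipe applies to any increasing partial-sum function B: only the closed-form inversion step changes (e.g. Cardano's formula for the quadratic block lengths of Example 2.2).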
A near-UV reconnaissance of metal-poor massive starsChris Evans Wagner MarcolinoJean-Claude Bouret Miriam GarciaAccepted: 8 August 2023 =============================================================================Inspired by the dual-process theory of human cognition, we introduce DUMA, a novel conversational agent framework that embodies a dual-mind mechanism through the utilization of two generative Large Language Models (LLMs) dedicated to fast and slow thinking respectively. The fast thinking model serves as the primary interface for external interactions and initial response generation, evaluating the necessity for engaging the slow thinking model based on the complexity of the complete response. When invoked, the slow thinking model takes over the conversation, engaging in meticulous planning, reasoning, and tool utilization to provide a well-analyzed response. This dual-mind configuration allows for a seamless transition between intuitive responses and deliberate problem-solving processes based on the situation. We have constructed a conversational agent to handle online inquiries in the real estate industry. The experåiment proves that our method balances effectiveness and efficiency, and has a significant improvement compared to the baseline.§ INTRODUCTIONIn the era of rapid progress of LLMs<cit.>, creating conversational agents<cit.> that can emulate human-like interactions is both a challenge and an aspiration. Drawing inspiration from the dual-process theory<cit.> of human cognition, which proposes two distinct cognitive processes—a fast, intuitive one and a slower, analytical one. we introduce DUMA, a novel conversational agent framework<cit.>. While there have been efforts like SwiftSage<cit.> that delve into integrating fast and slow thinking processes in AI agents, DUMA stands out by prioritizing conversational scenarios.DUMA, symbolizing a Dual-Mind Conversational Agent, merges two generative Large Language Models (LLMs) to accommodate distinct cognitive processes: fast and slow thinking. The Fast Mind module of DUMA stands out for its agility and efficiency, readily addressing straightforward scenarios. However, its nimbleness might falter in more intricate situations. Conversely, the Slow Mind takes its time, operating at a deliberate pace which might seem less efficient. Yet, this deliberation equips it to grapple with complex challenges, particularly those demanding in-depth reasoning or the invocation of external tools. Together, these two minds allow DUMA to deliver a balanced conversational experience, oscillating smoothly between immediate replies and profound problem-solving.DUMA's dual-mind structure is reminiscent of human cognitive processes, differentiating between the Fast Mind and the Slow Mind. For routine queries, the Fast Mind takes the lead with immediate responses. However, when complex queries arise that demand deeper analysis, like mathematical or logical challenges, it calls upon the Slow Mind. Unlike its counterpart, the Slow Mind doesn't interact directly with users. Instead, it delves into the problem using a methodology inspired by ReAct<cit.>, often calling upon external tools for assistance. Once its comprehensive analysis is done, the insights are relayed back to the Fast Mind, which crafts the response. 
Importantly, these insights are archived in the Fast Mind's "Memory Area", ensuring efficiency in future related dialogues, epitomizing DUMA's blend of agility and depth.We've conducted experiments in Chinese real estate online communication scenarios, confirming the efficacy of our approach. Although our experiments were specific to the Chinese context and real estate domain, we believe that the methodology behind DUMA possesses a broader applicability.Our main contributions in this paper are two folds:* Introducing DUMA, a novel conversational agent framework built upon the dual-process theory, integrating two LLMs for fast and slow cognitive processes. * Demonstrating the application and efficacy of DUMA in real-world scenarios, specifically in the real estate industry, where it showcases significant improvements over baseline models.§ METHODOLOGY In this section, we will first introduce the overall structure of DUMA, then set forth the design and operation process of Fast Mind and Slow Mind respectively, and finally elaborate the internal and external interaction of DUMA. §.§ DUMA Overall Structure The thinking center of DUMA contains two minds, Fast Mind (defined as Mind_Fast) and Slow Mind (defined as Mind_Slow), as shown in the Figure <ref>.As discussed above, simple questions in general conversations do not require in-depth thinking. Mind_Fast is responsible for quickly thinking, replying to those simple questions, and directly interacting with human. For those difficult questions, such as mathematics, complex reasoning, etc., after Mind_Fast perceives the complexity of the question, it will send a signal to Mind_Slow, thereby triggering DUMA's deep thinking mechanism.Mind_Slow will not directly interact with human. After deep thinking, Mind_Slow will transmit the results and data obtained after calling the tool back to Mind_Fast. The current turn of dialogue will be generated by Mind_Fast based on the inference results of Mind_Slow. Finally, DUMA will return the result through Mind_Fast. §.§ Fast Mind For each turn of dialogue, Mind_Fast may have two input sources, human utterance or Mind_Slow thinking results. Mind_Fast infers each turn of responses based on historical dialogue and current input.Formally, for time step t, assuming the human utterance is Q_t, the Mind_Slow thinking result is S_t, the input will be processed into the following format: I^Fast_t =User[Q_t], HumanInput SlowMind[S_t], Mind_Slow Input According to the previous dialogue, Mind_Fast quickly thinking and generating answer. The results generated by Mind_Fast can indicate whether Mind_Slow needs to be triggered and what needs to be replied. Mind_Fast's response in round t is defined as O_t: O^Fast_t = Invoke[V], V ∈{True, False}Response[Mind_Fast output_t] When the value in "Invoke" is True, Mind_Slow will be awakened. At this time, DUMA will think deeply and thoroughly. The content in "Response" is the reply to the t-th turn of conversation. Analogous to human, Mind_Fast conducts multi-turn of dialogue with individuals. For the current turn of response, Mind_Fast needs to "consider" the context of past dialogue, therefore, we need to process historical conversation into multi-turn format that the model can understand: Context_t = M_bI_0M_eO_0_conv_0M_bI_1M_eI_1_conv_1…M_bI_t-1M_eO_t-1_conv_t-1M_bI_tM_eM_b and M_e are the pattern of the LLM's multi-turn dialogue1 1The pattern of multi-turn of dialogue may be different for different open source LLMs. . 
Mind_Fast performs quick thinking based on Context_t, and generates the result O_t: O^Fast_t = Mind_Fast(Context_t)§.§ Slow MindWhen Mind_Slow is awakened, just like human, Mind_Slow will first retrospect the past dialogue. Assume that the Mind_Fast response in round t is A_t, and the past dialogue will be structured by Mind_Fast in the following format: Dialogue^his_t = {Query0: Q_0Answer0: A_0 ⋯ Queryt-1: Q_t-1 Answert-1: A_t-1 Queryt: Q_t.Upon encapsulation of Dialogue^his_t by Mind_Fast, the content is subsequently conveyed to Mind_Slow to denote the historical dialogue exchanged between Mind_Fast and individuals. Subsequent to this, Mind_Slow engages in reasoning, action, and observation: O^Slow_t = Mind_Slow(Dialogue^his_t) Inspired by ReAct<cit.>, we divide the Mind_Slow thinking process into four parts (Reason, Act, Obs, Finish), as shown in the Figure <ref>.When encountering complex problems, Mind_Slow may reason in multiple steps. For example, when external tool support is needed, Mind_Slow calls the tool through "Act" and observes the results of the tool through "Obs". Then the current review of the t-th turn of dialogue, all past reasoning, action and observation will be encapsulated into a new chain of thought and input into Mind_Slow which will judge whether the next step of in-depth thinking is needed. If necessary, the process stated above will be looped, otherwise will be finished.The value in "Obs" or "Finish" is the final reasoning result of Mind_Slow, which is encapsulated into the Formula <ref> (Mind_Slow Input) and then input into Mind_Fast.§.§ DUMA: Fast Mind + Slow Mind Mind_Fast interacts with the real world to obtain the latest question and interacts with the Mind_Slow (if necessary) to obtain in-depth thinking results. "Dialogue Memory" shown in Figure <ref> consists of two parts: the interaction between Mind_Fast and the real world, and the results returned by Mind_Slow (including observation, inference results, etc.). For the same situation that needs to trigger Mind_Slow, DUMA only needs to think deeply once, and the results are saved in the "Dialogue Memory" of Mind_Fast in the form of dialogue. When asked questions related to memory, Mind_Fast quickly generates and responds based on the interactive memory with the real world and Mind_Slow, thereby improving the conversation efficiency of the entire agent.As shown in the Figure <ref>, when DUMA receives a new question, the system prompt, historical data and current questions will be spliced and input into Mind_Fast for generating. When Mind_Slow's in-depth thinking is triggered, system prompt and dialogue reviews (described in Section <ref>) are spliced and input into Mind_Slow for reasoning, actions, and observations. This process may be looped, and Mind_Slow decides when to end on its own. After Mind_Slow's thinking is completed, the information in "Obs" is returned to Mind_Fast. Mind_Fast makes inference responses based on the results of Mind_Slow, and finally returns O_f and O_b to the real world. The specific application decides whether O_s needs to be exposed. § EXPERIMENTS We can employ LLMs to construct an agent grounded on the DUMA architecture without training. However, given the complexity of online real estate conversation scenarios, Fast Mind employs Baichuan-13B-Chat<cit.> as its foundational model for training. 
Simultaneously, considering that Slow Mind might necessitate multi-step reasoning, ChatGLM2-6B<cit.> is utilized as the base model for training to enhance the agent's performance and expedite the inference efficiency.§.§ Data Collection§.§.§ Fast MindUpon training with online real estate dialogue datasets, the model assimilates the conversational style and tone characteristic of actual individuals. However, we concurrently identify a concerning issue related to hallucination, such as consistency discrepancies, notably when the Fast Mind assimilates inputs from the Slow Mind, leading to the generation of erroneous content, as well as fabricating facts problems. To enhance the model's factuality, we implemented a two-phase SFT process: Dialogue Training followed by Factuality Enhancement.SFT Stage I - Dialogue Training We obtain original 13M dialogue data from online dialogue logs and established a data processing pipeline, which mainly includes data desensitization, rule based filtering, logic optimization, intent balancing, etc. Finally, we collect 66349 multi-turn dialogues. Before training, the data will be processed into the model's multi-turn dialogue format. The first stage SFT data uses complete dialogue.SFT Stage II - Factuality Enhancement To enhance the factuality of the model, we resampled 4M dialogues from online logs. After pipeline data processing (in order to ensure that the intent distribution is consistent with online, intent balancing is not performed), we acquire 65009 multi-turn dialogues. Based on the online probability distribution of intent and the likelihood of a given intent appearing at various positions within a dialogue, we randomly sampled 3,000 dialogues for factual calibration annotation. During annotation, only the response to the current turn of question required factual calibration. Given that the annotation might alter the logical coherence of subsequent dialogue, to ensure logical coherence, any content following the annotated dialogue is removed. The differences in data construction methods between Stage I and Stage II are illustrated in the Figure <ref>. §.§.§ Slow Mind We sample 0.5M raw dialogues from online logs. To facilitate better collaboration between Fast Mind and Slow Mind, the data underwent processing through a pipeline identical to Section <ref> (excluding intent balancing), yielding 8,000 samples. To reduce the labeling cost, we label the data (has been desensitized) by GPT-4 pre-labeling and then manual correction. The method for data construction is elaborated in Section <ref>. §.§ Experiments Setups When training Fast Mind, the questions in the multi-turn dialogue of the first stage of SFT are not involved in the loss calculation. In the second stage of SFT, only the last turn of responses are involved in the loss calculation, and the gradients of the remaining questions and answers will be masked. To make the model better retain the dialogue logical coherence, during the second stage of training, we randomly sample 300 multi-turn dialogues from the first stage SFT training data (data ratio is 1:10) to conduct mixed training. The two-stage gradient calculation method is shown in the Figure <ref>. We use the ChatGPT3.5 interface and adopt the ReAct<cit.> method as the baseline (denote as ChatGPT_react). When performing the first stage SFT of Fast Mind, we use Baichuan-13B-Chat as the basic model with learning rate of 1e-4, the second stage SFT uses the first stage checkpoint, and Slow Mind training with ChatGLM2-6B. 
The learning rate of the second stage SFT of Fast Mind and Slow Mind training are both 1e-5. Throughout all training procedures, the maximum length is 4096, training for 4 epochs, with a batch size of 32. We use a cosine LR schedule down to 10% of the original learning rate, with 3% warmup. All the models are trained with BFloat16 mixed precision for training stability.§.§ Metrics To effectively compare the capabilities of different system architectures, we conduct a manual assessment based on two primary dimensions: Knowledge and Reasoning.Knowledge competency is assessed in three areas: House Expertise, Tool Calling Ability, and Industry Familiarity. House Expertise gauges the agent's ability to respond to human utterance about housing, Tool Calling Ability evaluates the capability of the agent's use of tools, and Industry Familiarity measures the agent's general knowledge in the real estate domain.Reasoning ability encompasses three evaluation metrics: Service Attitude, Demand Mining, and Promote invitation. Service Attitude evaluates whether the agent interacts pleasantly and responds in a human-like manner. Demand Mining assesses the agent's efficacy in uncovering and understanding the latent needs of the user during interactions. Promote Invitation gauges the agent's aptitude in seeking contact details from humans and inviting them for offline meetings at appropriate times.Referring to the intent distribution of online dialogue logs, through expert dialogues with ChatGPT_react, DUMA, and DUMA_StageI, each produced 80 groups of dialogues, named test^chatgpt, test^duma, test^dumaI. During the dialogue process, without affecting the dialogue logic, we reduce the deviation of the evaluation results by trying to ensure that the i-th group of test dialogue questions are the same: Q(test^chatgpt) ≈ Q(test^duma)Q(test^chatgpt) ≈ Q(test^dumaI) Each evaluation metric will be scored as 0, 1, or 2 points. The specific scoring criteria are detailed in the Table <ref>.§.§ Results and Analysis The experimental results are shown in Figure <ref>, with detailed scores presented in Table <ref>. While ChatGPT produces relatively average results, it underperformes in every evaluation metric compared to both DUMA_StageI and DUMA, demonstrating the effectiveness of the DUMA framework.Through a two-stage SFT, DUMA displayes a noticeable improvement in House Expertise over DUMA_StageI, gaining 0.724 points. Since in the second stage of SFT annotation, we corrected the erroneous appointment timings and service attitudes in the data, Promote Initiation and Service Attitude improved slightly, increasing by 0.257 points and 0.239 points respectively. These findings validate the effectiveness of the two-stage SFT.Owing to the incorporation of 10% of first stage dialogue data during the second stage SFT by Mind_Fast, DUMA's logical coherence remained relatively stable, with only minor decreases of 0.064 points in Tool Calling Ability and 0.050 points in Demand Mining. This demonstrates the viability of the mixed training approach in the second stage. § RELATED WORK The development and potential of AI agents have been topics of significant interest in the AI community. An AI agent is defined as an artificial entity that senses its environment, makes decisions, and takes actions.<cit.>.The emergence of Large Language Models (LLMs) is recognized as a potential catalyst for achieving Artificial General Intelligence (AGI) <cit.>. 
Recently, many works have proposed comprehensive LLM-based agent architectures<cit.>.The key to dialogue agents being able to handle complex dialogue scenarios and apply knowledge lies in planning and tool utilization. * Planning: LLMs exhibit Chain-of-Thought (CoT) reasoning, eliciting rationales through CoT prompts<cit.>. Yet, applying this reasoning in dialogues continues to be a challenge. ReAct<cit.> defines a behavioral pattern of thinking and acting, allowing LLM to reason before each action planning. * Tool Use: LLMs, as demonstrated by <cit.>, are adept at leveraging external resources, such as tools and APIs. The ability to extract knowledge from external sources has been showcased by works like WebGPT <cit.> and ExpeL <cit.>. SwiftSage<cit.> proposed an agent that combines fast and slow thinking, which is used in action planning for complex interactive reasoning tasks. Our work also draws on the dual-process theory of human cognition, but we focus on building an Agent in a conversation scenario. § CONCLUSIONS AND FUTURE WORK In this study, we introduced the DUMA framework, which intertwines the principles of fast and slow thinking within conversational scenarios. Our initial results, based on a specific Chinese real estate context, are promising. However, it's essential to approach these findings with caution until further validations in broader settings are conducted.Our future efforts aim to test DUMA in more universal English-centric settings. Additionally, we recognize the need for a comparative study between standalone Slow Mind and Fast Mind versus their combined use. Future experiments will address these aspects, ensuring a clearer understanding and enhancing the framework’s versatility. acl | http://arxiv.org/abs/2310.18075v4 | {
"authors": [
"Xiaoyu Tian",
"Liangyu Chen",
"Na Liu",
"Yaxuan Liu",
"Wei Zou",
"Kaijiang Chen",
"Ming Cui"
],
"categories": [
"cs.CL",
"cs.AI"
],
"primary_category": "cs.CL",
"published": "20231027114346",
"title": "DUMA: a Dual-Mind Conversational Agent with Fast and Slow Thinking"
} |
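A structural sketch of a single DUMA turn is given below. It is not the authors' implementation: fast_generate, slow_generate and call_tool are hypothetical stand-ins for the fine-tuned fast/slow LLMs and the external tools, and the canned strings only mimic the Invoke[.] / Response[.] and Reason / Act / Obs fields described above. What the sketch does reflect is the control flow: a fast reply, an optional slow ReAct-style step when Invoke is True, and the observation written back into the dialogue memory before the final response.

```python
import re

def fast_generate(memory):                        # placeholder for the fast LLM
    if memory[-1].startswith("SlowMind["):
        return "Invoke[False] Response[Based on the tool result just obtained, ...]"
    if "monthly payment" in memory[-1]:
        return "Invoke[True] Response[Let me check that for you.]"
    return "Invoke[False] Response[Sure, happy to help.]"

def slow_generate(memory):                        # placeholder for one slow ReAct step
    return "Reason[need the mortgage tool] Act[mortgage_calculator(principal=1000000, years=30)]"

def call_tool(act):                               # placeholder for an external tool call
    return "Obs[tool output for " + act + "]"

def field(name, text):
    m = re.search(name + r"\[(.*?)\]", text)
    return m.group(1) if m else None

def duma_turn(memory, user_utterance):
    memory.append("User[" + user_utterance + "]")
    out = fast_generate(memory)
    if field("Invoke", out) == "True":            # fast mind wakes the slow mind
        step = slow_generate(memory)              # Reason + Act (looped until Finish in general)
        obs = call_tool(field("Act", step))
        memory.append("SlowMind[" + field("Obs", obs) + "]")
        out = fast_generate(memory)               # final reply conditioned on the observation
        # (the interim fast reply could also be surfaced, cf. the two outputs O_f and O_b)
    memory.append(out)
    return field("Response", out)

memory = []
print(duma_turn(memory, "What is the monthly payment on a 1,000,000 CNY loan over 30 years?"))
print(duma_turn(memory, "Thanks!"))
```

Because the SlowMind observation is appended to the same dialogue memory, a later question about the same quantity can be answered by the fast model alone, which is the memory-reuse behaviour described in the paper.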
Transductive conformal inference with adaptive scores Ulysse Gazin[Université Paris Cité and Sorbonne Université, CNRS, Laboratoire de Probabilités, Statistique et Modélisation. Email: [email protected]]Gilles Blanchard[Université Paris Saclay, Institut Mathématique d'Orsay. Email: [email protected]]Etienne Roquain[Sorbonne Université and Université Paris Cité, CNRS, Laboratoire de Probabilités, Statistique et Modélisation. Email: [email protected]] January 14, 2024 =======================================================================================================================================================================================================================================================================================================================================================================================================================================================Conformal inference is a fundamental and versatile tool that provides distribution-free guarantees for many machine learning tasks. We consider the transductive setting, where decisions are made on a test sample of m new points, giving rise tom conformal p-values. While classical results only concern their marginal distribution, we show that their joint distributionfollows a Pólya urn model,and establish a concentration inequality for their empirical distribution function. The results hold for arbitrary exchangeable scores, including adaptive ones that can use the covariates of the test+calibration samples at training stage for increased accuracy. We demonstrate the usefulness of thesetheoretical results through uniform, in-probability guarantees for two machine learning tasks of current interest: interval prediction for transductive transfer learning and novelty detection based on two-class classification.Keywords: Conformal inference, Multiple testing, False Discovery Rate,Uniform error control § INTRODUCTION Conformal inference is a general framework aiming at providing sharp uncertainty quantification guarantees for the output of machine learning algorithms used as “black boxes”. A central tool of that field is the construction of a “(non)-conformity score” S_i for each sample point. The score functions can be learnt on a training set using various machine learning methods depending on the task at hand. The scores observedon a data sample called “calibration sample”serve as references for the scores of a “test sample”(which may or may not be observed, depending on the setting). The central property of these scores is that they are an exchangeable family of random variables. §.§ Motivating tasks To be more concrete, we start with two specific settings serving both as motivation and as application. (PI) Prediction intervals: we observe 𝒟_=(X_1,Y_1), …, (X_n,Y_n) a sample of i.i.d. variables with unknown distribution P, where X_i∈^d is a regression covariate and Y_i∈ is the outcome. Given a new independent datum (X_n+1,Y_n+1) generated from P, the task is to build aprediction interval for Y_n+1 given X_n+1 and 𝒟_. More generally, in the transductive conformal setting <cit.>, the task is repeated m≥ 1 times:given m new data points 𝒟_=(X_n+1,Y_n+1), …, (X_n+m,Y_n+m) i.i.d from P, build mprediction intervals for Y_n+1,…,Y_n+m given X_n+1,…,X_n+m and 𝒟_. (ND) Novelty detection: we observe𝒟_=(X_1,…,X_n), a sample of nominal data points in ^d, drawn i.i.d. 
from an unknown (null) distribution P_0, and atest sample 𝒟_=(X_n+1,…,X_n+m) of independent points in ^d, each of which is distributed as P_0 or not.The task is todecide if each X_n+i is distributed as the training sample (i.e., from P_0) or is a “novelty”. For both inference tasks, the usual pipeline is based on the construction of non-conformity real-valued scores S_1,…,S_n+m for each member of 𝒟_∪𝒟_, which requires an additional independent training sample 𝒟_ (in the so-called “split conformal” approach):(PI) the scores are (for instance) the regression residuals S_i=|Y_i-μ(X_i;𝒟_)|, 1≤ i≤ n+m, where the function μ(x;𝒟_) is a point prediction of Y_i given X_i=x, learnt from the sample 𝒟_.(ND) the scores are of the form S_i=g(X_i;𝒟_), 1≤ i≤ n+m, where the score function g(·;𝒟_) is learnt using the sample 𝒟_; g(x) is meant to be large if x is fairly different from the members of 𝒟_ (so that it is “not likely” to have been generated from P_0).In both cases,inference is based on the so-called split conformal p-values <cit.>:p_i=(n+1)^-1[3]1+∑_j=1^n S_j≥ S_n+i, i∈m. In other words, (n+1)p_i is equal to the rank of S_n+i in the set of values {S_1,…, S_n,S_n+i}, and a small p-value p_i indicates that the test score S_n+i is abnormally high within the set of reference scores. The linkto the two abovetasks is as follows: for (PI), the prediction interval 𝒞(α) for Y_n+i with coverage probability (1-α) is obtained by inverting the inequality p_i>α w.r.t. Y_n+i, see (<ref>) below. For (ND), the members of the test sample declared as novelties are those with a p-value p_i≤ t for somethreshold t.Studying the behavior of the conformal p-value family is thus a cornerstone ofconformal inference. Still, classical results only concern the marginal distribution of the p-values while the joint distribution remains largely unexplored in full generality. §.§ Contributions and overview of the paper In Section <ref>, we present new results for the joint distribution of the conformal p-values(<ref>) for general exchangeable scores (for any sample sizes n and m). First, in Section <ref>, we show that the dependence structure involved only depends on n and m, andfollows a Pólya urn model; this entails both explicit formula and useful characterizations. Second, we deduce a new finite sample DKW-type concentration inequality <cit.> for the empirical distribution function (ecdf) of the conformal p-values. We emphasize the following favorable features of our results for application of the conformal methodology: (i) The weak assumption of exchangeable (rather than i.i.d.) scores allows to handle adaptive score training:the score functions can depend on the training sample, and on the calibration+test sample (in a way that maintains exchangeability).(ii) Simultaneous and uniform inference: since m decisions are taken simultaneously (transductive setting), the joint error distribution should be taken into account for global risk assessment. We provide error bounds with high probability anduniform validity over a family of possible decisions (allowing for user or data-driven choice).These findings are then applied in detail to (PI)in Section <ref> and (ND) in Section <ref>. For (ND), we consider adaptive scores proposed by <cit.> leveraging two-class classification. For (PI), we consider a setting of domain shift between training and calibration+test, and introduce a novel approach (to our knowledge)of transductive transfer for PI, leveraging transfer learning algorithms. 
In both cases, use of adaptive scores significantly improves inference quality (see Figure <ref> for our approach to transductive transfer PI). We give sharp bounds in probability for thefalse coverage proportion (FCP) (for PI) and the false discovery proportion (FDP) (for ND), with a possibly data-driven choice of the prediction intervals for (PI) and of the size of rejection threshold for (ND). This is in contrastto previous results only providing in-expectation guarantees of FCP/FDP.Our work hence brings more fine-grained reliability, which can be crucial when the practitioner faces sensible data.§.§ Relation to previous workFor fundamentals on conformal prediction, see <cit.>. We only consider the split conformal approach, also named inductive conformal approach in the seminal work of <cit.>. The split conformal approach uses a separate training set butis considered the most practically amenable approach for big data (in contrast to the “full conformal” approach which can be sharper but computationally intractable).The most important consequence of score exchangeability is that the marginal distribution of a conformal p-value is a discrete uniform under the joint (calibration and test) data distribution. There has beensignificant recent interest for the conditional distribution of a marginal p-value, conditional to the calibration sample, under the stronger assumption of i.i.d. scores. The corresponding results take the form of bounds on (p_1≤ t| ) holding with high probability over(,where in the two latter references the results are in addition uniformly valid in t). However, the i.i.d. scores assumption prevents handling adaptive scores (point (i) above), for which only exchangeability is guaranteed; moreover, these works are restricted to a single predictor, and do not address point (ii) either.Simultaneous inference for the (PI) task has beenproposed by <cit.> (see alsofor an earlier occurrence for one p-value with multiple new examples), referred to as transductive conformal inference, and which includes a Bonferroni-type correction.Closest to our work, <cit.>analyzes the false coverage proportion (FCP) of the usual prediction interval family 𝒞(α) repeated over m test points: the exact distribution of the FCP under data exchangeability is provided, and related to a Pòlya urn model with two colors. We show the more general result that the full joint distribution of (p_1,…,p_m) follows a Pòlya urn model with (n+1) colors, which entails the result of <cit.> as a corollary (see Appendix <ref>). This brings substantial innovations: our bounds on FCP are uniform in α, and we provide both the exact joint distribution and an explicit non-asymptotic approximation via a DKW-type concentration bound. The (ND) setting is alternatively referred to as Conformal Anomaly Detection (see Chapter 4 of ). We specifically consider here the (transductive) setting of <cit.> where the test sample contains novelties, and the corresponding p-values for `novelty' entries are not discrete uniformbut expected to be stochastically smaller. Due to strong connections to multiple testing,ideas and procedures stemming from that area can be adapted to address (ND), specifically by controlling the false discovery rate (FDR, the expectation of the FDP), such as as the Benjamini-Hochberg (BH) procedure<cit.>.Use of adaptive scores and corresponding FDR control has been investigated by <cit.>. 
Our contribution with respect to that work comes from getting uniform and in-probability bounds for the FDP(rather than only in expectation, for the FDR). § MAIN RESULTS§.§ Setting We denote integer ranges using i={1,…,i}, i,j={i,…,j}. Let (S_i)_i ∈n+m be real random variables corresponding to non-conformity scores, for which (S_j)_j ∈n are the “reference” scores and (S_n+i)_i ∈m are the “test” scores.We assumeExchUnder (<ref>), the p-values (<ref>) have super-uniform marginals (see, e.g., ). In addition, the marginal distributions are all equal and uniformly distributed on {ℓ/(n+1),ℓ∈n+1} under the additional mild assumption:NoTies While the marginal distribution is well identified, the joint distribution of the p-values is not well studied yet. In particular, we will be interested in theempirical distribution function of the p-value family, defined asF_m(t):=m^-1∑_i=1^m p_i≤ t, t∈ [0,1].Note that the p-values are not i.i.d. under (<ref>), so that most classical concentration inequalities, such as DKW's inequality <cit.>, or Bernstein's inequality, cannot be directly used. Instead, we should take into account the specific dependence structure underlying these p-values.§.§ Key propertiesWe start witha straightforward result, under the stronger assumptionIIDFor this, introduce, for any fixed vector U=(U_1,…,U_n)∈ [0,1]^n, the discrete distribution P^U on the set [1]ℓ/n+1, ℓ∈n+1,defined as P^U({ł/(n+1)})=U_(ł)-U_(ł-1),ł∈n+1,where 0=U_(0)≤ U_(1)≤…≤ U_(n)≤ U_(n+1)=1are the increasingly ordered values of U=(U_1,…,U_n). In words, the n values of U divide the interval [0,1] into (n+1) distinct cells (labeled ℓ/n+1, ℓ∈n+1), and P^U is the probability distribution of the label of the cell a Unif[0,1] variable would fall into.Note that P^U has for c.d.f. F^U(x)=U_(⌊ (n+1)x⌋),x∈ [0,1]. Assume (<ref>) and (<ref>) and consider the p-values (p_i,i∈m) given by (<ref>). Then conditionally on𝒟_=(S_1,…,S_n), the p-valuesare i.i.d. of common distribution given by p_1| 𝒟_∼ P^U,whereU=(U_1,…,U_n)=[1]1-F(S_1),…,1-F(S_n) are pseudo-scores and F is the common c.d.f. of the scores of 𝒟_, that is, F(s)=(S_1≤ s), s∈. In addition the pseudo-score vector U is i.i.d. Unif[0,1] distributed.Proof sketch.The conditional distribution of p_i only depends on score ordering which is unambiguous due to (<ref>), and is thus invariant by monotone transformation of the scores by (1-F). Writing explicitly the cdf of p_i from the uniformly distributed transformed scores yields (<ref>). See Appendix <ref> for details.In the literature, such a result is usedto control the conditional failureprobability (p_1≤α | 𝒟_) around its expectation (which is ensured to be smaller than, and close to, α) with concentration inequalities valid under an i.i.d. assumption <cit.>. By integration over U, a direct consequence of Proposition <ref> is that, under (<ref>) and (<ref>), and unconditionally on 𝒟_, the family of conformal p-values (p_i,i∈m) has the “universal” distributionP_n,m on [0,1]^m defined as follows:P_n,m = 𝒟(q_i,i∈m) ,where(q_1,…,q_m |U)i.i.d.∼ P^U;andU=(U_1,…,U_n)i.i.d.∼Unif([0,1]).Our first result is to note that the latter holds beyond the i.i.d. assumption. Assume (<ref>) and (<ref>), then the family of p-values (p_i,i∈m) given by (<ref>) has joint distribution P_n,m, which is defined by (<ref>)-(<ref>) and is independent of the specific score distribution.Proof sketch. The joint distribution of the p-values only depends on the ranks of the (n+m) scores. 
Since the scores have exchangeable distribution and (<ref>) holds, their ranks form a random permutation of n+m. Thus,the same rank distribution (and consequently joint p-value distribution) is generated when the scores are i.i.d. Applying Proposition <ref>, the p-value distribution can be represented as (<ref>)-(<ref>). See also Appendix <ref>. The next proposition is an alternative and useful characterization of the distribution P_n,m.P_n,mis the distribution of the colors of m successive draws in a standard Pólya urn model with n+1 colors labeled {ł/n+1, ł∈n+1}.Proposition <ref> is proved in Appendix <ref>, where several explicit formulas for P_n,m are also provided. We also show that this generalizes the previous work of <cit.>.Comparing Proposition <ref> and Proposition <ref>, we see that having i.i.d. scores is more favorable because guarantees are valid conditionally on 𝒟_ (with an explicit expression for U=U()). However, as we will see in Sections <ref> and <ref>, the class of exchangeable scores is much more flexible and includes adaptive scores, which can improve substantially inference sharpness in specific situations. For this reason, we work with the unconditional distribution as in Proposition <ref> in the sequel.§.§ ConsequencesWe now provide a DKW-type envelope forthe empirical distribution function (<ref>) of conformal p-values. Let us introduce the discretized identity functionI_n(t) =⌊ (n+1)t⌋/(n+1)=F_m(t), t∈ [0,1], and the following bound:B^(λ,n,m) :=1_λ <1[1+2√(2π)λτ_n,m/(n+m)^1/2]e^-2τ_n,mλ^2,where τ_n,m:=nm/(n+m)∈ [(n∧ m)/2, n∧ m] is an“effective sample size”. Let us consider the process F_m defined by (<ref>), the discrete identity function I_n(t) defined by (<ref>), and assume (<ref>) and (<ref>). Then we havefor all λ>0, n,m≥ 1,(sup_t∈ [0,1](F_m(t) - I_n(t)) > λ) ≤ B^(λ,n,m).In addition, B^(λ^_δ,n,m,n,m)≤δ forλ^_δ,n,m=Ψ^(r)(1); Ψ(x)=1∧( log(1/δ)+log(1+ √(2π)2τ_n,mx/(n+m)^1/2)/2τ_n,m)^1/2,where Ψ^(r) denotes the function Ψ iterated r times (for an arbitrary integer r≥ 1). Proof sketch. Use the representation (<ref>), apply the DKW inequality separately to (U_1,…,U_n) and to (q_1,…,q_m) conditional to U, and integrate over U. See Appendix <ref> for details (a slightly more accurate bound is also proposed).Since the distribution P_n,m can be easily sampled from, λ^_δ,n,m in (<ref>)can be further improved by considering the sharper but implicit quantileλ^(δ,n,m)=min{x≥ 0 : π_n,m,x≤δ}, withπ_n,m,x:=P_n,m(sup_ł∈n+1(F_m(ℓ/n+1) - ℓ/n+1) > x).In addition, numerical confidence envelopes for F_m with other shapes can be investigated.For instance, for any set 𝒦⊂m of size K, we can calibrate thresholds t_1,…,t_K>0 such that _p∼ P_n,m(∀ k∈𝒦, p_(k+1)> t_k)= _p∼ P_n,m(∀ k∈𝒦, F_m(t_k)≤ k/m)≥ 1-δ.A method is to start from a “template” one-parameter family (t_k(λ))_k∈𝒦 and then adjust λ to obtain the desired control <cit.>. This approach is developed in detail in Appendix <ref>. § APPLICATION TO PREDICTION INTERVALS In this section, we apply our results to build simultaneous conformal prediction intervals, with an angle towards adaptive scores and transfer learning. 
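All guarantees derived below are calibrated under the universal distribution P_n,m, which is easy to sample. The sketch below is our own illustrative code (function names are ours): it draws Monte Carlo replicates of (p_1,…,p_m)∼ P_n,m through the representation (<ref>)-(<ref>) and estimates the implicit quantile λ̃(δ,n,m) of Remark <ref>, which can replace the explicit level whenever sharper constants are desired.

import numpy as np

def sample_Pnm(n, m, n_rep, rng):
    # Representation (<ref>)-(<ref>): U ~ Unif(0,1)^n, then q_i = (1 + #{j : U_j <= V_i}) / (n+1)
    # with V_1, ..., V_m i.i.d. Unif(0,1); each row of the output is one draw of (p_1, ..., p_m).
    U = rng.uniform(size=(n_rep, n))
    V = rng.uniform(size=(n_rep, m))
    counts = (U[:, None, :] <= V[:, :, None]).sum(axis=2)
    return (1.0 + counts) / (n + 1.0)

def implicit_quantile(n, m, delta, n_rep=5000, seed=0):
    # Monte Carlo estimate of lambda~(delta, n, m): (1 - delta)-quantile under P_{n,m} of
    # sup_l ( F_m(l/(n+1)) - l/(n+1) ), the sup being taken over l = 1, ..., n+1.
    rng = np.random.default_rng(seed)
    p = sample_Pnm(n, m, n_rep, rng)
    grid = np.arange(1, n + 2) / (n + 1.0)
    ecdf = (p[:, :, None] <= grid[None, None, :]).mean(axis=1)
    return np.quantile((ecdf - grid[None, :]).max(axis=1), 1.0 - delta)

print(implicit_quantile(n=75, m=75, delta=0.2))

Sampling could equivalently go through the Pólya urn mechanism of Proposition <ref>; the representation above is simply easier to vectorize.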
§.§ Setting Let us consider a conformal prediction framework for a regression task, see, e.g., <cit.>, with three independent samples of points (X_i,Y_i), where X_i∈^d is the covariable and Y_i∈ is the outcome: * Training sample 𝒟_: observed and used to build predictors; * Calibration sample 𝒟_={(X_i,Y_i),i∈n}; observed and used to calibrate the size(s) of the prediction intervals; * Test sample 𝒟_={(X_n+i,Y_n+i), i ∈m};only the X_i's are observed and the aim is to provide prediction intervals for the labels.In addition, we consider the followingtransfer learning setting: while the data points are i.i.d. within each sample and the distributions of 𝒟_ and 𝒟_ are the same,the distribution of 𝒟_ can be different. However, 𝒟_ can still help to build a good predictor by using a transfer learning toolbox, considered here as a black box(see, e.g.,for a survey on transfer learning). A typical situation of use is when the training labeled data 𝒟_ is abundant but there is a domain shift for the test data, and we have a limited number of labeled data 𝒟_ from the new domain.§.§ Adaptive scores and procedures Formally, the aim is to build ℐ=(ℐ_i)_i∈m, a family of m random intervals ofsuch that the amount of coverage errors (Y_n+i∉_i)_i∈m is controlled.The construction of a rule ℐ is based on non-conformity scores S_i, 1≤ i≤ n+m,corresponding to residuals between Y_i and the prediction at point X_i: S_i:=|Y_i-μ̂(X_i;(,))|,i∈n+m,where the predictor μ̂is learnt usingand the calibration + test covariates =(X_1,…,X_n+m). More sophisticated scores than the residuals have been proposed in earlier literature <cit.>, in particular allowing for conditinal variance or quantile prediction and resulting prediction intervals of varying length. Our theory extends to those as well and we consider here (<ref>) for simplicity. We call the scores (<ref>) adaptive because they can use the unlabeled data , which is particularly suitable in the transfer learning framework where the covariates of 𝒟_ should be mapped to those ofto build a good predictor.Classical scores can also be recovered via (<ref>)if the predictor ignores . The predictor μ can be any “black box" (an unspecified transfer learning algorithm) provided the following mild assumption is satisfied, ensuring score exchangeability:.PermInvSince (X_i,Y_i)_i ∈n+m are i.i.d. and thus exchangeable, one caneasily show that(<ref>) holds for the adaptive scores (<ref>) when the predictor satisfies (<ref>). Predictors based on transfer machine learning procedures typically satisfy (<ref>). In addition, (<ref>) is a mild assumption: add a negligible noise to the scores is an appropriate tie breaking that makes(<ref>) hold.Given the scores (<ref>), we build the conformal p-values via (<ref>) and define the specific conformal procedure 𝒞(α)=(𝒞_i(α))_i∈m obtained by inverting {p_i>α} with respect to Y_n+i, that is, {p_i>α}={Y_n+i∈𝒞_i(α)}almost surely with𝒞_i(α):=[ μ̂(X_n+i; (,)) ± S_(⌈ (n+1)(1-α)⌉)],where S_(1)≤…≤ S_(n)≤ S_(n+1):=+∞ denote the order statistics of the calibration scores (S_1,…,S_n).Observe that the radius of the interval S_(⌈ (n+1)(1-α)⌉) can be equivalently described as the (1-α)-quantile of the distribution ∑_i=1^n 1/n+1δ_S_i + 1/n+1δ_+∞. Note also that 𝒞(α)=^m if α<1/(n+1), that is, if the desired coverage error is too small w.r.t. the size of the calibration sample. 
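For completeness, the interval construction (<ref>) reduces to a single order-statistic computation on the calibration scores. The following sketch is our own illustrative helper (hypothetical names; the point predictions μ̂(X_n+i) are assumed to be already available from the chosen predictor).

import numpy as np

def conformal_prediction_intervals(mu_hat_test, cal_scores, alpha):
    # C_i(alpha) = [ mu_hat(X_{n+i}) - r, mu_hat(X_{n+i}) + r ], with radius
    # r = S_(ceil((n+1)(1-alpha))), and r = +infinity when alpha < 1/(n+1).
    cal_scores = np.sort(np.asarray(cal_scores, dtype=float))
    n = cal_scores.size
    k = int(np.ceil((n + 1) * (1.0 - alpha) - 1e-9))   # small tolerance against round-off
    radius = np.inf if k > n else cal_scores[k - 1]     # k-th order statistic (1-indexed)
    mu_hat_test = np.asarray(mu_hat_test, dtype=float)
    return np.column_stack([mu_hat_test - radius, mu_hat_test + radius])

Since the same radius is shared by all test points, the m intervals are obtained in one vectorized step.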
§.§ Transductive error rates By Proposition <ref>, the following marginal control holds for the conformal procedure 𝒞(α) (<ref>):(Y_n+i∉𝒞_i(α))≤α, i∈m.This is classical for non-adaptive scores and our result already brings an extension to adaptive scores in the transfer learning setting. In addition, we take into account the prediction multiplicity by considering false coverage proportion (FCP) of some procedure ℐ=(_i)_i∈m, given by(ℐ) :=m^-1∑_i=1^m Y_n+i∉_i.It is clear from (<ref>) that the procedure 𝒞(α) (<ref>) controls the false coverage rate, that is, (𝒞(α))):=[(𝒞(α))]≤α.However,the error (𝒞(α)) naturally fluctuates around its mean and the event {(𝒞(α))≤α} is not guaranteed. Hence, we aim at the following control in probability of the FCP:[(𝒞(α))≤α] ≥ 1-δ.Several scenarios can be considered: α is fixed and we want to find a suitable bound α = _α,δ for the “traditional” conformal procedure 𝒞(α); or conversely, α is fixed and we want to adjust the parameter α = t_α,δ of the procedure to ensure the probabilistic control at target level α. For α=0, this reduces to [∀ i∈m,Y_n+i∈_i]≥ 1-δ, i.e., no false coverage with high probability. By applying a union bound, the procedure 𝒞(δ/m)satisfies the latter control, as already proposed by <cit.>. However,in this case the predicted intervals can be trivial, that is,𝒞(δ/m)=^m, if the test sample is too large, namely, m> δ(n+1). Moreover, in a more general scenario the practitioner may want to adjust the parameter α=α on their own depending on the data, for example based on some personal tradeoff between the probabilistic control obtained and the length of the corresponding prediction intervals — this is the common practice of a “post-hoc” choice (made after looking at the data). This motivates us to aim at a uniform (in α) bound, that is, find a family of random variables (_α,δ)_α∈ (0,1) such that ∀α∈ (0,1),(𝒞(α))≤_α,δ ≥ 1-δ .Establishing such bounds is investigated in the next section. This gives a guarantee on the FCP in any of the above scenarios, in particular a post-hoc choice of the parameter α. As a concrete example, one may want to choose a data-dependent α to ensure prediction intervals 𝒞(α) of radius at most L, namely, α(L)=(n+1)^-1∑_i=1^n S_i≤ L.Guarantee (<ref>) yields a (1-δ)-confidence error bound_α(L),δ for this choice. §.§ Controlling the error rates To establish (<ref>) and (<ref>), we use that from (<ref>), (<ref>) and (<ref>), (𝒞(t))=F_m(t) and thus for all t∈ [0,1],{(𝒞(t))≤α} =[1]F_m(t) ≤α=[1]mF_m(t) ≤⌊α m⌋=[1]p_(⌊α m⌋+1)>t,where p_(1)≤…≤ p_(m) denote the ordered conformal p-values. We deduce the following result.Let n,m≥ 1. Consider the setting of Section <ref>, the conformal procedure 𝒞(α) given by (<ref>) and P_n,m given by (<ref>). Then the following holds: (i) for any α∈ [0,1], δ∈ (0,1),𝒞(α=t_α,δ) satisfies (<ref>) provided that t_α,δ is chosen s.t._p∼ P_n,m( p_(⌊α m⌋+1)≤ t_α,δ)≤δ. (ii) for any δ∈ (0,1),[1]_α,δ_α∈ (0,1) satisfies(<ref>)provided that _p∼ P_n,m[1]∃α∈ (0,1) : F_m(α) > _α,δ≤δ. Applying Corollary <ref> (i), for conformal prediction with guaranteed FCP, we obtain an adjusted level parameterwhich can be computed numerically (an explicit formula can also be given for α=0, see Appendix <ref>). Applying Corollary <ref> (ii), and thanks to (<ref>), the following family bound (_α,δ)_α∈ (0,1)is valid for (<ref>)^_α,δ = [1]α + λ^_δ,n,mα≥ 1/(n+1),with λ^_δ,n,m>0 given by (<ref>). Obviously, numerical bounds can also be developed according to Remark <ref>. 
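Both ingredients of the bound (<ref>) are elementary to evaluate: the level λ^_δ,n,m is obtained by iterating the map Ψ of Theorem <ref> from x=1 (the iterates are non-increasing, so a handful of iterations suffices), and the indicator simply encodes α≥ 1/(n+1). The sketch below is our own illustrative code.

import numpy as np

def lambda_dkw(n, m, delta, n_iter=5):
    # lambda^DKW_{delta,n,m}: iterate Psi(x) = min(1, sqrt( (log(1/delta)
    #   + log(1 + 2 sqrt(2 pi) tau x / sqrt(n+m))) / (2 tau) )) from x = 1, with tau = n m / (n+m).
    tau = n * m / (n + m)
    x = 1.0
    for _ in range(n_iter):
        x = min(1.0, np.sqrt((np.log(1.0 / delta)
                              + np.log(1.0 + 2.0 * np.sqrt(2.0 * np.pi) * tau * x / np.sqrt(n + m)))
                             / (2.0 * tau)))
    return x

def fcp_bound_uniform(alpha, n, m, delta):
    # Uniform-in-alpha FCP bound (<ref>): alpha + lambda^DKW if alpha >= 1/(n+1), else 0.
    lam = lambda_dkw(n, m, delta)
    alpha = np.asarray(alpha, dtype=float)
    return np.where(alpha >= 1.0 / (n + 1), alpha + lam, 0.0)

# Example with the sample sizes used in the experiments below (n = m = 75):
print(lambda_dkw(75, 75, 0.2), fcp_bound_uniform([0.05, 0.1, 0.2], 75, 75, 0.2))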
§.§ Numerical experiments To illustrate the performance of the method, we consider the following proof-of-concept regression model: (W_i,Y_i) i.i.d. with Y_i|W_i ∼𝒩(μ(W_i),σ^2)for some unknown function μ and parameter σ>0. To accommodate the transfer learning setting, we assume that we observe X_i=f_1(W_i) inand X_i=f_2(W_i) in ∪ for some transformations f_1 and f_2.Three conformal procedures[Python code for (PI) based on implementation of <cit.>.] ℐ=𝒞(α)=(𝒞_i(α))_i∈m are considered which differ only in the construction of the scores: first, ℐ^ consists in using a predictor of the usual form μ̂(·,𝒟_) hence ignoring the distribution difference betweenand ∪ (no transfer) with a RBF kernel ridge regression; the second procedure ℐ^ ignores completelyand works by splittingin two new samples of equal size to apply the usual approach with these new (reduced) samples (transfer not needed); the third approach ℐ^ is the proposed one, and uses the transfer predictor μ̂(·;(,))based on optimal transport proposed by <cit.>. While all methods provide the correct (1-α) marginal coverage, we see from Figure <ref> that ℐ^ is much more accurate, which shows the benefit of using transfer learning and adaptive scores. Here, | |=5000,n=m=75, μ(x)=cos(x),W_i∼𝒰(0,5), f_1(x)=x, f_2(x)=0.6x+x^2/25 and σ=0.1. Next, for each of the three methods, the FCP and corresponding bounds (<ref>) are displayed in Figure <ref>. This illustrates both that each bound is uniformly valid in L and that transfer learning reduces the FCP (and thus also the FCP bounds).§ APPLICATION TO NOVELTY DETECTION§.§ Setting In the novelty detection problem, weobserve the two following independent samples: * a training null sample 𝒟_ of n_0 nominal data points in ^d which are i.i.d. with common distribution P_0;* a test sample 𝒟_=(X_i, i∈m) of independent points in ^d either distributed as P_0 or not. The aim is to decide if each X_i is distributed as the training sample (that is, as P_0) or not.This long standing problem in machine learning has been recently revisited with the aim of controlling the proportion of errors among the items declared as novelties <cit.>; let _0={i∈m :X_i∼ P_0} corresponding to the set of non-novelty in the test sample and consider the false discovery proportion(R)=|R∩_0|/|R|∨ 1,for any (possibly random) subset R⊂m corresponding to the X_i's declared as novelties.The advantage of considering (R) for measuring the errors has been widely recognized in the multiple testing literature since the fundamental work of <cit.> and its popularity is nowadays increasing in large scale machine learning theory, see <cit.>, among others.The main advantage of (R) is that the number of errors|R∩_0| is rescaled by the number of declared novelties |R|, which makes it scale invariant with respect to the size m of the test sample, so that novelty detection can still be possible in large scale setting. §.§ Adaptive scores Following <cit.>, we assume that scores are computed as follows:* Split the null sample 𝒟_ into 𝒟_ and𝒟_=(X_i, i∈n) for some chosen n∈ (1,n_0);* Compute novelty scores S_i=g(X_i), i∈n+m,for some score function g:^d→ (discussed below);* Compute conformal p-values as in (<ref>).In the work of <cit.>, the score function is built from 𝒟_ only, using a one-class classification method (classifier solely based on null examples), which makes the scores independent conditional to . 
The follow-up work <cit.> considers a score function depending both on 𝒟_ and 𝒟_∪𝒟_ (in a permutation-invariant way of the sample 𝒟_∪𝒟_), which allows to use a two-class classification method including test examples. Doing so, the scores are adaptive to the form of the novelties present in the test sample, whichsignificantly improves novelty detection (in a nutshell: it is much easier to detect an object when we have some examples of it).While the independence of the scores is lost, an appropriate exchangeability property is maintained so that we can apply our theory in that case, by assuming in addition (<ref>).§.§ Methods and FDP bounds Let us consider any thresholding novelty procedureℛ(t):={i∈m : p_i≤ t},t∈ (0,1).Then the following result holds true.In the above novelty detection setting and under Assumption <ref>, the family of thresholding novelty procedures (<ref>) is such that, with probability at least 1-δ, we have for allt∈ (0,1),(ℛ(t))≤m̂_0 I_n(t) + m̂_0 λ^_δ,n,m̂_0/1∨ |ℛ(t)|=:^_t,δ,where λ^_δ,n,m̂_0 is given by (<ref>) andm̂_0 is any random variable such thatm̂_0≥max{r : inf_t∑_i=1^mp_i> t + r λ^_δ,n,r/1-I_n(t)≥ r},where r is in the range m and the maximum is equal to m if the set is empty. The proof is provided in Appendix <ref>.Among thresholding procedures (<ref>), AdaDetect <cit.> is obtained by applying the Benjamini-Hochberg (BH) procedure <cit.> to the conformal p-values. It is proved to control the expectation of the FDP (that is, the false discovery rate, FDR) at level α. Applying Corollary <ref> provides in addition an FDP bound for AdaDetect, uniform in α, seeAppendix <ref>.§.§ Numerical experiments We follow the numerical experiments on “Shuttle” datasets of <cit.>[The Python code uses the implementation of the procedure AdaDetect of<cit.>.].In Figure <ref>, we displayed the true FDP and the corresponding bound (<ref>) when computing p-values based on different scores: the non-adaptive scores of <cit.> obtained withisolation forest one-class classifier; and the adaptive scores of <cit.> obtained with random forest two-class classifier. While the advantage of considering adaptive scores is clear (smaller FDP and bound) , it illustrates that the bound is correct simultaneously on t.Additional experiments are provided in Appendix <ref>. § CONCLUSION The main takeaway from this work is the characterization of a “universal” joint distribution P_n,m for conformal p-values based on n calibration points andm test points. We derived as a consequence a non-asymptotic concentration inequality for the p-value empirical distribution function; numerical procedures can also be of use for calibration in practice. This entails uniform error bounds on the false coverage/false discovery proportion that hold with high probability, while standard results are only marginal or in expectation and not uniform in the decision. Since the results hold under the score exchangeability assumption only, they are applicable to adaptive score procedures using the calibration and test sets for training.§ ACKNOWLEDGEMENTS We would like to thank Anna Ben-Hamou and Claire Boyer for constructive discussions and Ariane Marandon for her support with the code. The authors acknowledge the grants ANR-21-CE23-0035 (ASCAI) and ANR-19-CHIA-0021-01 (BISCOTTE) of the French National Research Agency ANR and the Emergence project MARS. § EXACT FORMULAS FOR P_N,M In this section, we provide new formulas for the distribution P_n,m given by (<ref>). 
First let for j=(j_1,…,j_m)∈n+1^m, M(j):=(M_k(j))_k∈n+1 whereM_k(j):=|{i∈m :j_i=k}| is the number of coordinates of j equal to k, for k∈n+1, and M(j)!:=∏_k=1^n+1 (M_k(j)!).P_n,m corresponds to the distribution of the colors of m successive draws in a standard Pólya urn model with n+1 colors labeled as [1]ł/n+1,ł∈n+1 (with an urn starting with 1 ball of each color). That is,for p∼ P_n,m in (<ref>), we have (i) Sequential distribution: for all i∈0,m-1, the distribution of p_i+1 conditionally on p_1,…,p_i does not depend on m and is given by 𝒟(p_i+1 |p_1,…,p_i) = ∑_j=1^n+11+∑_k=1^ip_k=j/(n+1)/n+1+iδ_j/(n+1). (ii) Joint distribution: for all vectors j∈n+1^m,p=j/n+1 =M(j)!n!/(n+m)!, (iii) Histogram distribution: the histogram of p is uniformly distributed on the set of histograms of m-sample into n+1 bins, that is, for all m=(m_1,…,m_n+1)∈0,m^n+1 with m_1+…+m_n+1=m, (M[1](n+1)p=m) = n+mm^-1. In particular, conditionally on M[1](n+1)p, the variable p is uniformly distributed on the set of possible trajectories, that is,for all vectors j∈n+1^m, p=j/n+1 | M[1](n+1)p=M[1]j = M[1]j!/m!. Theorem <ref> is proved in Section <ref> for completeness.Theorem <ref> (i) gives the mechanism of the Pólya urn model: Namely, the urn first contains one ball of each of the n+1 colors, so p_1 has a uniform distributed on [1]ł/n+1,ł∈n+1; then, given p_1=ℓ/(n+1), we have drawn a ball of color ℓ and we put back this ball in the urn with another one of the same color ℓ, so p_2 is generated according to the distribution on [1]ł/n+1,ł∈n+1 with equal chance (=1/(n+2)) of generating k/(n+1), k≠ł, and twice more chance (=2/(n+2)) of generating ł/(n+1). Recursively, given p_1,…,p_i, the random variable p_i+1 is generated in [1]ł/n+1,ł∈n+1 according to the sizes of the histogram of the sample ((n+1)p_1,…, (n+1)p_i), see Figure <ref>.Theorem <ref> (ii) provides the exact dependency structure between the p-values: for instance, M(j)!=1 when the coordinates of j=(j_1,…,j_m) are all distinct, while M(j)!=m! when the coordinates of j=(j_1,…,j_m) are the same. This means that the distribution slightly favors the j with repeated entries. This shows that the conformal p-values are not i.i.d. but have a positive structure of dependency. This is in accordance with the specific positive dependence property (called PRDS) already shown by <cit.>. Theorem <ref> (iii) shows an interesting non-concentration behavior of P_n,m when n is kept small: if the p_i's were i.i.d. uniform on [1]ł/n+1,ł∈n+1 then the histogram M((n+1)p) would follow a multinomial distributionand the histogram would concentrate around the uniform histogram as m tends to infinity. Rather, the p_i's are hereonly exchangeable, not i.i.d., and the histogram does not concentrate when m tends to infinity while n is small. As a case in point, for n=1, M_1((n+1)p) is uniform on m, whatever m is, see (<ref>).Nevertheless, we will show in the next section that a concentration occurs when both mand n tend to infinity. Note that P^U in (<ref>) is the conditional distribution that one would get by applying the de Finetti theorem to the infinite exchangeable sequence (p_i)_i≥ 1 with(p_1,…,p_m)∼ P_n,m for all m. Relation to <cit.>. As a consequence of (<ref>), given any I⊂ℓ/n+1, ℓ∈n+1, we have(p_i+1∈I |p_1,…,p_i) = |I|+N_i(I)/n+1+i =(p_i+1∈I |N_i(I)),where N_i(I) = | k ∈i: p_k ∈ I)|. 
In words, it means that the Pólya urn model continues to hold if we group (or “re-paint”) the initial (n+1) colors into only two colors, determined by whether the original color label belongs to I or not.In particular, we recover the Pólya urn model put forward by <cit.>: letting Z_i=p_i>α, we have thatfor all i∈0,m-1, the distribution of Z_i+1 conditionally on Z_1,…,Z_i does not depend on m and is given by 𝒟(Z_i+1 |Z_1,…,Z_i) =⌊α(n+1)⌋+∑_k=1^iZ_k=j/n+1+iδ_0 +⌈ (1-α)(n+1)⌉+∑_k=1^iZ_k=j/n+1+iδ_1.Hence, the distribution of (Z_1,…,Z_m) corresponds to the distribution of the colors of m successive draws in a standard Pólya urn model with 2 colors labeled as {0,1} (with an urn starting with ⌊α(n+1)⌋ balls 0 and ⌈ (1-α)(n+1)⌉ balls 1).In particular, we recover Theorem 1 of <cit.>.In the setting of Theorem <ref>, we have for all α∈ (0,1) and k∈m, by denoting k_0=⌈α(n+1)⌉,F_m(α) = k/m=mk(n-k_0+1)… (n-k_0 + m-k)× k_0 …(k_0+k-1)/(n+1)… (n+m) . By Proposition <ref>, (<ref>) and the notation of (<ref>), we have F_m(α) = k/m =mk[ (U_(k_0))^k (1-U_(k_0))^m-k]=mkn!/(k_0-1)! (n-k_0)!∫_0^1u^k+k_0-1 (1-u)^m-k+n-k_0 du=mkn!/(k_0-1)! (n-k_0)!(k+k_0-1)!(m+n-k-k_0)!/(m+n)!,by using that U_(k_0) follows a beta distribution with parameter (k_0,n+1-k_0)and by using the beta distribution with parameter (k+k_0,m+n+1-k-k_0). This shows the result.§ NUMERICAL BOUNDS AND TEMPLATES The bound proposed in Theorem <ref> are explicit and elegant, but can be conservative in some cases and we develop here the numerical approach mentionedin Remark <ref>. We rely on showing (<ref>), which immediately implies a confidence envelope on F_m because ∀ k∈𝒦 : F_m(t_k)≤k/m =∀ k∈𝒦 : F_m(t_k)< k+1/m={∀ k∈𝒦 :p_(k+1) >t_k }.To establish (<ref>), we use the notion of template introduced by <cit.>, see also <cit.>. A template is a one-parameter family t_k(λ), λ∈ [0,1], k∈𝒦⊂m, such that t_k(0) = 0 and t_k(·) is continuous increasing on [0, 1].From above, we have for all λ,{∀ k∈𝒦 : F_m(t_k(λ))≤ k/m } ={∀ k∈𝒦 :p_(k+1) >t_k(λ) }={min_k∈𝒦{t_k^-1(p_(k+1))}>λ}.Hence, let us consider λ(δ,n,m) = maxλ∈Λ : _p∼ P_n,m[2]min_k∈𝒦{t_k^-1(p_(k))}>λ≥ 1-δ,where Λ is the finite set {t_k^-1(ℓ/(n+1)),k∈𝒦, ℓ∈n+1}.Then by Proposition <ref> we have the following result.Let us consider the process F_m defined by (<ref>), the distribution P_n,m given by (<ref>), a template t_k(λ), λ∈ [0,1], k∈𝒦 as above, and assume (<ref>) and (<ref>). Then we have for all δ∈ (0,1), n,m≥ 1,∀ k∈𝒦 : F_m[2]t_k[1]λ(δ,n,m)≤k/m≥ 1-δ,for λ(δ,n,m) given by (<ref>). Here are two template choices: * The linear template t_k(λ)=k λ /m, 𝒦=m, which leads to the inequality∃ t ∈ (0,1): F_m( t)> ⌈ tm/λ(δ,n,m)⌉/ m≤δ,which recovers the Simes inequality (<ref>) with an adjusted scaling parameter. * The “beta template” <cit.>, for which t_k(λ) is the λ-quantile of the distribution (k,m+1-k) and thus Λ={F_(k,m+1-k)(ℓ/(n+1)),k∈𝒦, ℓ∈n+1}. For instance, it can be used with 𝒦={1+k⌈log(m)⌉ ,k∈K}. § PROOFS§.§ Proof of Proposition <ref>Assumption (<ref>) implies that marginal score distribution is atomless, so that F is continuous and 1-F(S_i) has Unif[0,1] distribution. Therefore, (U_1,…,U_n+m)=(1-F(S_1),…,1- F(S_n+m)) are i.i.d. ∼Unif[0,1]. Recallp_i=(n+1)^-1[3]1+∑_j=1^n S_j≥ S_n+i, i∈m,since p_i is a function of S_n+i andonly, it follows that conditionally on , the variables p_1,…,p_m are independent (and identically distributed).Since F is continuous, it holds F^†(F(S_i))=S_i almost surely, where F^† is the generalized inverse of F. Therefore S_j ≥ S_n+i = U_j ≤ U_n+i almost surely. 
Hence, p_1 is distributed as(n+1)^-1[3]1+∑_j=1^n U_j≤ U_n+1 =(n+1)^-1[3]1+∑_j=1^n U_(j)≤ U_n+1, where U_(1)≤…≤ U_(n) denotes the order statistics of (U_1,…,U_n). Therefore, we have for all x∈ [0,1], (p_1≤ x | ) =[3]1+∑_j=1^n U_(j)≤ U_n+1≤ x(n+1) | =[3]1+∑_j=1^n U_(j)≤ U_n+1≤⌊ x(n+1) ⌋| = (U_n+1<U_(⌊ x(n+1) ⌋) |) = U_(⌊ x(n+1) ⌋),which finishes the proof. §.§ Proof of Proposition <ref> If there are no tied scores, which by assumption (<ref>) happens with probability 1, the ranks R_i of the ordered scores are well-defined and the vector (p_1,…,p_m) is only a function of the rank vector (R_1,…,R_n+m). Namely, R_i≤R_j if and only if S_i≤ S_j, and the conformal p-values (<ref>) can be written asp_i=(n+1)^-1[3]1+∑_j=1^n R_j≥R_n+i, i∈m. Now, by (<ref>), the vector (R_1,…,R_n+m) is uniformly distributed on the permutations of n+m. Any score distribution satisfying (<ref>) and (<ref>) therefore gives rise to the same rank distribution, and thus the same joint p-value distribution. This joint distribution has been identified as (<ref>)-(<ref>) from the result of Proposition <ref> in the particular case of i.i.d. scores. (Thus the i.i.d. assumption turns out to be unnecessary for what concerns the joint, unconditional distribution of the p-values, but provides a convenient representation.)§.§ Proof of Theorem <ref>Proof of (ii)By (<ref>),(<ref>) the permutation that orders the scores (S_1,…,S_n+m) that is σ such that S_()=(S_σ(1)> … >S_σ(n+m)), is uniformly distributed in the set of permutations of n+m. In addition, σ is independent of the order statistics S_() and we seek for identifying the distribution of (p_1,…,p_m) conditionally on S_(). Next, using again (<ref>), we can assume without loss of generality that j_1≤…≤ j_m when computing the probability in (<ref>).Now, due to the definition (<ref>), the event {(p_1,…,p_m)=(j_1/(n+1),…,j_m/(n+1)} corresponds to a specific event on σ. Namely, by denoting (a_1,…,a_ł) the vector of unique values of the set {j_1,…,j_m} with 1≤ a_1<…<a_ł≤ n, and M_k=∑_i=1^m j_i=a_k, 1≤ k≤ℓ, the corresponding multiplicities, the above event corresponds to the situation S_σ(1)>⋯>S_σ( a_1-1)_ a_1-1null scores>S_σ(a_1)>⋯>S_σ(a_1+M_1-1)_M_1 test scores in {S_n+1,…,S_n+M_1}>S_σ( a_1+M_1)>⋯>S_σ( a_2+M_1-1)_a_2- a_1 null scores>S_σ(a_2+M_1)>⋯>S_σ_1(a_2+M_1+M_2-1)_M_2 test scores in {S_n+M_1+1,…,S_n+M_1+M_2}>⋯S_σ( a_ℓ-1+M_1+⋯+M_ℓ-1)>⋯>S_σ( a_ℓ+M_1+⋯+M_ℓ-1-1)_a_ℓ- a_ℓ-1null scores>S_σ(a_ℓ+M_1+⋯+M_ℓ-1)>⋯>S_σ(a_ℓ+m-1)_M_ℓ test scores in {S_ n+M_1+⋯+M_ℓ-1+1,…,S_n+m}>S_( a_ℓ+m)>⋯>S_(n+m)_n- a_ℓ+1 null scores.This event can be formally described as follows: {∀ k ∈ℓ :σ[1]{a_ℓ+M_1+⋯+M_k-1,…,a_ℓ+M_1+⋯+M_k-1} = {n+M_1+⋯+M_k-1+1,…,n+M_1+⋯+M_k}}.Since σ is uniformly distributed in the set of permutations of n+m, the probability of this event (conditionally on S_()) is equal to n! (∏_k=1^ℓ (M_k!))/(n+m)!, which yields(<ref>). Proof of (i)By using (<ref>) of (ii), we have (p_i+1=j_i+1/(n+1) |(p_1,…,p_i)=(j_1/(n+1),…,j_i/(n+1))) = M(j_1,…,j_i+1)! n!/(n+i+1)!/M(j_1,…,j_i)! n!/(n+i)!.Now, we haveM(j_1,…,j_i+1)! =∏_j=1^n+1[(∑_k=1^i+1j_k=j)!]=∏_j=1^n+1[(∑_k=1^ij_k=j+j_i+1=j)!]=∏_j=1^n+1(∑_k=1^ij_k=j)! [1+j_i+1=j∑_k=1^ij_k=j]=M(j_1,…,j_i)![1+j_i+1=j∑_k=1^ij_k=j].This proves (<ref>). Proof of (iii)For all m=(m_1,…,m_n+1)∈0,m^n+1 with m_1+…+m_n+1=m,we have(M((n+1)p)=m)=∑_j∈n+1^mM(j)=m((n+1)p=j)=m! n!/(n+m)!∑_j∈n+1^mM(j)=m=m! 
n!/(n+m)!m!/m!= n!m!/(n+m)!,where we have used (ii) and the multinomial coefficient.§.§ Proof of Theorem <ref> First observe that the LHS of (<ref>) is 0 if λ≥ 1 so that we can assume λ<1.Let usprove (<ref>) with the more complex boundB^(λ,n,m) := n/n+m e^-2m λ^2 + m/n+m e^-2nλ^2 +C_λ,n,m2√(2π)λ nm/(n+m)^3/2e^-2nm/n+mλ^2,where C_λ,n,m=(𝒩(λμ,σ^2)∈ [0,λ])<1, for σ^2=(4(n+m))^-1 and μ=n(n+m)^-1.Let us comment the expression (<ref>) of B^(λ,n,m). As we can see, the role ofn and m are symmetric (except in C_λ,n,m, that we can always further upper-bound by 1), and the two first terms are a convex combination of the usual DKW bounds for m and n i.i.d. variables, respectively. The third term is a “crossed” term between n and m, which becomes negligible if n≫ m or n≪ m but should be taken into account otherwise. Below, we establish [3]sup_t∈ [0,1][1]F_m(t) - I_n(t) > λ ≤ B^(λ,n,m); [3]sup_t∈ [0,1][1]-F_m(t) + I_n(t) > λ ≤ B^(λ,n,m); [1]F_m - I_n_∞ > λ ≤2B^(λ,n,m).The result will be proved from (<ref>) because B^(λ,n,m)≤ B^(λ,n,m) since n∨ m≥ nm/(n+m) and C_λ,n,m≤ 1. The proof relies on Proposition <ref> and the representation (<ref>). Let U=(U_1,…,U_n) i.i.d. ∼ U(0,1),and denote F^U(x)=U_(⌊ (n+1)x⌋), x∈ [0,1]. Conditionally on U, draw (q_i(U),i∈m)i.i.d. of common c.d.f. F^U and letG_m(t)=m^-1∑_i=1^m q_i(U)≤ t, t∈ [0,1],the empirical c.d.f. of (q_i(U),i∈m). By Proposition <ref>, we have that F_m has the same distribution as G_m (unconditionally on U), so that for any fixed n,m≥ 1 and λ>0,[3]sup_t∈ [0,1][1]F_m(t) - I_n(t) > λ =[3]sup_t∈ [0,1][1]G_m(t) - I_n(t) > λ |U. We now prove the bound (<ref>) (the proof for (<ref>) is analogous). Denote Z=sup_t∈ [0,1][1]F^U(t)-I_n(t)∈ [0,1].We write by (<ref>) and the triangle inequalitysup_t∈ [0,1][1]F_m(t) - I_n(t) > λ ≤(sup_t∈ [0,1](F_m(t) - F^U(t)) + Z> λ |U)≤(sup_t∈ [0,1](F_m(t) - F^U(t))≥(λ- Z)_+ |U)≤ e^- 2m(λ- Z)^2_+.The last inequality above is the DKW inequality <cit.> applied to control the inner conditional probability, since conditionally to U, F_m is the e.c.d.f. of (q_i(U),∈m), which are i.i.d. ∼ F^U; and Z conditional to U is a constant. Now the last bound can be rewritten as∫_0^1e^- 2m(λ- Z)^2_+> v dv= e^-2m λ^2 +∫_e^-2m λ^2^1(λ- Z)_+< √(log(1/v)/(2m)) dv = e^-2m λ^2 +∫_e^-2m λ^2^1 λ- Z< √(log(1/v)/(2m)) dv = e^-2m λ^2 +∫_e^-2m λ^2^1 Z> [1]λ-√(log(1/v)/(2m)) dv.To upper bound the integrand above, denote H_n the ecdf of (U_1,…,U_n); it holds for any x ∈ [0,1]:Z>x= [3]sup_t∈ [0,1](U_(⌊ (n+1)t⌋)-⌊ (n+1)t⌋/(n+1) )> x=(∃ k∈n :U_(k) > x+k/(n+1))=[3]∃ k∈n : ∑_i=1^nU_i≤ x+k/(n+1)≤ k-1=(∃ k∈n : H_n[1]x+k/(n+1) -[x+k/(n+1)] ≤ (k-1)/n -[x+k/(n+1)] ).≤ P(∃ k∈n : H_n[1]x+k/(n+1) -[x+k/(n+1)] ≤ -x )≤ e^-2n x^2 ,where we used (k-1)/n≤ k/(n+1) in the first inequality, and the left-tail DKW inequality for the last one. Plugging this into (<ref>) yields∫_0^1(e^- 2m(λ- Z)^2_+> v) dv≤ e^-2m λ^2 +∫_e^-2m λ^2^1 e^-2 n (λ-√(log(1/v)/(2m)))^2dv.Now letting u=√(log(1/v)/(2m)) (hence v=e^-2m u^2, dv=-4mu e^-2m u^2du), we obtain[3]sup_t∈ [0,1][1]F_m(t) - I_n(t) > λ ≤ e^-2m λ^2 +4 m ∫_0^λ ue^-2 n (λ-u)^2 e^-2m u^2du.Now,by denoting σ^2=[1]4(n+m)^-1 and μ=n(n+m)^-1, we gete^2nm/n+mλ^2∫_0^λ ue^-2 n (λ-u)^2 e^-2m u^2du = ∫_0^λ ue^-2(n+m)(u-nλ/n+m)^2 du=∫_0^λ ue^-1/2σ^2 (u-λμ )^2du=∫_0^λ (u-λμ)e^-1/2σ^2 (u-λμ )^2du+∫_0^λλμ e^-1/2σ^2 (u-λμ )^2du= σ^2e^-2λ^2n^2/m+n-σ^2e^-2λ^2m^2/m+n+λμ√(2π)σ C_λ,n,m.where C_λ,n,m=[1]𝒩(λμ,σ^2)∈ [0,λ]. 
Hence, ∫_0^λ ue^-2 n (λ-u)^2 e^-2m u^2du =e^-2nm/n+mλ^2(σ^2e^-2λ^2n^2/m+n-σ^2e^-2λ^2m^2/m+n+λμ√(2π)σ C_λ,n,m)=σ^2e^-2nλ^2-σ^2 e^-2mλ^2+λμ√(2π)σ C_λ,n,m e^-2nm/n+mλ^2.This leads to e^-2m λ^2+4 m ∫_0^λ ue^-2 n (λ-u)^2 e^-2m u^2du=n/n+m e^-2m λ^2 + m/n+me^-2nλ^2 + λ√(2π)nm/(n+m)^3/2 2C_λ,n,me^-2nm/n+mλ^2,which finishes the proof of (<ref>).Finally, let us prove B^(λ^_δ,n,m,n,m)≤δ for λ^_δ,n,m=Ψ^(r)(1) where Ψ^(r) denotes the function Ψ iterated r times (for an arbitrary integer r≥ 1), whereΨ(x)=1∧Ψ(x);Ψ(x) := log(1/δ)+log(1+ √(2π)2τ_n,mx/(n+m)^1/2)/2τ_n,m^1/2.If Ψ(1) = 1, then Ψ^(r)(1) = 1 for all r and the announced claim holds since B^(1,n,m)=0 by definition. We therefore assume Ψ(1) <1 from now on. Since Ψ is non-decreasing, by an immediate recursion we have Ψ^(r+1)(1) ≤Ψ^(r)(1) <1, for all integers r.On the other hand, note that for any x∈(0,1) satisfying Ψ(x) ≤ x <1, it holds Ψ(x) = Ψ(x) and thusB^(Ψ(x),n,m) = 1+2√(2π)Ψ(x) τ_n,m/(n+m)^1/21+2√(2π)x τ_n,m/(n+m)^1/2^-1δ≤δ.Since we established that x=Ψ^(r)(1) satisfies Ψ(x) ≤ x for any integer r the claim follows. § EXPLICIT CONTROL OF (<REF>) FOR Α=0By applying (<ref>) with k=0, the control (<ref>) for α=0 is satisfied by choosingt_0,δ=maxk/(n+1) : (n-k+1)… (n-k + m)/(n+1)… (n+m) ≥ 1-δ, k∈n+1. § PROOF OF COROLLARY <REF> Let m_0=|_0|. We establish the following more general result. Let m̅_0 achieving min_r∈m_0,m(r λ^_δ,n,r). With probability at least 1-δ, we have both∀ t∈ (0,1),(ℛ(t))≤min_r ∈m̅_0,m(r I_n(t)+r λ^_δ,n,r)/1∨ |ℛ(t)|;m_0≤maxr∈m : inf_t ∑_i=1^mp_i> t + r λ^_δ,n,r/1-I_n(t)≥ r.Lemma <ref> implies Corollary <ref> because if m̂_0 is as in (<ref>), with probability at least 1-δ, m̂_0≥m_0 by (<ref>), and by (<ref>)∀ t∈ (0,1),(ℛ(t))≤min_r ∈m̅_0,m(r I_n(t)+r λ^_δ,n,r)/1∨ |ℛ(t)|≤m̂_0 I_n(t)+m̂_0 λ^_δ,n,m̂_0/1∨ |ℛ(t)|. Now, let us prove Lemma <ref>.First, in the work of <cit.>, it is proved that (S_1, …, S_n, S_n+i,i∈_0) is exchangeable conditionally on (S_n+i,i∈_1) (see Lemma 3.2 therein). Hence, the vector (S_1, …, S_n, S_n+i,i∈_0), of size n+m_0, and the p-value vector (p_i,i∈_0), of size m_0, fall into the setting described in Section <ref> with calibration scores being (S_i)_i∈n and test scores being (S_n+i)_i∈_0. By Proposition <ref>, this means (p_i,i∈_0)∼ P_n,m_0. Second, for m̅_0 defined as in the statement, let F_m̅_0( t)=(m̅_0)^-1∑_i=1^m̅_0q_i≤ t, t∈ [0,1], with (q_1,…,q_m̅_0)∼ P_n,m̅_0 and (q_i, i∈m_0)=(p_i,i∈_0) (this construction is possible because the restriction of P_n,m̅_0 to the m_0 first coordinates is the distribution P_n, m_0 thanks to Theorem <ref> (i)) and consider the eventΩ=sup_t∈ [0,1](F_m̅_0(t) - I_n(t)) ≤λ^_δ,n,m̅_0.By applying Theorem <ref> and the explicit bound (<ref>), we have(Ω)≥ 1-δ. Next, |ℛ(t)∩_0|=m_0F_m_0(t)≤m̅_0F_m̅_0(t), which is at most m̅_0 I_n(t) + m̅_0 λ^_δ,n,m̅_0 on Ω. This gives(<ref>)because m̅_0 I_n(t)+m̅_0 λ^_δ,n,m̅_0/1∨ |ℛ(t)| = m̅_0 I_n(t)+min_r ∈m̅_0,m(r λ^_δ,n,r)/1∨ |ℛ(t)| = min_r ∈m̅_0,m(r I_n(t)+r λ^_δ,n,r)/1∨ |ℛ(t)| . Let us now turn to prove (<ref>) on Ω. For this, let us observe that on this event, we have for all t∈(0,1),∑_i=1^mp_i> t≥∑_i=1^m̅_0q_i> t = m̅_0(1-F_m̅_0(t)) ≥m̅_0(1-I_n(t)) - m̅_0λ^_δ,n,m̅_0Hence, m̅_0 is an integer r∈m such that inf_t (∑_i=1^mp_i> t + r λ^_δ,n,r/1-I_n(t))≥ r, which gives (<ref>). 
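To make the previous bounds operational, the estimator m̂_0 of (<ref>) and the resulting uniform FDP bound of Corollary <ref> can be computed as in the sketch below. This is our own illustrative code (function names are ours), with the explicit iterated-Ψ level of Theorem <ref> re-defined as the helper lambda_dkw.

import numpy as np

def lambda_dkw(n, m, delta, n_iter=5):
    # Explicit level lambda^DKW_{delta,n,m} of Theorem <ref> (iterated Psi map).
    tau = n * m / (n + m)
    x = 1.0
    for _ in range(n_iter):
        x = min(1.0, np.sqrt((np.log(1.0 / delta)
                              + np.log(1.0 + 2.0 * np.sqrt(2.0 * np.pi) * tau * x / np.sqrt(n + m)))
                             / (2.0 * tau)))
    return x

def I_n(t, n):
    # Discretized identity I_n(t) = floor((n+1) t) / (n+1); small tolerance against round-off.
    return np.floor((n + 1) * np.asarray(t, dtype=float) + 1e-9) / (n + 1)

def m0_upper_bound(pvals, n, delta):
    # Largest r in {1,...,m} such that, for every t in {l/(n+1), l = 0,...,n},
    # ( #{i : p_i > t} + r * lambda^DKW_{delta,n,r} ) / (1 - I_n(t)) >= r; returns m if no r qualifies.
    pvals = np.asarray(pvals, dtype=float)
    m = pvals.size
    grid = np.arange(0, n + 1) / (n + 1.0)
    survivals = (pvals[None, :] > grid[:, None]).sum(axis=1)
    for r in range(m, 0, -1):
        if np.all((survivals + r * lambda_dkw(n, r, delta)) / (1.0 - I_n(grid, n)) >= r):
            return r
    return m

def fdp_bound(t, pvals, n, delta):
    # Uniform-in-t FDP bound of Corollary <ref> for the thresholding procedure R(t) = {i : p_i <= t}.
    pvals = np.asarray(pvals, dtype=float)
    m0_hat = m0_upper_bound(pvals, n, delta)
    n_rej = max(1, int((pvals <= t).sum()))
    return m0_hat * (I_n(t, n) + lambda_dkw(n, m0_hat, delta)) / n_rej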
§ THE SIMES INEQUALITY As proved in <cit.>, and since the joint distribution of the conformal p-values does not change from one context to another (Proposition <ref>), the conformal p-values are positively regressively dependent on each one of a subset (PRDS)under (<ref>) and (<ref>), see <cit.> for a formal definition of the latter.Hence, by <cit.>, the Simes inequality <cit.> is valid, that is, for all λ>0, we havesup_t∈ (0,1](F_m(t)/t) ≥λ ≤ 1/λ.This envelope can be applied in the two applications of the paper as follows: (PI) Under the condition of Corollary <ref>, the bound^_α,δ =(α/δ) α≥ 1/(n+1)is valid for (<ref>).(ND) Under the condition of Corollary <ref> the following control is valid( ∀ t∈ (0,1),(ℛ(t))≤^_t,δ)≥ 1-δ,for^_t,δ :=m̂_0 t/δ/1∨ |ℛ(t)|,for any estimator m̂_0≥ m∧inf_t∈ (0,δ)∑_i=1^m p_i>t/1-t/δ. § UNIFORM FDP BOUND FOR ADADETECT AdaDetect <cit.> is obtained by applying the Benjamini-Hochberg (BH) procedure <cit.> to the conformal p-values, that is, _α:=ℛ(αk̂_α/m), wherek̂_α := maxk∈0,m : ∑_i=1^m p_i≤α k/m≥ k.It is proved there to control the false discovery rate (FDR), defined as the mean of the FDP:(_α):=[(_α)] ≤α m_0/m. Applying Corollary <ref>, we obtain on the top of the in-expectation guarantee (<ref>) the following uniform FDP bound for _α: with probability at least 1-δ, we have∀α∈ (0,1),(_α)≤^_α,δ^_α,δ :=αm̂_0/m + m̂_0 λ^_δ,n,m̂_0/k̂_α∨ 1k̂_α>0,where k̂_α is the rejection number (<ref>) of _α and m̂_0 satisfies (<ref>).In addition, we consider ^_α,δ :=m̂_0 α/mδk̂_α>0,for any estimator m̂_0 given by (<ref>).§ ADDITIONAL EXPERIMENTSIn this section, we provide experiments to illustrate the FDP confidence bounds for AdaDetect, as mentioned in Remark <ref> and Section <ref>. The two procedures used are of the AdaDetect type (<ref>) but with two different score functions: the Random Forest classifier from <cit.> (adaptive score), and the one class classifier Isolation Forest as in(non adaptive score). The hyperparameters of these two machine learning algorithms are those given by <cit.>.The FDP and the corresponding bounds are computed for the two procedures. The true discovery proportion is defined by(R)=|R∩_1|/|_1|∨ 1,where _1= m∖_0; this criterion will be considered in addition to the FDP to evaluate the detection power of the procedures.Following the numerical experiments of <cit.> and <cit.>, we consider the three different real data from OpenML dataset <cit.> given in Table <ref>. The results are displayed in Figure <ref> for comparison of adaptive versus non-adaptive scores for the different FDP confidence bounds and the TDP. On Figure <ref>, we focus on the adaptive scores and corresponding FDP bounds only; we compare the effect (on the bounds) of demanding a more conservative error guarantee (δ=0.05 versus δ=0.2), as well as the effect of estimating m_0 via (<ref>) instead of just using the inequality (<ref>) with m̂_0 = m.The high-level conclusions are the following: * using adaptive scores rather that non-adaptive ones results in a performance improvement (better true discovery proportion for the same target FDR level)* for small target FDR level α, the Simes upper bounds ^_α,δ are sharper than the DKW bound, elsewhere the new DKW bound is sharper than Simes. Furthermore, the relevant region for the Simes bound having the advantage becomes all the more tenuous as the error guarantee for the bound becomes more stringent (smaller δ). 
The reason is that the Simes upper bound is linear in δ^-1, while the DKW bound grows only as the square root of log(1/δ). * using the estimator m̂_0 from (<ref>) instead of the trivial choice m̂_0 = m yields sharper bounds on the FDP and is therefore advantageous. | http://arxiv.org/abs/2310.18108v1 | {
"authors": [
"Ulysse Gazin",
"Gilles Blanchard",
"Etienne Roquain"
],
"categories": [
"stat.ME",
"cs.LG"
],
"primary_category": "stat.ME",
"published": "20231027124830",
"title": "Transductive conformal inference with adaptive scores"
} |
0000-0001-9561-8134]Kwang-il SeonKorea Astronomy & Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 34055, Republic of Korea; [email protected] and Space Science Major, University of Science and Technology, 217, Gajeong-ro, Yuseong-gu, Daejeon 34113, Republic of KoreaObservations of metallic doublet emission lines, particularly Mg2 λλ2796, 2803, provide crucial information for understanding galaxies and their circumgalactic medium. This study explores the effects of resonant scattering on the Mg2 doublet lines and the stellar continuum in spherical and cylindrical geometries. Our findings show that under certain circumstances, resonance scattering can cause an increase in the doublet flux ratio and the escaping flux of the lines beyond what are expected in optically thin spherical media. As expected, the doublet ratio is consistently lower than the intrinsic ratio when the scattering medium is spherically symmetric and dusty. However, if the scattering medium has a disk shape, such as face-on disk galaxies, and is viewed face-on, the doublet ratio is predicted to be higher than two. These results may provide a valuable insight regarding the complexity of the shape and orientation of distant, spatially-unresolved galaxies. The importance of the continuum-pumped emission lines and expanding media is discussed to understand various observational aspects, including doublet flux ratios, which can be lower than 1.5 or higher than two, as well as symmetric or asymmetric line profiles. It is also discussed that the diffuse warm neutral medium would be an essential source of Mg II emission lines. § INTRODUCTION The majority of advances in the circumgalactic medium (CGM) study have been achieved by observing ultraviolet resonance lines, such as Lyα λ1216, Mg2 λλ2796, 2803, and C4 λλ1548, 1551, which are some of the most prominent lines in the spectra produced by the interstellar medium (ISM) and CGM. <cit.> put forward the idea that most quasar absorption lines can be attributed to gas present in the extended halos of normal galaxies, which has a larger cross-sectional area indicated by the galaxy's optical or radio appearance. Since then, resonance lines have been extensively employed for studying the CGM.In particular, Mg2 absorption or emission lines have been demonstrated to be effective in tracing gas in galaxies and their surroundings <cit.>. The studies by <cit.> and <cit.> demonstrated that the manifestation of the Mg2 doublet, either as emission lines or absorption lines, is dependent on the stellar masses and UV spectral slopes of galaxies. The studies revealed that galaxies with lower stellar masses and bluer spectral slopes tend to exhibit Mg2 emission, while galaxies with higher stellar masses and redder spectral slopes tend to show Mg2 absorption. <cit.> made the first discovery of spatially extended Mg2 emission from a bright, starburst galaxy at z=0.69. This discovery was later confirmed by <cit.> using spatially resolved spectroscopy. In recent years, advances in integral field unit spectrograph technology have allowed for the measurement of spatially resolved Mg2 emission in the CGMs of star-forming galaxies <cit.> and the intragroup medium <cit.>. 
The observations of the extended emission provide strong constraints not only on the spatial extent of the outflowing gas but also on the mass-outflow rates of the galaxy when combined with outflow velocity and column density measurements.Resonance lines like Mg2 are dispersed in both space and wavelength as a result of repeated resonance scatterings. Furthermore, the 2796Å line, with its shorter wavelength, has a higher resonance scattering cross-section than the 2803Å line. This increases the probability of the former being absorbed by dust more than that of the latter. These characteristics make it possible to use the spectral shape and flux ratio (F_2796/F_2803) of the Mg2 doublet lines as proxies of the physical conditions of the ISM and CGM. In particular, it may serve as an indirect indicator of Lyman continuum (LyC) leakage <cit.>. <cit.> discovered that in green pea galaxies, Mg2 line profiles tend to be wider and more redshifted when the estimated escape fractions for Lyα and Mg2are low. This suggests that the escape fractions and velocity profiles of Lyα and Mg2are influenced by resonance scattering. Hence, the suggestion has been made to utilize the flux ratio between the Mg2 emission lines as a potential indirect measure for estimating the escape fraction of LyC in the epoch of reionization. In their study of the spatially resolved spectroscopic data of Mg2 in a LyC leaking galaxy located at z=0.36, <cit.> found that the flux ratio R=F_2796/F_2803 ranges from 0.8 to 2.7 across the galaxy. It was discussed that R would decrease as the Mg2 optical depth along the line of sight increases, suggesting that LyC photons escape through regions of high R. They also found that the Mg2 2796Å line tends to be slightly broader than the Mg2 2803Å, particularly in regions with high R; this observation suggest the involvement of resonance scattering. They and <cit.> also found that the anticipated LyC escape fraction, derived from the Mg2 emission lines, shows a correlation with the observed fraction in samples of galaxies exhibiting strong Mg2 emission lines. More recently, <cit.> found that strong LyC emitters (LCEs) tend to exhibit larger equivalent widths (EWs) of Mg2 emission, while non-LCEs show evidence of more scattering and absorption features in Mg2.Achieving precise modeling of resonance radiative transfer (RT) processes is crucial to gain a proper understanding of the observational results of Mg2. <cit.> used Monte Carlo RT techniques to study the propagation of metallic resonance doublet lines, specially the Mg2 λλ2796, 2803 doublet and Fe2 UV1 multiplet at λ≈2600Å. <cit.> also developed a semi-analytical line transfer model to examine the absorption and re-emission line profiles from expanding galactic envelopes. <cit.> developed a 3D RT code RASCAS, which can be used to model the propagation of various resonant lines in numerical simulations of astrophysical objects. <cit.> utilized the code developed by <cit.> to compare the RT models with observational data from a star-forming galaxy. <cit.> investigated the potential of Mg2 as an indicator of LyC leakage by analyzing cosmological radiation hydrodynamics simulations with photoionization and resonant-line RT. The study found that a majority of bright, star-forming galaxies with high LyC escape fractions are likely to also emit Mg2. 
They also found that the Mg2 doublet flux ratio is more sensitive to the amount of dust than to neutral hydrogen, which may limit its usefulness as a LyC leakage indicator to only galaxies in the optically thin regime. <cit.> studied theoretical predictions for Mg2 emission in the CGM of star-forming galaxies in the high-resolution TNG50 cosmological simulations. However, no resonance scattering was considered.As stated above, despite some theoretical efforts, there has been sparse fundamental modeling conducted thus far to comprehend the escape fraction and the line flux ratio of the Mg2 resonance doublet. The primary aim of the present study is to establish a core understanding of how to interpret observational data of the Mg II emission line, especially through a comparison of spherical and non-spherical geometries. This study can also be extended to similar data from other metallic doublet lines, such as C4. The occurrence of absorption or P-Cygni profiles in relatively luminous galaxies implies that there is an involvement of resonance scattering from the stellar continuum radiation near the Mg2 lines. It is therefore investigated how continuum photons produce the Mg2 emission and absorption line features. The main objective of this study is not to provide a detailed comparison between the model predictions and observations, but rather to present the general properties of Mg2 RT.This paper is organized as follows. Section <ref> describes the Monte Carlo RT methods, definitions, and assumptions. Section <ref> presents the simulation results for the emission lines in both spherical and cylindrical mediums. The results for the continuum-pumped emission and absorption line features are also discussed in both geometries. The spatial variation of the doublet flux ratio is examined for sample models in the spherical geometry. Section <ref> discusses the observational implications of the present results, Mg2 emission mechanisms, other metallic resonance lines, and related subjects. A summary is provided in Section <ref>.§ METHODS§.§ Definitions The fine structures of the n=3 quantum state of Mg^+ resemble the Lyα doublet state of neutral hydrogen. However, unlike Lyα, their level splittings are significant and, thus, must not be disregarded. We refer the transition ^2S_1/2↔^2P_1/2^ o (corresponding to the lower frequency) to as “H” and the transition ^2S_1/2↔^2P_3/2^ o (higher frequency) to as “K,” as in the Ca2 λλ3933,3968 doublet lines. The frequencies for the H and K transitions are represented by ν_ H and ν_ K, respectively. By integrating over the one-dimensional Maxwellian velocity distribution of the ionic gas at temperature T, the cross section in a reference frame comoving with the gas fluid results inσ_ν=1/√(π)Δν_ D[χ_ KH(x,a)+χ_ HH(x+x_ HK,a)],where χ_ K=f_ K(π e^2/m_ec) and χ_ H=f_ H(π e^2/m_ec). Here, f_ K=0.608 and f_ H=0.303 are the oscillator strengths of the Mg2 K and H lines, respectively. The K transition has a twice-higher oscillator strength than the H transition due to the difference in the statistical weights of 2J + 1. H(x,a) is the Voigt-Hjerting function given byH(x,a) =a/π∫_-∞^∞e^-y^2/(x-y)^2+a^2dy.In this paper, x is defined as the relative frequency of the photon measured from ν_ K and normalized to the thermal Doppler frequency width Δν_ D=ν_ K(V_ th/c):x =(ν-ν_ K)/Δν_ D,Here, V_ th=(2k_ BT/m_Mg)^1/2=8.27(T/10^5K)^1/2 km s^-1 and a=Γ/(4πΔν_ D) are the thermal speed of gas and the natural width parameter of H(x,a), respectively. 
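In practice, H(x,a) is conveniently evaluated through the Faddeeva function w(z), using H(x,a)= Re[w(x+ia)]. The short sketch below is only illustrative (it is not the LaRT implementation) and evaluates Eq. (<ref>) in cgs units.

import numpy as np
from scipy.special import wofz

def voigt_hjerting(x, a):
    # H(x, a) = Re[ w(x + i a) ], with w the Faddeeva (scaled complex error) function.
    return np.real(wofz(x + 1j * a))

def mg2_cross_section(x, a, dnu_D, x_HK):
    # Eq. (<ref>): sigma_nu = [ chi_K H(x, a) + chi_H H(x + x_HK, a) ] / ( sqrt(pi) dnu_D ),
    # where chi = f * (pi e^2 / m_e c) and pi e^2 / (m_e c) = 2.654e-2 cm^2 Hz.
    chi_K = 0.608 * 2.654e-2
    chi_H = 0.303 * 2.654e-2
    return (chi_K * voigt_hjerting(x, a)
            + chi_H * voigt_hjerting(x + x_HK, a)) / (np.sqrt(np.pi) * dnu_D)

For b=90 km s^-1, this gives σ_ν≈2.8×10^-14 cm^2 at the K line center, consistent with the optical depth normalization adopted below.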
The damping constant (the Einstein A coefficient) of the Mg2 transitions is Γ=2.590×10^8 s^-1. If an additional turbulence motion, characterized by V_ turb, is taken into account, the Doppler parameter is given by Δν_ D=ν_ K(b/c) where b=(V_ th^2+V_ turb^2)^1/2. The frequency difference between the Mg2 K and H lines is Δν_ HK=ν_ K-ν_ H=2745.2 GHz, which is equivalent to a Doppler shift of ∼770 km s^-1. The normalized frequency difference between ν_ K and ν_ H is x_ HK=Δν_ HK/Δν_ D≃93(T/10^5K)^-1/2 for Mg2. For comparison, it's worth noting that x_ HK≃0.032(T/10^5K)^-1/2 for Lyα. Therefore, in most cases, the Mg2 doublet transitions can be treated separately, unless there is a significant velocity variation in the gas, as considerable as ∼770 km s^-1.The optical depth τ_ν(s) of a photon with frequency ν traveling along a path length s is given byτ_ν(s)=∫_0^s∫_-∞^∞n(V_∥)σ_νdV_∥dℓ,where n(V_∥) represents the number density of Mg^+ with the velocity component V_∥ parallel to the photon's propagation direction. In this paper, the total amount of the Mg^+ gas is measured using the column density N_Mg^+ or the optical depth τ_0 at the K line center. §.§ Monte Carlo Algorithms The RT calculation of Mg2 was carried out by updating LaRT, which was originally developed for Lyα RT. A detailed description of the basic RT algorithms employed LaRT can be found in <cit.>, <cit.> and <cit.>. LaRT has been updated to deal with metallic resonance lines other than Lyα and fluorescence emission lines caused by resonant absorption, such as Fe2^* 2626Å and Si2^* 1533Å. The RT algorithms are similar to those of <cit.> and <cit.>, except that LaRT can handle scattering and polarization using a quantum-mechanically correct scattering phase function. In contrast, their codes assume the scattering phase function to be isotropic. A complete description of the update will be given elsewhere. In the following, only the contents relevant to the Mg2 doublet lines are described.The velocity component u_∥=V_∥/V_ th (or u_∥=V_∥/b) of the scattering atom, which is parallel to the photon's propagation direction, is sampled from the following composite distribution function:f_ FS(u_∥|x)=𝒫_ Kf(u_∥|x)+(1-𝒫_ K)f(u_∥|x+x_ HK),where𝒫_ K=2H(x,a)/2H(x,a)+H(x+x_ HK,a),f(u_∥|x) =a/π H(x,a)e^-u_∥^2/a^2+(x-u_∥)^2.If a uniform random number ξ (0≤ξ≤1) is selected and found to be smaller than 𝒫_ K, the photon is scattered through the K transition; otherwise, it is scattered through the H transition. The algorithm developed by <cit.> is used to obtain a random parallel velocity component of the scattering atom, given a specific transition type.The scattering phase function is given by𝒫(cosθ) =3/4E_1(cos^2θ+1)+(1-E_1)as for Lyα. Here, E_1 is the function of frequency given by E_1=2(ν-ν_ K)(ν-ν_ H)+(ν-ν_ H)^2/(ν-ν_ K)^2+2(ν-ν_ H)^2 =2x(x+x_ KH)+(x+x_ KH)^2/x^2+2(x+x_ KH)^2.The parameter E_1 is 1/2 for the K and 0 for the H transition. Random numbers for the scattering angle θ are obtained following the method described by <cit.>. In this study, we will not address the calculation of polarization for the Mg2 resonance lines, although LaRT has the capability to perform such calculations.To examine optically thin or moderately thick cases similar to those encountered in the Mg2 lines, the first forced-scattering algorithm was implemented in LaRT. In regions with low optical depth, most photon packets will escape from the system without interactions, resulting in poor statistics of the scattered light. 
The technique of forced scattering offers a solution to overcome the poor statistics in such optically thin media. The forced scattering technique has been incorporated into the majority of Monte Carlo dust RT codes <cit.>. This technique employs a photon weight w that is initially set to 1. Instead of sampling from the standard exponential probability density function (PDF) p(τ)=exp(-τ), a random optical depth τ is generated by following a "truncated" exponential distribution. This distribution is truncated at the optical depth τ_path, which is calculated from the current position of the photon to the system boundary along its trajectory, as follows: p(τ)=e^{-τ}(1-e^{-τ_path})^{-1} for τ≤τ_path, and p(τ)=0 for τ>τ_path. A random optical depth is sampled as follows: τ=-ln[1-ξ(1-e^{-τ_path})], where ξ is a uniform random number between 0 and 1. The truncated PDF guarantees that an interaction occurs before the photon exits the system. To compensate for this biasing of the PDF, the photon weight w is reduced to w'=w(1-e^{-τ_path}), where w and w' are the photon weights before and after the scattering, respectively. <cit.> forced the scattering until the photon weight becomes lower than a predefined critical value. In the present study, forcing is limited only to the first scattering, while subsequent scattering is carried out using the standard exponential PDF without a cutoff.
§.§ Column Density, Optical Depth, and Doppler Parameter
The column density of Mg^+ can be expressed in terms of the hydrogen column density N_H, as done by <cit.>: N_Mg^+=3.08×10^13 (N_Mg^+/N_Mg)(δ_Mg/0.426)[(N_Mg/N_O)/0.0813] ×[(N_O/N_H)/8.91×10^-5](N_H/10^19 cm^-2) cm^-2. Unlike the Mg/H abundance ratio, the O/H ratio is more readily observable. Therefore, the column density of Mg^+ is parameterized based on the O abundance. The abundance ratio between Mg and O is expected to be similar to that of the Sun <cit.> and does not significantly change because both are primarily produced by core-collapse supernovae <cit.>. The gas phase abundance of Mg is depleted onto dust and reduced by a factor of δ_Mg∼0.426 (corresponding to ∼0.37 dex) in the warm neutral medium (WNM) with a temperature of ∼10^4 K <cit.>. In this paper, we use the O/H of a LyC leaker J1503+3644 (hereafter J1503) from <cit.>, which is ∼1/5 of the solar abundance, to parameterize the Mg/H ratio. We note that the mean O/H abundance ratio of eleven LyC emitters at z∼0.3-0.4, as discussed in <cit.>, is ∼8.13×10^-5, which is consistent with the value adopted in this study.
<cit.> estimated the width of the Mg2 emission line profile in J1503 to be ∼90 km s^-1 after subtracting the instrumental effects. The full width at half maximum (FWHM) of Mg2 2796 from green pea galaxies, as reported by <cit.>, was found to range from approximately 100 km s^-1 to 300 km s^-1, corresponding to b≈60-180 km s^-1. Their research suggests that the resonance scattering effect has a relatively minor impact, implying that the observed line widths are primarily attributed to gas motion, including both ISM turbulence and galactic rotation. <cit.> demonstrated that the prescription of incorporating the turbulent motion into the thermal motion provides an excellent method for predicting the Lyα emergent spectrum from a turbulent medium in both cases of microturbulence and macroturbulence. The results would certainly be applicable to the Mg2 lines. This study, therefore, assumes a Doppler width parameter of b=90 km s^-1 for the Mg^+ gas unless stated otherwise.
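Returning to the forced first-scattering step described in the previous subsection, the truncated-exponential sampling and the associated weight update may be written as the following sketch (again with illustrative names; the actual LaRT routines may differ):

import numpy as np

def forced_first_scattering(tau_path, rng):
    """Draw an optical depth from the exponential PDF truncated at tau_path,
    tau = -ln[1 - xi (1 - exp(-tau_path))], and return the factor (1 - e^{-tau_path})
    by which the photon weight w is multiplied to compensate for the biased PDF."""
    xi = rng.random()
    tau = -np.log(1.0 - xi * (1.0 - np.exp(-tau_path)))
    weight_factor = 1.0 - np.exp(-tau_path)
    return tau, weight_factor

# Example usage: rng = np.random.default_rng(); tau, f = forced_first_scattering(0.5, rng)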
The initial line profile adopted in all our models is assumed to be the Voigt function, with a line width determined by the same Doppler parameter b=90 km s^-1. Unresolved turbulent motion will give rise to an effect equivalent to producing an initial line profile corresponding to the gas motion. Although not explicitly presented in the paper, additional models were computed using b=15 km s^-1 (equivalent to T∼3.3×10^5 K) and yielded comparable results to those presented.
The optical depth τ_0 of the Mg^+ gas is then given by τ_0 ≡ (χ_K/Δν_D) N_Mg^+ ϕ_x(0) ≃ χ_K N_Mg^+/(√(π)Δν_D) = 0.871 (90 km s^-1/b)(N_Mg^+/3.08×10^13 cm^-2), where ϕ_x=H(x,a)/√(π) is the normalized Voigt profile and H(0,a)≃1. This definition of τ_0 refers to the monochromatic optical depth measured at the line center (x=0) of the K line. The optical depth at the H line center is τ_0/2. In this study, the optical depth varies in the range of τ_0≈3×10^-3-10^3 and the column density N_Mg^+≈10^11-3×10^16 cm^-2.
It should be noted that in <cit.>, the optical depth is defined as that integrated over the H line profile. The integrated optical depth (τ_*) for the H line is related to the monochromatic one at the K line center by τ_*≃√(π)τ_0/2 <cit.>. Furthermore, it is important to note that, in their Eq. (12), they assumed a Doppler parameter of b=1 km s^-1. Consequently, their column density is 90 times lower than ours for a given optical depth.
§.§ Dust Extinction
For photons that can be resonantly trapped, such as Lyα and Mg2, the influence of dust can be considerably amplified in comparison to non-resonance line photons, such as [O3] λ5008. Resonantly trapped photons will travel significantly greater distances before escaping the medium compared to non-resonance photons. As a result, they may experience significantly higher opacity due to dust. In <cit.>, it was assumed that dust only absorbs photons without scattering them. However, the present study also considers the scattering of photons by dust.
The dust effect is examined by assuming the properties of mean Milky Way dust. The dust scattering albedo and asymmetry factor near the wavelength of Mg2 are a=0.57 and g=0.55, respectively <cit.>. The scattering angle is sampled following the Henyey-Greenstein phase function with the asymmetry factor g <cit.>. The dust extinction cross section per hydrogen atom is σ_ext/H≃1×10^-21 cm^2 at the wavelength of Mg2. The quantity of dust is assumed to be proportional to the Mg abundance in the same manner as in the Milky Way. The dust extinction optical depth is then given by τ_dust=1.73×10^-3 (σ_ext/H/1×10^-21 cm^2)[1.78×10^-5/(N_Mg/N_H)] ×(N_Mg/N_Mg^+)(N_Mg^+/3.08×10^13 cm^-2). Here, we adopt the Mg abundance of Mg/H=1.78×10^-5, measured in the WNM of our Galaxy <cit.>. In the present models, the dust extinction optical depth varies from τ_dust∼6×10^-6 to ∼2. It is, therefore, anticipated that the impact of dust extinction would be significant only when the Mg^+ column density reaches ≳10^15 cm^-2. Mg2 lines observed from compact galaxies akin to J1503 would likely not have experienced substantial dust attenuation effects, as will be elaborated upon later.
§.§ Scattering Medium
This paper explores two fundamental geometries (sphere and cylinder). The first model to be examined assumes a spherically symmetric medium, which expands radially with a constant density.
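Before specifying the geometries in detail, the optical-depth scalings given above can be collected into small helper functions. The sketch below simply encodes the expressions for τ_0 and τ_dust under the abundance and dust assumptions stated in the text (the function names and default arguments are illustrative):

def tau0_K(N_MgII, b_kms=90.0):
    """Monochromatic optical depth at the Mg2 K line center,
    tau_0 = 0.871 (90 km/s / b) (N_Mg+ / 3.08e13 cm^-2)."""
    return 0.871 * (90.0 / b_kms) * (N_MgII / 3.08e13)

def tau_dust(N_MgII, ion_fraction=1.0, Mg_over_H=1.78e-5, sigma_ext_H=1e-21):
    """Dust extinction optical depth scaled to the Mg+ column density;
    ion_fraction = N_Mg+/N_Mg, and sigma_ext_H is in cm^2 per hydrogen atom."""
    return (1.73e-3 * (sigma_ext_H / 1e-21) * (1.78e-5 / Mg_over_H)
            / ion_fraction * (N_MgII / 3.08e13))

# For instance, N_Mg+ = 3.08e13 cm^-2 and b = 90 km/s give tau_0 ~ 0.87 and tau_dust ~ 1.7e-3.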
The radial velocity of a fluid element at a distance r from the center is assumed to be V(r)=V_max (r/r_max), where r_max and V_max are the maximum radius and the velocity at r=r_max, respectively. In this Hubble-like expansion model, the maximum velocity varies from V_max=0 to 300 km s^-1. The optical depth τ_0 and column density N_Mg^+ are measured radially from the center of the sphere to the outer boundary.
In the second model, the scattering medium has a cylindrical shape with a height of 2H and a radius of R_cyl, as illustrated in Figure <ref>. A medium characterized by a low height-to-radius ratio H/R_cyl≲0.1 can be regarded as a disk galaxy, while a medium with a high H/R_cyl≈1 may be considered to represent a relatively round galaxy. Virtual observers are assumed to observe the system at various inclination angles (or, equivalently, viewing angles), denoted as β_inc, spanning from 0^∘ to 90^∘. A face-on galaxy corresponds to an inclination angle of β_inc=0^∘, while an edge-on disk galaxy corresponds to an inclination angle of β_inc=90^∘. It is assumed that the density of the medium is constant. The optical depth τ_0 is measured along the height direction of the cylinder from the center to the boundary; hence, the total optical depth is 2τ_0 when observed face-on.
In the spherical model, photons are emitted from the center unless otherwise specified. In contrast, photons are emitted uniformly throughout the volume in the cylindrical model. Additional calculations were performed for cases in which photons originate from the center of the cylinder. These cases yielded similar results, although they are not presented in this paper.
§.§ Definition of Equivalent Widths
Not only will emission line photons be resonantly scattered by Mg^+ gas, but continuum photons near the Mg2 lines will also undergo the same scattering process. Resonance scattering of the stellar continuum produces both emission line-like features and absorption lines. In order to investigate this continuum effect, the equivalent widths (EWs) for the emission (W^e) and absorption (W^a) are calculated for both the K (2796) and H (2803) lines as follows: W_2796, 2803^e=∫_F_λ>F_0 (1-F_λ/F_0) dλ and W_2796, 2803^a=∫_F_λ<F_0 (1-F_λ/F_0) dλ, where F_0 is the initial, flat continuum spectrum. Therefore, the emission EWs have negative values, while the absorption EWs are positive. In this paper, even though an emission EW has a negative value, the terms `high' and `low' will be employed to indicate the magnitude of its absolute value.
§ RESULTS
This section begins by describing the doublet ratio and escape fraction of the Mg2 emission line calculated for the spherical and cylindrical models. In the cylindrical model, the dependencies of the doublet ratio and escape fraction on the observer's viewing angle are also explored. Following that, the emission and absorption features that arise from resonance scattering of the stellar continuum are discussed in both geometries. Lastly, this section also explores the spatial variation of the Mg2 doublet ratio in the spherical models.
§.§ Spherical Model - Line Emission
Some example spectra obtained from the spherical models are shown in Figure <ref>. The figure shows spectra for a static medium (left panel) and an expanding medium (right panel), for various column densities. The peaks of the spectra are normalized to unity for ease of comparing spectral shapes.
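As a side note on bookkeeping, the emission and absorption EWs quoted throughout the Results follow directly from the definitions given in the previous section; a minimal sketch of how such quantities can be evaluated from a tabulated emergent spectrum (hypothetical array names, purely illustrative) is:

import numpy as np

def equivalent_widths(wavelength, F_lambda, F0):
    """Emission (negative) and absorption (positive) equivalent widths,
    W^e = int_{F>F0} (1 - F/F0) dlambda and W^a = int_{F<F0} (1 - F/F0) dlambda."""
    integrand = 1.0 - F_lambda / F0
    W_e = np.trapz(np.where(F_lambda > F0, integrand, 0.0), wavelength)
    W_a = np.trapz(np.where(F_lambda < F0, integrand, 0.0), wavelength)
    return W_e, W_a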
In relatively optically thin (τ_0≲3) and static (V_exp=0) media, the predicted line profiles of Mg2 emission lines do not exhibit double peaks (equivalently, there is no absorption at the line center) or show a significant resonance scattering signature. When τ_0≳3 (N_Mg^+≳10^14 cm^-2), the Mg2 line shape starts exhibiting double peaks in both the K and H lines. As the expansion velocity increases, the double-peak feature disappears even in models with high column density. The right panel illustrates the complete disappearance of double peaks in the spectra for cases with the maximum expansion velocity of V_exp=300 km s^-1, irrespective of the gas column density.
Notably, K-line photons undergo more scatterings than H-line photons, resulting in a broader peak separation (or line width) of the double peak and a more significant wavelength shift in the K line compared to the H line. In the static models (left panel), for example, when N_Mg^+=10^16 cm^-2, the peak separation of the K line is 3.72Å (equivalent to 400 km s^-1 in velocity), whereas that of the H line is 3.36Å (361 km s^-1). In the right panel (expanding medium), the wavelength (velocity) shifts of the K and H lines in the model with N_Mg^+=10^16 cm^-2 are 1.41Å (151 km s^-1) and 1.23Å (132 km s^-1), respectively. These differences mainly emerge within the relatively central region around the source, where most scattering events take place. In the outer region, photons undergo fewer resonance scatterings because their wavelengths have already substantially shifted away from the central wavelength. For instance, the frequency shift in the central region of an expanding medium becomes more pronounced for K-line photons because they experience a larger number of scatterings than H-line photons. As discussed in Section <ref>, this effect can, in some cases, lead to fewer scatterings of K-line photons in the outer region than of H-line photons, ultimately resulting in spatial variation in the doublet flux ratio.
In the presence of dust, the spectral shapes change only slightly, except for a variation in the flux level. However, the dust effect is appreciable only when N_Mg^+≳10^15 cm^-2 in a static medium. Therefore, for models that include dust, Figure <ref> shows only the spectra for N_Mg^+=10^16 cm^-2. When V_exp=0 km s^-1 (300 km s^-1), the total fluxes of dusty models are reduced by factors of 0.99, 0.92, 0.73, 0.31, and 0.05 (0.99, 0.96, 0.88, 0.62, and 0.23) for N_Mg^+=3×10^14, 10^15, 3×10^15, 10^16, and 3×10^16 cm^-2, respectively, compared to the models without dust. In the figure, dashed lines represent the model spectra calculated for N_Mg^+=10^16 cm^-2. The overall spectral shape of the case with dust resembles that of the case without dust. Nevertheless, there is a significant increase in flux attenuation near the line center, where resonance trapping is most pronounced. In addition, the K line experiences stronger dust attenuation than the H line, which slightly enhances the H line in the normalized spectra presented in the figure.
The spectrum of the model with V_ exp=300 km s^-1 exhibits a slightly more elongated tail due to stronger suppression at the line center than in the wing.It is important to note that in spherically symmetric models, the estimated doublet flux ratio R=F_2796/F_2803, averaged over all lines of sight, is always equal to the optically thin value of 2 when there is no dust in the medium, except in cases where mixing of the K and H lines due to fluid motion and line emission due to continuum pumping occur. The escape fraction also remains at 100% due to the absence of photon loss. The variation in the doublet ratio and escape fraction occurs only in the presence of dust.Figure <ref> shows the doublet ratio (R, top panel) and escape fraction (f_ esc, bottom panel) as functions of the column density of Mg^+ gas for the spherical models in the presence of dust grains. In models with N_Mg^+≳10^16 cm^-2 and V_ exp≳100 km s^-1, the K and H lines are found to merge. In this case, the two lines were considered to be separated at the wavelength corresponding to the minimum flux, and the line fluxes were calculated for wavelengths less than or greater than the wavelength of the minimum flux. The doublet flux ratio and escape fraction both show a decrease with increasing N_Mg^+ for a given V_ exp. They also decrease in general when V_ exp decreases for a given N_Mg^+. However, the doublet ratio R of the model with V_ exp=300 km s^-1 and N_Mg^+=10^16 cm^-2 shows an abrupt drop, deviating from the trend in other models. This drop is attributed to the transfer of some of the K line flux to the H line. It is clear that the maximum deviations from the dust-free case are found in the static medium with V_ max=0 km s^-1. As expected, optically thin or moderate cases (τ_0≲30, N_Mg^+≲10^15 cm^-2, τ_ dust≲0.06) show no significant reduction in the doublet ratio and escape fraction due to the presence of dust. The effects of dust become appreciable only in optically thicker cases. This result indicates that the doublet ratios (R<2) found in compact star-forming galaxies, such as J1503, which exhibits a median ratio of R≃1.7 and is supposed to have a low Mg^+ column density of <10^15 cm^-2, cannot be explained solely by pure dust attenuation in spherical models. Therefore, we must consider the effects arising from non-spherical geometry and continuum effects, as described in the following sections.§.§ Cylindrical Model - Line Emission Figure <ref> shows examples of spectra obtained for nine combinations of the optical depth (τ_0=1, 5, and 10) and the height-to-radius ratio (H/R_ cyl=0.1, 0.5, and 1.0) when observing the cylindrical models at various inclination angles (β_ inc=0^∘, 30^∘, 45^∘, 60^∘, 75^∘, and 90^∘). The initial spectrum, expected when there is no scattering, is also shown as the dotted line in the figure. The doublet ratio (R=F_2796/F_2803) for each model is also denoted within parentheses in the figure.There are several noteworthy features in Figure <ref>. (1) The flux in a disk-like cylinder (H/R_ cyl<1) is enhanced when viewed face-on, compared to the initial input flux, while it is reduced in the edge-on view. (2) This variation in flux as a function of the viewing angle tends to be larger in a flatter disk with a smaller H/R_ cyl ratio. In contrast, when H/R_ cyl≈1, the line flux becomes relatively independent of the inclination angle because the system would approximately approach a sphere. 
We also note that, in the models with H/R_cyl=1, the line flux is minimized when measured at β_inc=0^∘, as this direction corresponds to the maximum optical depths across the entire projected area. (3) The K line begins to show double peaks when τ_0≳5, whereas the H line exhibits double peaks at a higher optical depth, τ_0≳10. This is because the K line becomes optically thick first as τ_0 increases. (4) The double peaks in a flat disk model begin to appear at a lower optical depth than in a round system. This is because τ_0 is defined to be measured along the vertical direction of the cylinder; as a result, the optical depth along the radial direction and the total amount of material are proportional to the inverse of the height-to-radius ratio (R_cyl/H) for a fixed τ_0. (5) The doublet ratio (shown within parentheses) varies depending on the viewing angle. It can even be larger than the intrinsic value of 2, particularly when a flat disk system (H/R_cyl=0.1) is observed face-on. The change in the ratio is more significant for smaller H/R_cyl values, whereas it is negligible for a round model with H/R_cyl≈1.
The following investigates the properties of the cylindrical model mentioned above, including the variation of the doublet ratio and escaping flux with viewing angle and the influence of H/R_cyl, in more detail. Figure <ref> shows the variation of the doublet ratio R, in the absence of dust, as a function of the inclination angle β_inc for various combinations of the height-to-radius ratio H/R_cyl and optical depth τ_0. It is noticeable that the doublet ratio is more or less close to the optically thin value of R=2 when the medium is round (H/R_cyl≳0.5). However, when the medium is disk-like (H/R_cyl≲0.5), and relatively optically thin or moderate (0.1≲τ_0≲10, 4×10^12 cm^-2 ≲ N_Mg^+ ≲ 4×10^14 cm^-2), the ratio deviates significantly from R=2. In these cases, the ratio R becomes lower than 2 when viewed edge-on (β_inc≳60^∘), while it becomes greater than 2 when viewed face-on (β_inc≲60^∘). In the optically thick cases, R is always close to the optically thin value of R=2, irrespective of the inclination angle and height-to-radius ratio. In very optically thin cases (τ_0<0.1), R is close to 2 when β_inc≲75^∘ and becomes lower when viewed edge-on (β_inc≳75^∘).
Figure <ref> shows the variation of the doublet ratio R when dust is present in the medium. No appreciable dust effect is found in the optically thin or moderate cases (τ_0≲10). When τ_0≳10 (N_Mg^+≳4×10^14 cm^-2), the doublet ratio is slightly lower compared to the case with no dust. However, it is worth mentioning that even at the highest optical depth, the impact of dust is not substantial; it results in only a slight decrease in R to 1.8. Once again, this finding indicates that the influence of dust is unlikely to be sufficient to produce a doublet ratio of R∼1.7 or even lower in compact star-forming galaxies with τ_0<10. Instead, such low doublet ratios can be explained when the galaxies are geometrically thin disks (H/R_cyl≲0.1) viewed edge-on (β_inc≳80^∘) or contain large and relatively flat Mg^+ gas clouds situated edge-on.
Figure <ref> presents the variation of the escape fraction of Mg2 as a function of the inclination angle for various combinations of H/R_cyl and τ_0, in the absence of dust. At first glance, it is surprising that the escape fraction can exceed 1 for face-on disk-like media with H/R_cyl≲0.75 and β_inc≲60^∘, especially when τ_0≳0.1.
On the other hand, when viewed edge-on (β_inc≳60^∘), the optically thick media yield f_esc<1. The deviation from f_esc=1 becomes increasingly significant with increasing optical depth. The escape fraction of Mg2 in the presence of dust is shown in Figure <ref>. When the medium is optically thin (τ_0≲10), the results are consistent with those shown in the absence of dust. The escape fraction decreases significantly due to dust only in the optically thick cases (τ_0≳10). In particular, when τ_0≳500, the escape fraction consistently falls below 50%, irrespective of H/R_cyl and β_inc.
It is now discussed why resonance scattering in an asymmetric medium results in rather unexpected doublet flux ratios of R>2 and escape fractions exceeding 1 (f_esc>1). Three schematic diagrams in Figure <ref> illustrate various scattering processes occurring in a relatively flat cylindrical model, depending on the medium's optical depth. In a medium that is optically thin (τ_0≪1) in both the vertical and radial directions, as illustrated in Figure <ref>(a), photons will undergo few scatterings, and therefore, the doublet ratio and escape fraction are not altered significantly from their intrinsic values. Figure <ref>(b) shows a case where the optical depth along the vertical direction is τ_0≈1. In this situation, photons tend to escape preferentially in the vertical direction due to the lower optical thickness. Photons originating deep within the medium find it easier to escape vertically rather than radially. Only a limited number of photons originating near the boundaries (marked in gray) can manage to escape radially. Consequently, when observed face-on, the escape fraction can exceed 1; however, in an edge-on orientation, it is less than 1. Moreover, K-line photons encounter an optical depth twice as high as that of H-line photons, leading to an increased probability of scattering for K-line photons. This difference in optical depth causes more K-line photons to escape through scattering in the vertical direction than H-line photons do. As a result, when viewed face-on, the doublet ratio R appears higher than 2, whereas it appears lower than 2 when viewed edge-on. In an optically very thick medium (τ_0≫1), both K- and H-line photons undergo multiple scattering and become trapped within the inner region, represented in white in Figure <ref>(c). Within this inner region, the radiation field is more or less isotropic, and the doublet ratio will remain at its intrinsic ratio of 2 (when there is no dust). Once photons reach the outer regions, shown as a blueish area for K-line photons and a reddish area for H-line photons in Figure <ref>(c), they will escape after a single scattering on average. Consequently, the tendency for more K-line photons to escape than H-line photons in the vertical direction disappears, resulting in a doublet ratio of R≈2 in both directions. However, the probability of vertical escape would be much higher, as photons need to undergo more scatterings to be transferred radially than vertically. In addition, the spectrum escaping in the radial direction will be considerably broader than that escaping vertically, as shown in Figure <ref> (for example, refer to the top right panel with H/R_cyl=0.1 and τ_0=10). This effect is due to the significantly higher number of scatterings required to escape in that direction.
If dust is present in the medium, it is evident that K-line photons will be more readily absorbed than H-line photons.
Therefore, in an optically thick medium with dust, the doublet ratio and escape fraction would be less than their intrinsic values, irrespective of the height-to-radius ratio and inclination angle. However, this dust effect is negligible in an optically thin medium.
§.§ Spherical Model - Continuum
Figure <ref> shows example spectra calculated with the same parameters as in Figure <ref>, except that a constant continuum spectrum was used in this figure. The figure also shows the EWs of the Mg2 2796 line in the parentheses of the legend. In static media (V_exp=0, left panel), the spectra show double peaks and absorption features caused by resonance scatterings near the line center. As expected, the absorption depth and the emission height increase as the Mg^+ column density increases. The spectra of expanding media in the right panel show well-known P-Cygni profiles with blueshifted absorption and redshifted emission features. The figure also compares the spectra with and without dust for the model with N_Mg^+=1×10^16 cm^-2. It is noticeable that the presence of dust does not significantly alter the absorption line shape and depth; however, the emission line strength is substantially reduced by dust.
It should be noted that even in the highest column density models, the spectra are not entirely carved out at the Mg2 line centers. This phenomenon is attributed to the "filling-in" effect caused by resonance scattering. The filling-in of the resonance absorption feature by the resonance scattering itself was also discussed in <cit.>. If there were no filling-in effect, the continuum near Mg2 for models with N_Mg^+≳3×10^14 cm^-2 (τ_0≳9) would have been completely removed.
Figure <ref> demonstrates the impact of the filling-in effect on the EWs of absorption and emission features by decomposing the spectra obtained from various models into direct and scattered components. In a relatively optically thin static medium with τ_0≲1, as shown in the upper right panel, most of the absorption line feature (denoted in red) is compensated by the emission line (in blue), and no significant line features are evident in the final spectrum. However, as the optical depth increases (second panel) or the medium expands (third panel), the absorption line is not entirely filled, and the absorption and emission lines begin to be separated. Resonance scattering causes a diffusion in wavelength space, resulting in a broader emission profile compared to the absorption profile, as shown in the first and second panels. The broadening caused by scattering becomes more noticeable as the optical depth increases, leading to a mismatch in the compensation between absorption and emission profiles. Expansion of the medium, as shown in the third and fourth panels, also yields a discrepancy between the absorption and emission profiles by blueshifting the absorption and redshifting the emission features. Contraction of the medium would also give rise to a similar mismatching effect, but by redshifting the absorption and blueshifting the emission features. This mismatch between the absorption and emission line profiles manifests as the appearance of the absorption and emission features in continuum spectra.
The last panel of Figure <ref> illustrates that if the dust optical depth is high enough, dust absorption and scattering also play significant roles in the continuum level and, thus, the EWs of absorption and emission features.
In the figure, to distinguish between the resonantly scattered and dust-scattered components, the dust-scattered spectrum, shown by the dashed green line (F_scatt^dust), is obtained by collecting photons that underwent dust scattering at the last scattering event before they escaped. The continuum outside the Mg2 lines is affected by dust absorption and scattering but not by resonant scattering. In other words, the continuum denoted in blue in the figure is solely due to dust scattering. The dust-scattered continuum makes up approximately 27% of the total continuum level in this example. On the other hand, near the Mg2 lines, the resonance scattering predominantly gives rise to both the absorption and emission features. The filling-in of the absorption troughs is primarily due to resonant scattering, as evidenced by the lack of or minimal presence of the dust-scattered component in those regions. Regarding the emission line feature, at first glance a significant portion of it appears to be attributable to dust scattering, as indicated by the green dashed line. However, this component arises from photons that experienced resonant scatterings (and additional dust scatterings) before ultimately escaping through dust scattering. Photons are unlikely to undergo only pure dust scattering without being resonantly scattered (or such cases would be extremely rare).
The extinction of continuum photons reduces the continuum level, while scattering by dust from other directions into the line of sight enhances the continuum. Consequently, the final continuum level becomes higher than would be expected if the dust scattering effect were ignored. Neglecting this enhancement due to dust scattering could lead to overestimating emission EWs and underestimating absorption EWs. Indeed, a substantial portion of the UV radiation in our Galaxy and external galaxies is known to originate from dust scattering of starlight <cit.>. Nevertheless, the dust scattering effect is expected to be relatively weak in compact star-forming galaxies due to their low column density of Mg^+ and, consequently, low dust optical depth.
Figure <ref> displays the ratio between the EWs of the K and H lines for both emission and absorption, plotted as a function of the K line EW, calculated in spherical models. The left panel shows the results in the case of no dust, while the right panel displays the results when dust is included in the model. The column density of the medium varies from N_Mg^+=3×10^13 cm^-2 to 3×10^16 cm^-2 and is represented by symbol size. Different symbols and colors are used to denote the expansion velocities of the medium.
In the absence of dust, the EW of the K line increases in both the absorption (W_2796^a) and emission (|W_2796^e|) components as the column density of Mg^+ increases. This trend results from the increased number of resonance scatterings. The EW ratio, defined as the ratio of the EW of the K line to that of the H line, shows a decreasing trend and eventually tends to approach a constant value (≈1) with increasing N_Mg^+. The decreasing trend in the EW ratio can be qualitatively understood based on the curve-of-growth theory utilized to analyze pure absorption lines, without filling-in by scattering. In other words, the curve of growth corresponds to situations where there is no filling-in, and the emission and absorption profiles are distinct.
The EW of a pure absorption line increases linearly in the optically thin regime with increasing optical depth and then enters the flat portion of the curve of growth, increasing very slowly as the line core saturates <cit.>. In the linear regime, the EW ratio of a doublet is approximately equal to the ratio of their oscillator strengths (∼2 for Mg2), and it subsequently decreases with increasing optical depth, becoming approximately unity in the saturated regime. However, in the present case, absorption lines are filled in by scattering. Nevertheless, as optical depth increases, the emission component begins to be separated from the absorption component. Then, the EWs will eventually tend to be close to those of pure absorption lines. Therefore, qualitatively similar trends are found, even in the cases discussed in the present study.One would expect the EW of absorption to ideally match that of emission in a spherical model, unless the fluid velocity field and line broadening cause a mixing of the K-line emission with the H-line absorption (or a mixing of the K-line absorption with the H-line emission in an infalling medium). Consequently, the EW ratio for absorption (W_2796^ a/W_2803^ a) should be approximately equal to that for emission (W_2796^ e/W_2803^ e), as confirmed in the left panel of Figure <ref>. The figure shows an exception in the model with the fastest expansion velocity of V_ exp=300 km s^-1. In this case, the K-line emission is redshifted, filling in a portion of the H-line absorption trough (see the lower panel of Figure <ref>). The transfer of the K-line emission flux leads to decreases in both |W_2796^ e| and W_2803^ a, resulting in a decrease of W_2796^ e/W_2803^ e and an increase of W_2796^ a/W_2803^ a. This trend causes an asymmetry of the EW ratio plot around W_ 2796=0, as indicated by purple cross symbols (left panel).The EW ratios for both absorption and emission lie within the range of 1 to 2, except in cases of a static medium with a relatively low column density of N_Mg^+≲3×10^14 cm^-2, where the ratios exceed 2. The figure also presents the prediction by the curve of growth, illustrated by the thick gray line. In the figure, the EW ratios are generally lower than those predicted by the curve of growth. This result is because the K line has a higher optical depth, and thus, its emission and absorption profiles are more easily mixed up than those of the H line. It is also noteworthy that models with a higher expansion velocity, but not too fast, tend to closely align with the curve of growth, particularly in the case of low column densities (and low |W_2796|). This is because expanding media produce relatively well-distinct absorption and emission line profiles. In optically thick cases, the EW ratios agree with the curve of growth theory once again because the absorption and emission profiles are well separated. However, in optically thin and static media, the emission and absorption profiles are well mixed (the first panel of Figure <ref>), significantly departing from the theory. The H line has more well-mixed absorption and emission components in these cases than the K line due to its lower optical depth. This difference yields fairly large EW ratios.The right panel of Figure <ref> shows the variation of the EW ratio in the presence of dust. Dust destroys continuum photons near Mg2 line centers more effectively than those far from the lines due to the trapping by multiple resonance scattering. 
This effect increases the absorption line depth and reduces the emission line strength. However, attenuation of the continuum by dust tends to restore the strength of the absorption EW, making its reduction less significant. As a result, the emission EWs are substantially reduced for high column density models with N_Mg^+≳3×10^15 cm^-2, while the absorption EWs are less altered, as seen in Figure <ref> (see also the bottom panel of Figure <ref>). Figure <ref> compares the EWs calculated before and after dust is included and clearly shows that the emission EWs are more significantly affected by dust than the absorption EWs. Thus, the EW ratios for absorption lines remain more or less unaltered, whereas those for emission lines are significantly changed. Consequently, the right panel of Figure <ref> shows highly asymmetric patterns caused by a substantial reduction of W_2796^e but no significant change of W_2796^a.
§.§ Cylindrical Model - Continuum
Figure <ref> shows example spectra obtained from resonance scattering of a flat continuum in cylindrical models. The figure illustrates the cases with τ_0=1, 5, and 10, and H/R_cyl=0.1, 0.5, and 1.0, which are the same as those in Figure <ref>. Dust is assumed to be absent. The net EWs for the K line, defined as the sum of emission and absorption EWs (W_2796^e+W_2796^a), are also shown in parentheses. Surprisingly, unlike the spherical models, these non-spherical models can give rise to pure absorption or pure emission spectra, depending on the height-to-radius ratio and the inclination angle. In an edge-on view, the spectra tend to show pure absorption, particularly in optically thin and geometrically thin models. On the other hand, in a face-on view, the spectra exhibit pure emission, except for the models with H/R_cyl≈1. These properties can be understood as in Figure <ref>. Photons will easily escape in the vertical direction while they experience resonance scatterings. In contrast, photons scattered radially will have to undergo much more scattering before escaping, and thus, few photons will escape in the radial direction. These trends give rise to pure absorption spectra in an edge-on view, and pure emission spectra in a face-on view.
The absorption and emission fluxes both reach their minima at β_inc≈60^∘ (denoted in green) for a given H/R_cyl and τ_0, indicating that they are effectively mixed and canceled. Similar to the intrinsic emission line model discussed in Section <ref>, spectra obtained from a round cylinder (H/R_cyl=1) are relatively insensitive to variations in the viewing angle. In addition, both the absorption depths and the emission peaks of these spectra are smaller than those of the flatter models. In the bottom panels (H/R_cyl=1), the spectra observed at β_inc=90^∘ appear slightly different from those obtained at other angles. This difference occurs because, at this particular angle, the column density of Mg^+ gas is highest, resulting in the strongest absorption. In the presence of dust, dust scattering and absorption effects occur similarly to those in the spherical model; hence, this paper does not present the results.
The EW of the Mg2 K line, in the absence of dust, is shown as a function of β_inc for various combinations of H/R_cyl and τ_0 in Figure <ref>. The figure illustrates that the EW for emission is highest when viewed face-on and with the lowest H/R_cyl ratio, while the EW for absorption is highest in the edge-on view (with the lowest H/R_cyl).
The highest achievable EW for emission in the parameter space studied in this paper is |W_2796^e|≈6.5 Å, while the highest EW for absorption reaches W_2796^a≈6 Å. When the medium becomes rounder, both absorption and emission EWs are confined within the range of 0≲|W_2796^e,a|≲2 Å. As H/R_cyl approaches 1, the EWs tend to be independent of the viewing angle due to the system's increased sphericity, as discussed in relation to Figure <ref>. For a fixed H/R_cyl and β_inc, the EWs for both absorption and emission increase as τ_0 increases. This is not only because absorption increases with higher optical depth but also because the absorption and emission profiles become more distinct at greater optical depths.
In the presence of dust, the absorption and emission EWs of the K line are presented as a function of β_inc in Figure <ref>. The figure shows decreases in emission EWs by dust, as compared to Figure <ref>. Dust attenuation causes a reduction in the continuum level, which would increase the emission EW if the emission line's strength remained constant. However, dust more effectively destroys photons near the Mg2 resonance wavelengths because their path lengths are elongated by resonance trapping compared to those of continuum photons. This effect outweighs the reduction in the continuum, resulting in an overall decrease in the final emission EW. This effect becomes most noticeable when τ_0≳10. In particular, the emission EWs for the cases with the highest optical depths become lower than those with lower optical depths. Similar results were also found in the spherical models shown in Figure <ref>.
As opposed to the emission EWs, the absorption EWs generally tend to increase, except in some instances. As previously described for the spherical model, the presence of dust causes the absorption features to become deeper, leading to an overall enhancement in the absorption EW. However, this enhancement is less substantial than the decrease in the emission EW. In exceptional cases, when geometrically thin (but optically thick) flat cylinders are viewed edge-on (H/R_cyl≲0.25, τ_0≳50, and β_inc≳75^∘), the resonantly scattered photons escape quickly perpendicular to the line of sight before being destroyed by dust. Dust-scattered continuum photons will also escape predominantly in the vertical direction. Therefore, continuum photons experience significant extinction due to dust along the line of sight without being compensated by dust-scattered light. As a result, in these particular cases, the absorption EW decreases due to dust, contrary to the general trend. Finally, since the absorption EW increases when viewed face-on (β_inc≈0^∘), its inclination angle dependence in Figure <ref> is significantly reduced for the optically thick flat cylinders (H/R_cyl=0.01 and 0.1).
Figure <ref> compares the EWs of the Mg2 K line with those of the H line. In the figure, optical depths are denoted using different colors and symbols. The prediction by the curve of growth theory for pure absorption lines is also shown as a thick gray curve. In this curve, the emission EWs are assumed to be equal to the negative of the absorption EWs. It is noteworthy that three piecewise linear functions can represent their relationship well. When the EWs of the K and H lines are small (-0.45Å < W_2796 < 0.75Å), their relation is well described by W_2803=0.55W_2796, which is consistent with the prediction of the curve of growth in the optically thin limit, as shown in the last panel of the figure.
Outside of this regime, the equations representing the relationships between the EWs of the K and H lines are presented in the first panel. The curve of growth reproduces the best-fit linear functions and the simulation results within an error of at most 10% for the ranges W_2796^e≲-2.0Å and 2.2Å ≲ W_2796^a ≲ 7.3Å.
§.§ Spatial Variation of the Doublet Ratio in Spherical Model
Understanding the spatial variation of the doublet flux ratio is quite complex due to the differences in the line width and frequency shift of the K and H lines arising from the difference in the number of scatterings they experience. These differences between the K and H lines are primarily established in the central region near the source, where most scattering events occur. Consequently, the differences originating from the central region subsequently influence the number of scatterings occurring in the outer region, eventually affecting the doublet ratio in the outer region. The presence of dust further complicates the interpretation of results because dust scattering operates independently of wavelength, while resonance scattering depends strongly on the wavelength shift from the line center. In the following, Figure <ref> presents the results for the intrinsic Mg2 emission line source at the center of the spherical model. Figure <ref> shows the results for the Mg2 halo created by a continuum originating from a central source.
Figure <ref> shows the radial profiles of the surface brightness and the doublet ratio obtained for various models with a spherical geometry. The left and middle panels only show the results for the scattered light (without smoothing). The right panel shows the radial profiles of the doublet ratio, including both scattered and direct light, obtained after smoothing. In the right panel, to mimic observations, a two-dimensional Gaussian smoothing kernel with a standard deviation of 0.1 times the maximum radius was convolved with the peeled-off data. The surface brightness profiles that include the directly escaping component are the same as those shown in the first panel, except for a central peak at r=0 in low column-density models. In the figure, the solid lines with no circles represent static models with various column densities ranging from 10^14 to 10^16 cm^-2. The solid lines with circles denote expanding media, and the dotted lines represent the models with dust.
The direct component decreases with increasing column density, while the extended, scattered component in the outer region is enhanced. This property results from the increased number of scatterings and subsequent spatial dispersion of photons as the column density increases. In static media with N_Mg^+≲10^15 cm^-2, the surface brightness profiles, including the direct emission, show a strong peak at r=0. When convolved with the same Gaussian kernel as in the right panel, the peak produces slightly stronger and broader central bump-like features than those shown in the left panel of Figure <ref>. However, as the column density increases further (N_Mg^+≳10^16 cm^-2), no photons escape directly, and photons become trapped in the central region and undergo a large number of scatterings. In such a high optical depth medium, photons will escape the trapped region through a `single longest flight' or `excursion' when their frequencies are shifted to a critical frequency at which the optical depth of the medium is approximately unity, similar to the Lyα RT process <cit.>.
Consequently, the radial profile for N_Mg^+=10^16 cm^-2 shows a higher central peak and a slightly steeper slope in the outer region than that of N_Mg^+=10^15 cm^-2 because photons with frequencies that have been significantly shifted in the inner region are scattered less in the outer region.
In the case of an expanding medium, the central region is enhanced, and the outer region is lowered compared to the corresponding static medium. This is because most photons escape the system after undergoing many scatterings in the central region, where the medium expands slowly, while only a small fraction of photons is scattered in the outer region. This property arises from a reduction of the effective optical depth of expanding media in the outer region and thus is pronounced when V_exp is higher. In the presence of dust, as shown in the models expanding with V_exp=200 or 300 km s^-1, the brightness of the central region is reduced because a substantial fraction of core photons near the line center is absorbed by dust in the central region. It is found, unexpectedly, that the outer region becomes brighter than in a medium with no dust. This unexpected enhancement in the outer region brightness is caused by dust scattering, which acts independently of the photon frequency and, therefore, the fluid velocity. In contrast, when no dust is present, resonance scattering is the only process that creates an outer halo. However, resonance scattering rarely occurs in rapidly expanding outer regions due to the frequency mismatch. Consequently, the outer region in an expanding high-density medium becomes brighter when dust is present.
It has been previously mentioned that the doublet ratio is always 2 in spherical media with no dust when averaged over the whole system. However, it is found that the ratio can vary spatially, as seen in the middle and right panels of Figure <ref>. This variation is due to the difference in locations where K- and H-line photons are primarily scattered and the dependence of these locations on the optical depth.
In the middle panel, as the column density increases, the doublet ratio at the central region first decreases (N_Mg^+≲3×10^14 cm^-2) and then increases (N_Mg^+≳3×10^14 cm^-2). When N_Mg^+=10^14 cm^-2, most K-line photons are scattered once or twice, while H-line photons mostly escape without being scattered. Thus, the total flux of scattered H-line photons is much lower, and the surface brightness of the H-line halo drops very quickly with radius compared to the K-line halo, resulting in the doublet ratio in the halo always being larger than two and increasing with radius. In the model with N_Mg^+=3×10^14 cm^-2, K-line photons are multiply scattered, whereas H-line photons are singly scattered on average. Then, the K-line halo is more spatially extended than the H-line halo, leading to R<2 in the central region and R>2 in the outer region. When the column density is higher, both K- and H-line photons are multiply scattered, and their surface brightness profiles tend to become flat, with their ratio approaching R≈2. When the column density is even higher, K-line photons are trapped in a smaller region than H-line photons (but with the same optical thickness).
Thus, the central regions have a doublet ratio slightly larger than, but not too much larger than, 2.
In the right panel, the central dip with R<2 in the radial profile of the doublet ratio for static models with N_Mg^+≲3×10^14 cm^-2 is due to the contribution of direct emission, which has a doublet ratio given by the ratio between the K- and H-line fluxes directly escaping without being scattered, i.e., R_direct=2e^{-τ_0}/e^{-τ_0/2}=2e^{-τ_0/2}. In the central region, R is dominated by direct light (R≈R_direct) when N_Mg^+ is low, and thus it decreases with increasing N_Mg^+. However, it then increases when N_Mg^+≳3×10^14 cm^-2 because of the increasing contribution of scattered light. When N_Mg^+≳10^15 cm^-2 (τ_0≳30), the contribution of direct emission appears negligible, and scattered light determines the doublet ratio.
In the outer region (r≳0.25 in the figure), the doublet ratio tends to be higher than 2 when N_Mg^+<10^15 cm^-2. This is because, owing to their higher optical depth, K-line photons are more spatially extended than H-line photons in relatively optically thin models. However, the doublet ratio approaches R≈2 for optically thick models as both K- and H-line photons experience enough scatterings. Consequently, the doublet ratio in the outer region decreases with increasing N_Mg^+ until it approaches 2.
As the medium expands, the effective optical depth decreases, leading to an increase in the brightness of the direct emission and a slight decrease in the scattered light. Therefore, in a low column density case of N_Mg^+=3×10^14 cm^-2, the central region of the expanding medium with V_exp=200 km s^-1 shows higher doublet ratios than the static model. The outer region of this model also exhibits higher doublet ratios due to the velocity-induced reduction in optical depth. On the other hand, for high optical depth models, the outer region tends to show R<2. This is because K-line photons undergo more scattering and experience greater frequency shifts than H-line photons in the inner regions, resulting in less scattering of K-line photons in the outer regions. This effect gives rise to doublet ratios of R<2. In the presence of dust, dust scattering operates independently of the frequency shift and thus causes an increase in the doublet ratio.
Figure <ref> shows the radial profiles of the surface brightness (left panel) and the doublet ratio (right panel) of the continuum-pumped Mg2 emission line. Compared to the intrinsic Mg2 line case, the most significant differences are that the surface brightness profiles of expanding media do not change significantly from those of the static cases, and the doublet ratio always falls within the range of 0.75 to 1.75. The continuum source supplies photons capable of resonant scattering, even for the fastest expanding regions, regardless of fluid velocity. Conversely, intrinsic Mg2 line photons are seldom scattered in high-velocity regions because they rarely undergo resonance. Therefore, the continuum source in expanding media creates surface brightness profiles similar to those in static media. In contrast, the intrinsic Mg2 line source produces much steeper profiles in expanding media than in static media. In the presence of dust, the continuum photons in fast-moving regions can be scattered not only by dust but also by resonance, whereas the Mg2 line photons are primarily scattered by dust alone.
This difference results in a relatively small change in the surface brightness profile, independent of the medium's velocity, even when dust is included.The intrinsic Mg2 line source produces K- and H-line photons with a flux ratio of 2:1, while the continuum source can supply photons with the same fluxes at the K- and H-line frequencies. This condition makes the doublet ratio always less than two. In Figure <ref>, the doublet flux ratio for most models, except for models with the lowest column densities (N_Mg^+≲3×10^14 cm^-2), is found to be approximately 1. In relatively low column-density models, photons with H-line frequency are rarely scattered and thus produce a relatively weaker scattering halo than K-line photons, yielding higher doublet ratios. In contrast, when the column density is high enough, both K- and H-line photons are scattered sufficiently, producing doublet ratios ∼1 (i.e., the initial ratio from the continuum).§ DISCUSSION This section begins by discussing the observational implications of the present results, specifically regarding the doublet ratio and escape fraction. It then discusses the Mg2 emission mechanisms and the sites from which the Mg2 emission lines originate. Furthermore, it highlights the importance of distinguishing the RT effects in understanding Mg2 lines from those obtained using a simple foreground screen model. Additionally, this section covers other resonance lines with atomic-level structures resembling the Mg2 doublet. §.§ Doublet Flux Ratio and Escape Fraction In this paper, the Doppler parameter was assumed to be b=90 km s^-1 based on observations of compact star-forming galaxies. Regarding the line width, it is noteworthy that the double peaks in the Mg2 emission line profiles have not been clearly detected in most of the galaxies that exhibit the Mg2 emission. However, the non-detection of double peaks does not necessarily imply their absence. It was found, though not presented in this paper, that weak double peaks in models with N_Mg^+≲2×10^14 cm^-2 disappeared after convolution with a Gaussian function having a spectral resolution of R=8000 (equivalent to 37 km s^-1), as configured for the observation of J1503 by <cit.>. Therefore, the absence or weakness of double peaks in most observations of galaxies exhibiting the Mg2 emission implies that N_Mg^+≲2×10^14 cm^-2 (τ_0≲5) if the medium is close to being static. The line broadening due to resonance scatterings is also insignificant at such a relatively low optical depth. Thus, after considering the instrumental line spread function, the estimated line width from observational data would not significantly differ from the intrinsic width. The results, therefore, indicate that the input line width of Mg2 adopted in this paper is a reasonable choice.In the study of ten green pea galaxies by <cit.>, it was observed that the Mg2 lines are systematically redshifted by an average of 70 km s^-1, with line widths (FWHM) ranging from 100 to 300 km s^-1. These galaxies were also found to have a doublet flux ratio ranging from ∼0.3 to ∼2.7, with a median of ≈1.7. There was no evidence of Mg2 extending beyond the continuum. Similarly, <cit.> also found no spatially extended Mg2 emission beyond the continuum. However, in contrast to <cit.>, they found no strong line profile asymmetries. The spatially resolved map of the doublet flux ratio of J1503 by <cit.> shows a rather patchy pattern. 
Meanwhile, its Gaussian-smoothed image shows two blobs with a doublet ratio of R≈1.8-2, between which relatively lower ratios are found.
A doublet flux ratio as low as R≲1.5 and its spatial variation, as observed in J1503, cannot be simultaneously explained by spherical models. Doublet ratios of R≈1.8-2 may be reproduced when dust is included; however, ratios as low as R≲1.5 cannot be explained. In the spatially resolved radial profiles, such low doublet flux ratios can be obtained in the outer regions if the medium expands and has a high column density (Figure <ref>). However, such high column densities and velocity redshifts are inconsistent with the observed spectra. Another option is that if the continuum and emission line sources coexist, and thus the continuum-pumped Mg2 emission is combined with the `intrinsic' emission line, the doublet flux ratio for emission may become much lower than two. However, in static media with a low column density, the combined doublet ratio would not be much different from that obtained from the emission line model alone because the continuum-pumped emission feature is very weak, as demonstrated in the upper left panel of Figure <ref>.
The present study presented models expanding with V_exp=300 km s^-1 or slower. This velocity is close to the maximum velocity capable of producing a line separation corresponding to that between the Mg2 K and H lines. As shown in Figure <ref>, as the expansion velocity increases, the redshifted K line (or blueshifted H line in the case of a contracting medium) begins to overlap with the H line (or the K line), altering the doublet flux ratio of the continuum-pumped emission lines. This effect can reduce the doublet flux ratio to even lower than one, as demonstrated in Figure <ref> when a substantial amount of dust is present. However, such highly expanding models are inconsistent with the observational data of J1503 <cit.>, which show no signature of velocity shifts. Nevertheless, galaxies exhibiting asymmetric line profiles or absorption features, as seen in the samples of <cit.> and <cit.>, could, at least qualitatively, be explained by combinations of the models for the intrinsic emission lines and continuum.
Instead of the simple geometries considered in this paper, more complicated geometries may be necessary to explain the observational results. For example, cylindrical models can yield such low doublet ratios when a relatively flat medium is viewed edge-on (Figures <ref> and <ref>). The present study assumed a simple cylindrical shape. However, in reality, many different geometrical shapes and densities may coexist. If an elongated or flat patch of the medium is observed in a face-on-like direction, it will give doublet flux ratios of R≳2 along that line of sight. In this context, it should not be ruled out that the doublet flux ratios R≳2 found in <cit.> might genuinely reflect the phenomenon rather than arising from statistical fluctuations. They considered values above 2 to be statistically consistent with the intrinsic value of 2 at the 1σ significance level. Conversely, if such a patch happens to be oriented in a highly inclined (edge-on) direction, the line of sight would result in doublet ratios much lower than 2. Even lower values could arise when the resonantly-scattered continuum plays a role in the doublet ratios.
§.§ Foreground Screen vs. Radiative Transfer Effects
It is important to note that the escape fraction predicted in non-spherical, cylindrical media can exceed unity depending on the viewing angle.
This implies that the optical depth and escape fraction estimated using the foreground screen model adopted by <cit.> could provide a misleading impression when estimating the actual escape fraction of Mg2 in galaxies. In their model, a background source is assumed to impinge upon a foreground screen of Mg^+ gas, with no consideration of scattered components directed toward an observer. Given that resonantly scattered Mg2 emission from different lines of sight contributes to the observed Mg2 fluxes, the observationally estimated optical depth and escape fraction should be considered as effective values that incorporate the scattered flux. As demonstrated in the next section, Mg2 photons originating from a spatially extended Mg2 source will experience relatively weaker resonance effects than those from H2 regions. Thus, in this case, using the foreground screen assumption will also lead to a somewhat smaller amount of Mg^+ gas.The situation is similar to distinguishing between `attenuation' and `extinction' to understand the dust effects on spectral energy densities or spectra of galaxies. As explained in <cit.> and <cit.>, extinction refers to the disappearance of light from a line of sight when observing a point-like source. In contrast, attenuation refers to a situation where the spatially extended emission source and scattering material are well mixed, and scattered light partially compensates for extinction in a spatially extended system. The same distinction should be applied when analyzing the Mg2 emission lines. The optical depth estimated using the foreground screen model is not an actual value but an effective one. The effective optical depths estimated from the `attenuation' situations are always smaller than the real ones. Systematic studies using complex, coherent, and clumpy media, as demonstrated by <cit.> in their investigation of dust attenuation curves, may help disentangle the observed doublet ratios and escape fractions of Mg2 from geometrical and RT effects. Research on Mg2 RT, similar to the work of <cit.> for dust RT, is deferred to future studies. §.§ Mg2 Emission Mechanisms and Sites In theory, there are two intrinsic mechanisms that can create Mg2 K and H emission lines: (1) recombination of Mg^+2 and (2) collisional excitation of Mg^+ in the ground state, followed by radiative decay. Photoionization of Mg^+ atoms requires an energy of 15.035 eV or higher (λ≲824.64Å). Therefore, doubly ionized Mg^+2 atoms will only be present near the central star(s) in H2 regions. Consequently, the Mg2 recombination line is expected to be produced primarily in the central part of H2 regions, and the total luminosity of the Mg2 recombination line is likely negligible due to the relatively small volume occupied by Mg^+2 gas. The majority of Mg2 will be produced through collisional excitation of Mg^+ followed by radiative decay.Indeed, the photoionization code Cloudy, which is last described in <cit.>, predicts only the collisionally excited Mg2 emission line. <cit.> and <cit.> calculated photoionization models for the Mg2 K and H emission lines originating from H2 regions. Their findings revealed that a significant amount of Mg2 line is emitted from H2 regions. <cit.> and <cit.> utilized the Cloudy code to calculate Mg2 line emissions originating from H2 regions in galaxies in cosmological simulations. However, the Mg2 emission calculated using the Cloudy code originates predominantly from the transition region between the fully ionized H2 region and the neutral outer region. 
Mg2 emission is produced in the boundary region where the photoionizing radiation field with energies E≳ 15 eV (λ≲825Å) is fully attenuated, and the gas temperature is high enough to excite Mg^+ atoms collisionally. Similarly, [S2] λ6716 and [N2] λ6583 lines are also mainly emitted from the boundary <cit.>. Detailed studies of this property of Mg2 are beyond the scope of this paper and will be presented elsewhere. Additionally, it should be noted that Mg2 can also be created in the diffuse WNM with a temperature of ∼10^4 K, which has often been overlooked in the literature. The diffuse far-ultraviolet (FUV) continuum radiation field at λ∼1620Å, which is composed of direct starlight (and the radiation from AGN if present) and its dust-scattered component, can produce Mg^+ gas. The diffuse FUV radiation field at λ∼1620Å in the neutral ISM will singly ionize Mg atoms because the ionization energies of Mg^0 and Mg^+ are 7.646 eV and 15.035 eV (corresponding to ∼8.6×10^4 K and ∼1.7×10^5 K), respectively. As a result, unless the stellar FUV radiation is significantly attenuated by dust, Mg^+ is expected to be the predominant form of Mg in both the cold neutral medium (CNM) and the WNM. Collisions with electrons at temperatures of ≈10^4 K will excite the Mg^+ ions in the WNM, and subsequent radiative decay to the ground state will produce Mg2 emission. Therefore, it may be essential to consider the diffuse Mg2 emission, which is not directly associated with H2 regions. Mg2 emission from H2 regions would be confined to relatively compact volumes, whereas emission from the WNM will be distributed widely throughout galaxies. As a result, the observed Mg2 emission lines in galaxies (including the CGMs) would arise from a combination of three components: (1) Mg2 originating from the outer boundaries of H2 regions, where the fully ionized region meets the ambient CNM or WNM, (2) Mg2 originating from the diffuse WNM, which spreads widely throughout and around galaxies, and (3) Mg2 emission pumped by the resonance scattering of the continuum radiation. The relative importance of H2 regions versus the WNM will depend on their total luminosities. The luminosity of the Mg2 line is proportional to the product of the emissivity, the volume of the emission site, and the densities of Mg^+ and electrons. H2 regions occupy a relatively small volume but have high density, while the WNM occupies a relatively large volume with low density. Therefore, understanding the factors that determine the relative importance of these mechanisms and how they are interconnected becomes essential. <cit.> and <cit.> found no evidence of Mg2 extending beyond the stellar continuum in their observations of star-forming galaxies. These observations do not necessarily indicate that most Mg2 emission is directly associated with H2 regions. The diffuse Mg^+ gas is also likely to be dominantly produced by bright young stars, which emit most of the FUV continuum radiation at λ≲1620Å. Therefore, Mg2 from the WNM can have a similar spatial extent to the stellar continuum. If Mg2 originates from the Mg^+ gas surrounding fully ionized H2 regions, the Mg2 photons will experience a relatively large number of resonance scatterings both in the H2 regions and the WNM. The continuum UV radiation near the wavelengths of Mg2, which will be mostly emitted from bright young stars, will also undergo a similarly large number of resonance scatterings and thus produce Mg2 absorption, emission, or both, depending on the geometry and kinematics of the gas.
In contrast, in a configuration where Mg2 is produced in the diffuse WNM, the resonance scattering effect would be relatively weak compared to the case of compact sources with the same amount of Mg^+ gas. Figure <ref> compares the spectra predicted from the models where the Mg2 source and Mg^+ gas are well mixed with those obtained from the models with a central source. Both model types assume a spherical medium. The figure clearly demonstrates that the spatially extended source (dashed lines) yields weaker resonance-scattering signatures than the point source (solid lines). This difference is attributed to the geometrical effect that the optical depth measured from an outer radius is smaller than that measured from the center. The fact that galaxies with strong Mg2 emission tend to exhibit bluer UV spectral slopes compared to those showing absorption <cit.> suggests that the WNM may play a significant role in generating the Mg2 emission line in these galaxies. This inference is drawn from the correlation between bluer UV spectral slopes and increased UV radiation around λ∼1620Å. In this case, Mg2 lines in these galaxies are likely to experience relatively weaker resonance effects. Conversely, in galaxies with redder UV slopes, the contribution of the WNM may be relatively minor. In such cases, Mg2 emission from H2 regions and the stellar continuum near Mg2 wavelengths become more prominent. The Mg2 emission line and continuum photons will undergo somewhat stronger resonance effects. The resonantly scattered continuum can then give rise to absorption lines, potentially dominating the emission from H2 regions. §.§ Alkali-Metal-Like Resonance Lines Recently, resonance doublet lines from alkali-metal-like ions, whose ground states consist of a single `s' electron outside a closed shell, have gained interest in the literature and have been observed in high-redshift galaxies. This is because star-forming galaxies are believed to have been the primary source of the reionization of the IGM at high redshifts. It is difficult to measure the LyC directly, so researchers have attempted to find indirect tracers of LyC instead. For this purpose, it has been suggested that the C4 λλ1548, 1551 doublet could potentially be a valuable indicator of LyC photon escape at low redshift (z∼0.3-0.4) and high redshift (z>6). <cit.> also reported the observations of intense nebular He2 and double-peaked C4 emission from two galaxies at z∼5-7. These results suggest that in these galaxies, a significant fraction of high-energy LyC photons can escape through paths of highly ionized gas with low column densities. In the context of probing the LyC escape, <cit.> found a tight correlation between the Lyα escape fraction in local compact star-forming galaxies and the Mg2 escape fraction. They used one-dimensional photoionization models to find the correlation between the intrinsic Mg2 emission line flux and the oxygen lines [O3] 5007Å and [O2] 3727Å, often used to trace LyC leakage. <cit.> detected strong resonant Mg2 emission lines and non-resonant, fluorescent Fe2^* λλ2612, 2626 emission lines in the spectra of LyC leakers at z∼0.3-0.4. These results suggest that Mg2 emission lines may be a helpful indicator of escaping Lyα and LyC emission. Therefore, it is suggestive that both C4 and Mg2 lines can help in understanding the porosity condition of the ISM and CGM, through which LyC photons escape, in high-redshift galaxies.
However, it should be noted that C4 is unlikely to originate from ordinary H2 regions or the WNM, as the ionization potential of C^+3 is 47.89 eV, which is much higher than that of a He^0 atom. The hot gas with a temperature ≳10^5 K from which C4 originates would primarily be produced by supernova shocks and/or by the hard radiation from X-ray binaries (and AGNs if present). Other high-ionization doublet lines, such as O6 λλ1032, 1038 and N5 λλ1239, 1243, would also serve as valuable tracers of low-density, high-temperature gas phases. These high-ionization lines have been studied both theoretically and observationally in the Milky Way <cit.>. In particular, <cit.> have made a sky survey map of the C4 emission line and found that the hot gas in our Galaxy has a scale height of ∼ kpc. However, their analysis did not consider the resonance scattering of C4. Although our Galaxy might differ from compact star-forming galaxies at high redshift, a detailed understanding of the map would help us understand the nature of hot gas in high-redshift galaxies. In contrast to the high-ionization lines, Mg2 traces a relatively warm and neutral gas phase. The typical Doppler parameter b of Mg2 absorption systems is found to be ≈5 km s^-1, constraining the typical temperature of Mg^+ gas to be ≈30,000 K or less <cit.>. A clear understanding of the formation mechanisms and physical properties of these diffuse warm and hot gases on a galactic scale may be necessary. In particular, their volume-filling fractions are crucial factors determining the escape of LyC photons from galaxies. § SUMMARY This paper investigated the RT of Mg2 doublet lines in two simple geometries (sphere and cylinder), providing valuable insights for interpreting observational data. Future research is expected to develop models that adopt more complex and clumpy media. The main results of this paper are summarized as follows: * In spherical models without dust, the doublet flux ratio and escape fraction of Mg2 are always 2 and 1, respectively. * When studying resonance doublet emission lines, it has been generally assumed that the flux ratio of the doublet emerging from optically thick media is always lower than its optically thin value (e.g., F_2796/F_2803≤2). Therefore, doublet ratios lower than two have been considered evidence of resonance scattering and the existence of an optically thick medium near or surrounding the emitting gas. However, in cylindrical models, the doublet flux ratio can also be much higher than the intrinsic value 2 when flat media are viewed face-on. Additionally, the escape fraction can be larger than 1 when observed face-on. In contrast, when a geometrically and optically thin disk is viewed edge-on, the doublet ratio can be as low as ∼1.2. * When dust is included, the doublet flux ratio and escape fraction are reduced; however, the dust effects are noticeable only when the column density of Mg^+ is N_Mg^+≳10^15 cm^-2 (τ_0≳28), corresponding to a dust extinction optical depth of τ_dust≳0.06, in static media. * The EWs of the absorption and emission lines resulting from resonance scattering of the stellar continuum can be qualitatively interpreted using the curve-of-growth theory for pure absorption lines. The absorption and emission features somewhat match and compensate in low column-density media but are separated in high column-density and expanding media. In the presence of dust, it is found that the EW of continuum-pumped Mg2 emission is significantly reduced compared to that of absorption.
However, the reduction in the emission line is less than what would be expected if one ignores the dust scattering effect of the continuum.* It is important to note that, in cylindrical models, pure absorption and pure emission spectra due to the stellar continuum can emerge depending on the viewing angle. In an edge-on view, the spectra show pure absorption, while a face-on view gives rise to pure emission spectra.The following summarizes the observational implications of the results and the related topics discussed in this paper. * The doublet flux ratios of Mg2, as low as observed in star-forming galaxies showing Mg2 emission lines but no signatures of velocity shifts and double peaks, cannot be accounted for by spherically symmetric models, whether or not dust is included when considering only the RT of the intrinsic Mg2 emission line.* Instead of spherical models, they may be reasonably well explained when the galaxies are geometrically thin disks viewed highly inclined (β_ inc≳80^∘) or contain large and relatively flat Mg^+2 gas clouds situated edge-on. The continuum-pumped emission line will also be necessary to explain various spectral shapes and doublet flux ratios.* It is discussed that Mg2 emission originating from the diffuse WNM may be important when the UV spectral slope of a galaxy is relatively blue.* It is also pointed out that the optical depth derived using the foreground screen model should be regarded as effective rather than the actual value. This work was partially supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1A2C1005788) and by the Korea Astronomy and Space Science Institute grant funded by the Korea government (MSIT; No. 2023183000 and 2023186903). Jaskot & Ravindranath(2016) [Adams(1972)]Adams1972Adams, T. F. 1972, , 174, 439[Asplund et al.(2009)]Asplund2009Asplund, M., Grevesse, N., Sauval, A. J., Scott, P., 2009, , 47, 481[Baes et al.(2011)]Baes2011Baes, M., Verstappen, J., De Looze, I., et al. 2011, , 196, 22[Bahcall & Spitzer(1969)]Bahcall1969Bahcall, J., & Spitzer, L. Jr. 1969, , 156, 63[Berg et al.(2019)]Berg2019Berg, D. A., Chisholm, J., Erb, D. K., et al. 2019, , 873, L3[Burchett et al.(2021)]Burchett2021Burchett, J. N., Rubin, K. H. R., Prochaska, J. X., et al. 2021, , 909, 151[Calzetti et al.(1994)]Calzetti1994Calzetti, D., Kinney, A. L., & Storchi Bergmann, T. 1994, , 429, 582[Chang & Gronke(2003)]Chang2023Chang, S.-J., & Gronke, M., 2023, submitted[Chisholm et al.(2020)]Chisholm2020Chisholm, J., Prochaska, J. X., Schaerer, D., et al. 2020, , 498, 2554[Ding et al.(2005)]Ding2005Ding, J., Charlton, J. C., & Churchill, C. W. 2005, , 621, 615[Draine(2003)]Draine2003Draine, B. T. 2003, , 598, 1017[Draine(2011)]Draine_book2011Draine, B. T. 2021, Physics of the Interstellar and Intergalactic Medium. Princeton Univ. Press, Princeton[Dutta et al.(2023)]Dutta2023Dutta, R., Fossati, M., Fumagalli, M., et al. 2023, , 522,535[Erb et al.(2012)]Erb2012Erb, D. K., Quider, A. M., Henry, A. L., et al. 2012, , 759, 26[Feltre et al.(2018)]Feltre2018Feltre, A., Bacon, R., Tresse, L., et al. 2018, , 617, A62[Finley et al.(2017)]Finley2017Finley, H., Bouché, N., Contini, T., et al. 2017, , 608, A7[Gordon et al.(2001)]Gordon2001Gordon, K. D., Misselt, K. A., Witt, A. N., et al. 2001, , 551, 269[Guseva et al.(2013)]Guseva2013Guseva, N. G., Izotov, Y. I., Fricke, K. J., et al. 2013, , 555, A90[Guseva et al.(2020)]Guseva2020Guseva, N. G., Izotov, Y. I., Schaerer, D., et al. 
2020, , 497, 4293[Henry et al.(2018)]Henry2018Henry, A., Berg, D. A., Scarlata, C., et al. 2018, , 855, 96[Huang et al.(2021)]Huang2021Huang, Y.-H., Chen, H.-W., Shectman, S. A., et al. 2021, , 502, 4743[Izotov et al.(2016)]Izotov2016Izotov, Y. I., Schaerer, D. Thuan, T. X., et al. 2016, , 461, 3683[Jaskot & Ravindranath(2016)]Jaskot2016Jaskot, A. E., & Ravindranath, S. 2016, , 833, 136[Jenkins(2009)]Jenkins2009Jenkins, E. B., 2009, , 700, 1299[Jo et al.(2019)]Jo2019Jo, Y.-S., Seon, K.-I., Min, K.-W., et al., 2019, , 243, 9[Johnson(2019)]Johnson2019Johnson, J. A., 2009, Science, 363, 474[Katz et al.(2022)]Katz2022Katz, H., Garel, T., Rosdahl, J., et al. 2022, , 515, 4265[Leclercq et al.(2022)]Leclercq2022Leclercq, F., Verhamme, A., Epinat, B., et al. 2022, , 663, A11[Lee et al.(2021)]LeeJC2021Lee, J. C., Hwang, H. S., & Song, H. 2021, , 503, 4309[Chatzikos et al.(2023)]Chatzikos2023Chatzikos, M., Bianchi, S., Camilloni, F., et al. arXiv:2308.06396, accepted for publication in RMxAA[Churchill et al.(2020)]Churchill2020Churchill, C. W., Evans, J. L., Stemock, B., et al. 2020, , 904, 28[Martin et al.(2013)]Martin2013Martin, C. L., Shapley, A. E., Coil, A. L., et al. 2013, , 770, 41[Michel-Dansac et al.(2020)]Michel-Dansac2020Michel-Dansac, L., Blaizot, J., Garel, T., et al. 2020, , 635, A154[Nelson et al.(2021)]Nelson2021Nelson, D., Byrohl, C., Peroux, C., et al. 2021, , 507, 4445[Prochaska et al.(2011)]Prochaska2011Prochaska, J. X., Kasen, D., & Rubin, K. 2011, , 734, 24[Ramambason et al.(2020)]Ramambason2020Ramambason, L., Schaerer, D., Stasińska, G., et al. 2020, , 644, A21[Rigby et al.(2002)]Rigby2002Rigby, J. R., Charlton, J. C., & Churchill, C. W. 2022, , 565, 743[Rubin et al.(2011)]Rubin2011Rubin, K. H. R., Prochaska, J. X., Mènard, B., et al. 2011, , 728, 55[Rupke et al.(2019)]Rupke2019Rupke, D. S. N., Coil, A., Geach, J. E., et al. 2019, , 574, 643[Saxena et al.(2022)]Saxena2022Saxena, A., Cryer, E., Ellis, S., et al. 2022, , arXiv:2206.06161[Scarlata & Panagia(2015)]Scarlata2015Scarlata, C., & Panagia, N. 2015, , 801, 43[Schaerer et al.(2022)]Schaerer2022Schaerer, D., Izotov, Y. I., Worseck, G., et al. 2022, , 658, L11[Schroetter et al.(2015)]Schroetter2015Schroetter, I., Bouché, N., Péroux, C., et al. 2015, , 804, 83[Seon et al.(2011)]Seon2011Seon, K.-I., Edelstein, J., Korpela, E., et al. 2011, , 196, 15[Seon & Witt(2012)]Seon2012Seon, K.-I., & Witt, A. N. 2012, , 758, 109[Seon et al.(2014)]Seon2014Seon, K.-I., Witt, A. N., Shinn, J.-H., et al. 2014, , 785, L18[Seon & Draine(2016)]Seon2016Seon, K.-I., & Draine, B. T. 2016, , 833, 201[Seon & Kim(2020)]Seon2020Seon, K.-I., & Kim, C.-G. 2020, , 250, 9[Seon et al.(2022)]Seon2022Seon, K.-I., Song, H., & Chang, S.-J. 2022, , 259, 3[Seive et al.(2022)]Seive2022Seive, T., Chisholm, J., Leclercq, F., et al. , 515, 5556[Shaban et al.(2022)]Shaban2022Shaban, A., Bordoloi, R., Chisholm, J., et al. 2022, (accepted), arXiv:2109.13264[Shapley et al.(2003)]Shapley2003Shapley, A. E., Steidel, C. C., Pettini, M., et al. 2003, , 588, 65[Shelton & Kwak(2018)]Shelton2018Shelton, R. L., & Kwak, K. 2018, , 866, 34[Steinacker et al.(2013)]Steinacker2013Steinacker, J., Baes, M., & Gordon, K. D., 2013, , 51, 63[Yan et al.(2022)]Yan2022Yan, Dongdong, Seon, K.-I., Guo, J., et al. 2022, , 936, 177[Weiner et al.(2009)]Weiner2009Weiner, B. J., Coil, A. L., Prochaska, J. X., et al. 2009, , 692, 187[Xu et al.(2022)]Xu2022Xu, Xinfeng, Henry, A., Heckman, T., et al. 2022, , 933,202[Xu et al.(2023)]Xu2023Xu, Xinfeng, Henry, A., Heckman, T., et al. 
2023, , 943, 94[Weingartner & Draine(2001)]Weingartner2001Weingartner, J. C., & Draine, B. T. 2001, , 548, 296[Witt(1977)]Witt1977Witt, A. N. 1977, , 35, 1[Zabl et al.(2021)]Zabl2021Zabl, J., Bouche, N. F., Wisotzki, L., et al. 2021, , 507, 4294 | http://arxiv.org/abs/2310.17908v1 | {
"authors": [
"Kwang-il Seon"
],
"categories": [
"astro-ph.GA"
],
"primary_category": "astro-ph.GA",
"published": "20231027055704",
"title": "On the Doublet Flux Ratio of Mg II Resonance Lines in and Around Galaxies"
} |
Stability and Accuracy analysis of the θ Method and 3-Point Time filterThe research was partially supported by NSF grant DMS-2110379.Nicholas HurlDepartment of Mathematics, Duquesne University, Pittsbugh, PA-15282 ([email protected]). Farjana SiddiquaDepartment of Mathematics, University of Pittsburgh, Pittsburgh, PA-15260([email protected] ). Shuxian XuDepartment of Mathematics, University of Pittsburgh ([email protected]).January 14, 2024 ===========================================================================================================================================================================================================================================================================================================In this paper, we investigate the kinetic stability of classical, collisional plasma – that is, plasma in which the mean-free-path λ of constituent particles is short compared to the length scale L over which fields and bulk motions in the plasma vary macroscopically, and the collision time is short compared to theevolution time. Fluid equations are typically used to describe such plasmas, since their distribution functions are close to being Maxwellian. The small deviations from the Maxwellian distribution are calculated via the Chapman-Enskog (CE) expansion in λ/L ≪ 1, and determine macroscopic momentum and heat fluxes in the plasma. Such a calculation is only valid if the underlying CE distribution function is stable at collisionless length scales and/or time scales. We find that at sufficiently high plasma β, the CE distribution function can be subject to numerous microinstabilities across a wide range of scales. For a particular form of the CE distribution function arising in strongly magnetised plasma (viz., plasma in which the Larmor periods of particles are much smaller than collision times), we provide a detailed analytic characterisation of all significant microinstabilities, including peak growth rates and their associated wavenumbers. Of specific note is the discovery of several new microinstabilities, including one at sub-electron-Larmor scales (the `whisper instability') whose growth rate in certain parameter regimes is large compared to other instabilities. Our approach enables us to construct the kinetic stability maps of classical, two-species collisional plasma in terms of λ, the electron inertial scale d_e and the plasma β. This work is of general consequence in emphasising the fact that high-β collisional plasmas can be kinetically unstable; for strongly magnetised CE plasmas, the condition for instability is β≳ L/λ. In this situation, the determination of transport coefficients via the standard CE approach is not valid.§ INTRODUCTIONAnswering the question of when a plasma can be described adequately by fluid equations is fundamental for a comprehensive understanding of plasma dynamics. 
It is well known that some physical effects in plasmas – for example, Landau damping –specifically require a fully kinetic description in terms of distribution functions of the plasma's constituent particles <cit.>.However, for many other plasma processes, a detailed description of the underlyingparticle distribution provides little additional understanding of the essential physics governing that process.Characterising such processes with fluid equations, which describe the evolution of macroscopic physical quantitiessuch as density, fluid velocity and temperature,often simplifies the description and therefore aids understanding.Fluid equations are also easier to solve numerically than kinetic equations: the latter reside in six-dimensional phase space (and time), with threeadditional dimensions – the velocity space – when compared to the former.The underlying difficulty associated with determining when a plasma is a fluidis finding a closed set of equations in the macroscopic plasma variables. The derivation of fluid equations from the Maxwell-Vlasov-Landau equationsgoverning the evolution of the plasma's distribution functions is carried out by taking moments(that is, integrating the governing equations and their outer products with velocity v over velocity space). However, the resulting equationsare not closed: the evolution equation of the zeroth-order moment (density) requires knowledge of the evolution of the first-order moment, the evolution equation for the first-order momentneeds the second-order moment, and so on. For plasma fluid equations to beable to describe the evolution of a plasma without reference to that plasma'sunderlying distribution functions, a closure hypothesis or an approximation relating higher-ordermoments to lower ones is required. For a collisional plasma – i.e., one in which the mean free paths λ_sand collision times τ_s of the ions and electrons (s = i, e) are much smaller than the typical length scale L and time scale τ_L on which macroscopic properties of the plasmachange – there is a procedure for achieving such a closure: the Chapman-Enskog (CE) expansion <cit.>. It is assumed that in a collisional plasma,the small perturbations of the distribution functions away from a Maxwellian equilibrium have typical size ϵ∼λ_s/L ∼τ_s/τ_L ≪ 1 (assuming sonic motions, and λ_i ∼λ_e).Since the perturbation is small, its form can be determinedexplicitly by performing an asymptotic expansion of the Maxwell-Vlasov-Landau equations. Once the underlying distribution is known, the relevant moments can becalculated – in particular, the momentum and heat fluxes are the second- and third-order moments of the O(ϵ) non-Maxwellian component of the distributionfunction. The CE expansion applied to a two-species magnetised plasma was worked out by <cit.>.Subsequent studies have refined and extended various aspects of his calculation <cit.>.In this paper, we will refer to the distribution functions associated with the CEexpansion as CE distribution functions, and plasmaswith particle distribution functions given by CE distribution functions as CEplasmas. However, the theory constructed as outlined above is incomplete. For the CEexpansion to provide an adequate fluid closure, the resulting distribution functionsmust be stable to all kinetic instabilities withlength scales shorter than the longest mean free path, and timescales shorter than themacroscopic plasma timescale τ_L. Such instabilities (if present) are known asmicroinstabilities. 
We emphasise that these microinstabilities should be distinguishedconceptually from instabilities describable by the closed set of plasma-fluid equations:for example, Rayleigh-Taylor <cit.>, magnetorotational <cit.>, magnetoviscous <cit.>, or magnetothermal/heat-flux-driven buoyancy instabilities <cit.>. Kinetic microinstabilities should also be distinguishedfrom the small-scale instabilities that arise in solving higher-order (O(ϵ^2)) fluid equations obtained from the CE asymptotic expansion <cit.>. Such instabilities are not physical because they arise at scaleswhere the equations themselves do not apply <cit.>. Fluid instabilities do not call into question thevalidity of the fluid equations themselves; in contrast, ifmicroinstabilities occur, the plasma-fluid equations obtained through the closurehypothesis are physically invalid, irrespective of their own stability. Microinstabilities have been studied in depth for awide range of classical plasmas by many authors; see, for example, <cit.>, <cit.>, and <cit.> for three different general perspectives on microinstability theory.Although it can be shown that a Maxwellian distribution is always immune to such instabilities <cit.>,anisotropic distribution functions are often not <cit.>. A notable example is the Weibel instability,which occurs in counter-streaming unmagnetised plasmas <cit.>. The linear theory of such instabilities is generally well known <cit.>.Microinstabilities in magnetised plasma have also been comprehensively studied. The ion firehose and mirror instabilitiesare known to occur in plasmas with sufficient ion-pressure anisotropy and large enough plasma β <cit.>,while electron-pressure anisotropy can also result in microinstabilities of various types <cit.>. A number of authors have noted that microinstabilities, if present, will have a significant effect on the macroscopic transport properties of plasmas <cit.>.Typically (although not always), once the small-scale magnetic and electric fieldsassociated with microinstabilities have grown, they will start to scatter particles,which in turn will alter the plasma's distribution functions. This has micro- and macroscopic consequences for plasma behaviour. From the microscopic perspective, it changes the course of the evolution of the microinstabilitiesthemselves – by, e.g., reducing the anisotropy of the underlying particle distribution functions <cit.>. From the macroscopic perspective, the changes to the distribution functions will alter both heat andmomentum fluxes in the plasma (which, as previously mentioned, are determined by non-Maxwellian terms in the distributionfunction). In this picture, a plasma subject to microinstabilities in some sense generates its owneffective anomalous collisionality <cit.>. The typical values of the altered fluxes attained must depend on the saturated state of microinstabilities <cit.>.Exploring the mechanisms leading to saturation of both unmagnetised, Weibel-type instabilities <cit.>and magnetised instabilities <cit.>continues to be an active research area. Simulation results <cit.> support the claim that the saturation amplitude ofsuch microinstabilities is typically such that the plasma maintains itself close tomarginality of the relevant instability. Do these kinetic instabilities afflict the CE distribution function? Naively, it mightbe assumed not, since it is `almost' Maxwellian. 
However, it turns out that, provided the plasma β is sufficiently high, smalldistortions from a Maxwellian can be sufficient to lead to instability.Instabilities of a CE distribution function in an unmagnetised plasma were first exploredby <cit.>, who considered a collisional electron plasma(mean free path λ_e) with macroscopic variations in density, temperature and velocity (scale ∼L). He showed thatthe CE distribution function in such a plasma would have two non-Maxwellian terms of order λ_e/L – an antisymmetric term associated with heat flux, and another term associated with velocity shear – andthat the latter term would result in the so-called transverse instability. <cit.>also claimed that this instability would lead to a significant change in theplasma viscosity, and other transport coefficients. <cit.> furtherdeveloped the theory of the transverse instability, including a quasi-lineartheory resulting in isotropisation of the underlying electron distributionfunction. The stability of the CE distribution function was later considered by <cit.>. They found that in an initially unmagnetised two-species plasma supportinga fluid-scale electron-temperature gradient (scale L_T, no flow shear), the second-order terms (in λ/L_T) in the electron distribution functioncould result in the formation of unstable waves, with typical real frequencies ϖ∝λ_e/L_T, and growth rates γ_ RL∝(λ_e/L_T)^2. Similarly to <cit.>, they argued that the presence of such instabilities wouldsuppress the macroscopic heat flux in the plasma (which in a collisional plasma is carried predominantly byelectrons). This particular instability has also been proposed as an explanation for the origin of the cosmic magnetic field <cit.>.Subsequent authors have explored further the idea that non-Maxwellian components of the electron distribution function required to support amacroscopic heat flux can lead to kinetic instability.<cit.> considered theeffect of introducing a uniform, macroscopic magnetic field into the same problem, and foundthat a faster instability feeding off first-order heat-flux terms in the CE distribution function – the whistler instability – arose at the electron Larmor scale, with γ_ whistler,T∝λ_e/L_T. Aquasi-linear theory of this instability wassubsequently constructed by <cit.>. Both <cit.> and <cit.> proposed that the instability at saturation would result in a suppressed heat flux <cit.>.More recently, the whistler instability has been studied in simulations of high-β plasma–with two groups independently finding both the onset of instability at electronscales, and evidence of a suppression of heat flux <cit.>.<cit.> constructed a theoretical model for whistler-regulated heat transport based on a set ofreasonable assumptions that were motivated by these prior simulations. The possibility of microinstabilities associated with the ion CE distributionfunction was also considered by <cit.>, who found that weakly collisional, magnetised plasma undergoing subsonic, turbulent shearing motionscan be linearly unstable to firehose and mirror instabilities atsufficiently high β_i (where β_i is the ion plasma beta). This is because the shearing motions give riseto an ion pressure anisotropy Δ_i ∼λ_i^2/L_V^2, where L_V is the length scale associated with the shearingmotions. 
For |Δ_i| ≳β_i^-1, the mirror and firehose instability thresholds canbe crossed (the mirror instability is trigged by sufficiently positive pressure anisotropy, the firehose instability by negative pressure anisotropy).Beyond its threshold, the maximum firehose instability growth rate γ_ fire was found tosatisfy γ_ fire∝ |Δ_i+2/β_i|^1/2, whilst for the mirror instability, the maximumgrowth rate was γ_ mirr∝Δ_i-1/β_i. Such destabilisation of shearing motions was confirmed numerically by <cit.>, followed by many others <cit.>.In this paper, we examine the criteria for the CE distribution function to be stable tomicroinstabilities at collisionless scales – i.e., at k λ_s ≫ 1 (where k is the microinstability wavenumber), and γτ_L ≫ 1.In a two-species plasma with a fixed mass ratio μ_e ≡ m_e/m_i and a charge Z that is not very large, these criteria turn out to be relationships betweenthree dimensionless parameters: λ/L, d_e/L, and β, where λ≡λ_e =λ_i is the mean free path for both ions and electrons, and d_e is the electron inertial scale.The first criterion (which we refer to as the β-stabilisation condition)is that the ratio λ/L be much smaller than the reciprocal of the plasmaβ, viz. λβ/L ≪ 1. This condition arises because themicroinstabilities discussed in this paper are stabilised (usually by Lorentz forces) at sufficiently lowβ. The second criterion (the collisional-stabilisation condition) is thatthe characteristic wavenumber k_peak of the fastest-growingmicroinstability in the absence of collisional effects be comparable to (or smaller than) thereciprocal of the mean-free-path: k_peakλ≲ 1.Unlike the β-stabilisation condition, we do not justify thiscondition rigorously, because our calculations are only valid for wavenumbers k such that k λ≫1; thus, we cannot say anything definitive about the k λ≲ 1regime. We do, however, show that another, more restrictivestabilisation condition that one might naively expect to exist on account of collisions – thatmicroinstabilities cannot occur if their growth rate γ is smaller thanthe collision frequency (viz., γτ_s ≲ 1) – does not, in fact, apply to the most significant microinstabilities in CE plasma.There are good physical reasons to believe that the CE distributionfunction is stable against collisionless microinstabilities if the collisional-stabilisationcondition k_peakλ≲ 1 is satisfied: not least that the typical growth time of the fastest microinstability in CE plasma (calculated neglecting collisional damping of microinstabilities) becomes comparable to the macroscopic evolution time scaleτ_L.We thus assume the validity ofthe collisional-stabilisation condition throughout this paper.How k_peak relates to the otherphysical parameters is in general somewhat complicated; however, typicallythe collisional-stabilisation condition can be written as a lower bound on the ratio d_e/L. For example,in the limit of very high β, it is d_e/L > (m_e/m_i)^-1/6(λ/L)^2/3 (see section <ref>). If both the β-stabilisation and collisional-stabilisation conditions are violated, we demonstrate that CE plasma will be subject to at least one microinstability, and quite possibly multiple microinstabilities across a wide range of scales. Some of these microinstabilities are thresholdless – that is, without including collisionaleffects, they will occur for CE distributions departing from a Maxwellian distribution by an asymptotically small amount. 
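To make the two stabilisation criteria concrete, the sketch below evaluates them for a few illustrative parameter combinations; the plasma parameters are placeholders chosen for demonstration, we use the high-β form of the collisional-stabilisation condition quoted above, and we treat satisfaction of either criterion as sufficient for stability in the sense discussed in the text.

```python
MU_E = 1.0 / 1836.0   # electron-to-ion mass ratio m_e/m_i (hydrogen)

def beta_stabilised(lam_over_L, beta):
    # beta-stabilisation condition: lambda * beta / L << 1
    return lam_over_L * beta < 1.0

def collisionally_stabilised(lam_over_L, de_over_L):
    # high-beta collisional-stabilisation condition quoted in the text:
    # d_e/L > (m_e/m_i)^(-1/6) * (lambda/L)^(2/3)
    return de_over_L > MU_E ** (-1.0 / 6.0) * lam_over_L ** (2.0 / 3.0)

# Placeholder examples of (lambda/L, d_e/L, beta):
for lam, de, beta in [(1e-6, 1e-11, 10.0), (1e-4, 1e-9, 1e4), (1e-2, 1e-6, 1e2)]:
    ok = beta_stabilised(lam, beta) or collisionally_stabilised(lam, de)
    print(f"lambda/L={lam:.0e}, d_e/L={de:.0e}, beta={beta:.0e}: "
          + ("stable" if ok else "potentially unstable"))
```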
Note thatall significant microinstabilities associated with the CE distribution function are `low frequency':their growth rate γ satisfies γ≪ k v_ths, where k isthe typical wavenumber of the instability, and v_ths the thermal velocity ofthe particles of species s. This property enables a small anisotropy of thedistribution function to create forces capable ofdriving microinstabilities (see section <ref>). In this paper, we characterise all significant microinstabilities thatarise at different values of λ/L, β, and d_e/L for a particularform of the CE distribution function appropriate for a strongly magnetised plasma – that is, a plasma where the Larmor radii of ions and electrons are much smaller than thecorresponding mean free paths of these particles. We treat this particular case because of its importance to astrophysical systems, which almost always possess macroscopic magnetic fields of sufficientstrength to magnetise their constituent particles <cit.>.Our characterisation of microinstabilitiesfocuses on providing the maximum microinstability growth rates, as well as the wavenumbersat which this growth occurs. We find that there exist two general classes ofmicroinstabilities: those driven by the non-Maxwellian component of the CEdistribution associated with temperature gradients, and those driven by thenon-Maxwellian component associated with bulk velocity gradients (`shear'). We refer tothese two non-Maxwellian terms (which exist for both the ion and electron CE distribution functions)as the CE temperature-gradient terms and the CE shear termsrespectively. Microinstabilities driven by the CE temperature-gradient terms arecalled the CE temperature-gradient-driven (CET) microinstabilities, while those driven bythe CE shear terms are the CE shear-driven (CES) microinstabilities. As expected, within this general microinstability classification scheme,we recover a number of previously identified microinstabilities,including the (electron-shear-driven) transverse instability (which we discuss in sections <ref> and <ref>), the whistler instability (section <ref>), the electron mirror instability (section <ref>), the electron firehose instability (sections <ref> and <ref>), theordinary-mode instability (section <ref>), the (electron-temperature-gradient-driven) whistler heat-flux instability (sections <ref> and <ref>), and the (ion-shear-driven) mirror (section <ref>) and firehose (sections<ref>, <ref>, <ref>, <ref>, and <ref>) instabilities.We also find four microinstabilities that, to our knowledge, have not been previously discovered: two ion-temperature-gradient-driven ones at ion Larmor scales – theslow-hydromagnetic-wave instability (section <ref>) and the long-wavelength kinetic-Alfvén wave instability (section <ref>) – and two electron-shear-driven ones – the electron-scale-transition (EST) instability (section <ref>) and the whisper instability (section <ref>) – at electron-Larmor and sub-electron-Larmor scales, respectively.Of these microinstabilities, the whisper instability seems to be of particularsignificance: it has an extremely large growth rate in certainparameter regimes, and is associated with a new high-β wave in a Maxwellian plasma, which also appears to have previously escaped attention. For convenience, a complete index of microinstabilities discussed in this paper is given in table<ref>, while the peak growth rates of these microinstabilitiesand the scales at which they occur (for a hydrogen CE plasma) are given in table <ref>. 
There do exist microinstabilities in CE plasma that are not represented in tables <ref> and <ref>; however, we claim that the instabilities discussed in this paper are the most significant, on account of their large growth rates and/or lowβ-stabilisation thresholds compared to the unrepresented ones. Having systematically identified all significant microinstabilities, we can construct `stability maps' of strongly magnetised CE plasma using “phase diagrams” over a two-dimensional (λ/L, d_e/L)parameter space at a fixed β. An example of such a map (for a hydrogen plasma with equal ion and electron temperatures) is shownin figure <ref>. The entire region of the (λ/L, d_e/L) space depicted in figure <ref> could naively becharacterised as pertaining to classical, collisional plasma, and thusdescribable by fluid equations, with transport coefficients given by standardCE theory. However, there is a significant region of theparameter space (which is demarcated by boundaries corresponding to the β-stabilisation and collisional-stabilisation conditions) that is unstable to microinstabilities.In fact, in strongly magnetised plasma, the collisional-stabilisation condition is never satisfied, because there existmicroinstabilities whose characteristic length scales are the ion andelectron Larmor radii, respectively; this being the case, only the β-stabilisation condition guarantees kinetic stability. The effect of microinstabilities being present in CE plasma would be to change the non-Maxwellian components of thedistribution function, and therefore to alter the CE-prescribed resistivity, thermal conductivityand/or viscosity. Identifying the dominant microinstability or microinstabilities in such plasmas (as is done in figure <ref> for a hydrogen plasma) isthen necessary for calculating the true transport coefficients, which are likely determined by the effective collisionality associated with the saturated state of the dominant microinstability rather thanby Coulomb collisions. Although such calculations are not undertaken in this paper, it seemspossible that the modified transport coefficients could be determinedself-consistently in terms of macroscopic plasma properties such as temperature gradients or velocity shears. We note that the calculation presented here assumes that the CE distribution function is determined withoutthe microinstabilities and thus is only correct when the plasma is stable. Therefore, strictly speaking, the only conclusion one can make when the CE plasma is unstable is that the naive CE values of transport coefficients should not be taken ascorrect.We emphasise that kinetic instability of CE plasmas is a phenomenon of practical importance as well asacademic interest. We illustrate this in tables <ref> and<ref>, where the possibility of microinstabilities is considered for a selection of physical systems composed of classical,collisional plasma. We find that, while there exist some systems where CE plasmas are immune to microinstabilities – for example, the photosphere and chromosphere – thereare many other astrophysical plasma systems that are likely susceptible to them. Similar considerations apply to arange of laser plasmas, including plasmas generated in inertial-confinement-fusionand laboratory-astrophysics experiments. 
Indeed, a recent experiment carried out on the National Ignition Facility (NIF) – part of a wider programme of work exploring magnetic-field amplification in turbulentlaser-plasmas <cit.> – found evidence for the existence of large-amplitude local temperature fluctuations over a range of scales, a finding that was inconsistent with Spitzer thermal conduction <cit.>. This claim was corroborated by MHD simulations (with the code FLASH) of the experiment that modelled thermal conduction either using the Spitzer model, or no explicit thermal conduction model: the latter simulations were found to be much closer to the actual experimental data. Because the plasma created in the NIF experiment is also anticipated by our theory to be susceptible to CE microinstabilities, observations of a discrepancy with CE-derived transport coefficients are tantalising. We note that the idea of microinstabilities emerging in both collisionalastrophysical plasmas and laser plasmas is not a new one: see, e.g. <cit.> or <cit.> in the former context; in the latter, <cit.> or <cit.>.However, to our knowledge there does not exista systematic treatment of the general kinetic stability ofCE plasmas. This is the gap that this paper attempts to fill. This paper has the following structure. In section <ref>, we discuss kinetic and fluid descriptions ofclassical plasma. We then describe the CE expansion in collisional plasma: we work out the CE distribution function arising in a two-species strongly magnetised plasma,evaluate the friction forces, heat and momentum fluxes necessary toconstruct a closed set of plasma-fluid equations, and systematically estimate the size of the non-Maxwellian components of thisdistribution. Next, we discuss qualitativelythe existence and nature of microinstabilities potentially arising in CE plasma, beforepresenting the methodology that we later use to perform the full linear, kinetic stabilitycalculation. We provide an overview of this methodology in section <ref>,and then a much more detailed exposition of it in section <ref>: in particular,we describe in the latter how a simple form of the dispersion relation for the fastest microinstabilities can be obtained by considering the low-frequency limit γ≪ k v_ths of the hot-plasma dispersion relation,and how this simplified dispersion relation can be solved analytically.Readers who are uninterested in the technical details of this calculation areencouraged to pass over section <ref>; knowledge of itscontents is not a pre-requisite for subsequent sections.In sections <ref> and <ref>, we construct stability maps (analogous to figure <ref>) showing the parameter ranges in which theCE distribution function is stable, to CET and CES microinstabilities, respectively. The parameters are β and λ/L, and we construct separatestability maps for CET and CES microinstabilities in order to take into account the fact that L is in general not the same in the situations where these two types of microinstabilities occur.In section <ref>, we also discuss the significant CET microinstabilities that can occur (or not) at different values λ/L andβ, and provide simple analytic characterisations of them; in section <ref>, we do the same for significant CESmicroinstabilities. Finally, in section <ref>, we discuss the general implications of theseinstabilities for classical, collisional plasmas, and consider future research directions.Throughout this paper, most lengthy calculations areexiled to appendices; a glossary of mathematical notation is given in appendix<ref>. 
§ PROBLEM SETUP §.§ Kinetic versus fluid description of classical plasma The evolution of classical plasma is most generally described by kinetic theory, via the solution of Maxwell-Vlasov-Landau equations for the distribution functions of constituent particles. More specifically, in a kinetic description of a plasma, the distribution function f_s(r,v,t) of the particles of species s satisfies
∂ f_s/∂ t + v·∇ f_s + (Z_s e/m_s)(E + v×B/c)·∂ f_s/∂v = ∑_s'ℭ(f_s,f_s') ,
where t is time, r spatial position, v the velocity, e the elementary charge, Z_s e the charge and m_s the mass of species s, E the electric field, B the magnetic field, c the speed of light, and ℭ(f_s,f_s') the collision operator for interactions between species s and s'. Equation (<ref>) is coupled to Maxwell's equations:
∇·E = 4π∑_s Z_s e ∫d^3 v f_s , ∇·B = 0 , ∇×E = -(1/c) ∂B/∂ t , ∇×B = (1/c) ∂E/∂ t + (4π/c) ∑_s Z_s e ∫d^3 v v f_s .
Together, (<ref>) and (<ref>) form a closed set of governing equations. The density n_s, bulk fluid velocity V_s and temperature T_s of species s can be formally defined in terms of moments of the distribution function:
n_s ≡∫d^3 v f_s , V_s ≡ (1/n_s) ∫d^3 v v f_s , T_s ≡ (1/n_s) ∫d^3 v (1/3) m_s |v-V_s|^2 f_s .
Governing “fluid” equations are then derived by integrating (<ref>), or outer products of (<ref>) and the velocity variable v, with respect to v:
D n_s/D t|_s + n_s ∇·V_s = 0 ,
m_s n_s D V_s/D t|_s = -∇ p_s - ∇·π_s + Z_s e n_s (E + V_s×B/c) + R_s ,
(3/2) n_s D T_s/D t|_s + p_s ∇·V_s = -∇·q_s - π_s:∇V_s + 𝒬_s ,
where D/D t|_s ≡∂/∂ t + V_s·∇ is the convective derivative with respect to the fluid motions of species s, p_s the pressure, π_s the viscosity tensor, and q_s the heat flux of species s, R_s the friction force on this species due to collisional interactions with other species, and 𝒬_s the heating rate due to inter-species collisions. The latter quantities are formally defined in terms of the distribution function as follows:
p_s ≡∫d^3 v (1/3) m_s |v-V_s|^2 f_s = n_s T_s ,
π_s ≡ -p_s I + ∫d^3 v m_s (v-V_s)(v-V_s) f_s ,
q_s ≡∫d^3 v (1/2) m_s |v-V_s|^2 (v-V_s) f_s ,
R_s ≡∑_s'∫d^3 v m_s v ℭ(f_s,f_s') ,
𝒬_s ≡ -R_s·V_s + ∑_s'∫d^3 v (1/2) m_s |v|^2 ℭ(f_s,f_s') .
The distribution function only appears in Maxwell's equations via its zeroth and first moments; namely, Gauss' law (<ref>a) and the Maxwell-Ampère law (<ref>d) can be written as
∇·E = 4π∑_s Z_s e n_s , ∇×B = (1/c) ∂E/∂ t + (4π/c) ∑_s Z_s e n_s V_s .
Unlike the kinetic description, the fluid equations (<ref>) combined with Maxwell's equations (<ref>b), (<ref>d), (<ref>a), and (<ref>b) are not a closed system: knowledge of the distribution function, not just of n_s, V_s or T_s, is required to calculate momentum and heat fluxes, as well as the friction force or heating. As discussed in the Introduction, solving fluid equations as opposed to kinetic equations is advantageous in many cases of interest. Since the dimensionality of the kinetic system is greater (a six-dimensional phase space vs. three-dimensional position space), solving the kinetic system introduces both significant numerical and conceptual complexity. However, the system of fluid equations (<ref>) is only usable if some type of closure can be introduced to calculate π_s, q_s, R_s and 𝒬_s in terms of n_s, V_s and T_s. For classical plasmas, such a
For classical plasmas, such aclosure is generally not possible, except in the case of strongly collisional plasmas.§.§ The Chapman-Enskog (CE) expansion §.§.§ The CE distribution functionsFor a classical, collisional plasma – i.e., a plasma where the mean free path λ_s of particles of species ssatisfies λ_s/L ≪ 1 for all s, L being the length scale over which the macroscopic properties of the plasma vary – a formal procedure exists for deriving a closed system of fluid equations from akinetic description of the plasma.This procedure is the Chapman-Enskog (CE) expansion, which gives distributionfunctions that are close to, but not exactly, Maxwellian.We call them Chapman-Enskog (CE) distribution functions.The non-Maxwellian components of the CE distribution functions of particle species s are proportional to λ_s/L, andmust be present in order to support gradients of n_s, V_s andT_s on O(L) length scales, because (<ref>b-e) are all zero for a Maxwellian plasma.We consider a collisional electron-ion plasma (in which, by definition, μ_e ≡ m_e/m_i ≪ 1) with the property thatall constituent particle species are strongly magnetised by the macroscopically varying magnetic field B:that is, the Larmor radius ρ_s ≡ m_s v_ths c/|Z_s| e |B| satisfies ρ_s ≪λ_s both for the ions and for the electrons (here v_ths≡√(2 T_s/m_s)is the thermal speed of species s). Equivalently, a strongly magnetised plasma is one in which the Larmor frequency Ω_s ≡ e |Z_s|/m_s c satisifies Ω_s τ_s ≫ 1, where τ_s is the collision time of species s. In such a plasma, the macroscopic variation of the fluid moments is locally anisotropic with respect to B; L is the typical length scale of variation in the directionlocally parallel to B. It can then be shown that, to first order of the Chapman-Enskog expansion in λ_s/L ≪ 1, and to zeroth order in ρ_s/λ_s≪ 1, the CE distributionfunctions of the electrons and ions aref_e(ṽ_e,ṽ_e) =n_e/v_the^3 ^3/2 exp(-ṽ_e^2)×{1+[η_e^T A_e^T(ṽ_e) + η_e^R A_e^R(ṽ_e) + η_e^u A_e^u(ṽ_e) ] ṽ_e+ ϵ_e C_e(ṽ_e) (ṽ_e^2- ṽ_e^2/2)} , f_i(ṽ_i,ṽ_i) =n_i/v_thi^3 ^3/2 exp(-ṽ_i^2)×{1+η_i A_i(ṽ_i) ṽ_i + ϵ_i C_i(ṽ_i) (ṽ_i^2- ṽ_i^2/2 )} . Let us define the various symbols employed in (<ref>),before discussing the origin of these expressions and their significance for formulating fluid equations (see section <ref>). The particle velocity v (with the corresponding speed v = |v|) is split into components parallel and perpendicular to the macroscopic magnetic field B = B ẑ as v = v_ẑ + v_, and the perpendicular plane is in turn characterised by two vectors x̂ and ŷ chosen so that {x̂,ŷ,ẑ} is an orthonormal basis. The perpendicular velocity is related to these basis vectors by the gyrophase angle ϕ:v_ = v_(cosϕ x̂ - sinϕ ŷ) .The non-dimensionalised peculiar velocity ṽ_s in the rest frame of the ion fluid is defined by ṽ_s ≡ (v-V_i)/v_ths, ṽ_s ≡|ṽ_s|, ṽ_s≡ẑṽ_s, and ṽ_s≡ |ṽ_s-ṽ_sẑ|. The number densities satisfy the quasi-neutrality conditionZ n_i = n_e ,where we have utilised Z_e = -1, and defined Z ≡ Z_i. We emphasise that n_s,{x̂,ŷ,ẑ} and v_ths all vary over length scales L in the plasma, butnot on shorter scales (at least not in the direction locally parallel to B). The functions A_e^T(ṽ_e), A_e^R(ṽ_e), A_e^u(ṽ_e), C_e(ṽ_e), A_i(ṽ_i) andC_i(ṽ_i) are isotropic functions. Their magnitude is O(1) when ṽ_e ∼1 or ṽ_i ∼ 1, for electrons and ions respectively. 
Finally, the parameters η_e^T, η_e^R, η_e^u, η_i, ϵ_e and ϵ_i are defined as follows:
η_e^T = λ_e ∇_∥logT_e = sgn(∇_∥logT_e) λ_e/L_T ,
η_e^R = λ_e R_e∥/p_e ,
η_e^u = λ_e m_e u_ei∥/(T_e τ_e) ,
η_i = λ_i ∇_∥logT_i = sgn(∇_∥logT_i) λ_i/L_T_i ,
ϵ_e = (λ_e/v_the) (ẑẑ - 1/3 I):W_e = sgn[(ẑẑ - 1/3 I):W_e] (V_e/v_the) (λ_e/L_V_e) ,
ϵ_i = (λ_i/v_thi) (ẑẑ - 1/3 I):W_i = sgn[(ẑẑ - 1/3 I):W_i] (V_i/v_thi) (λ_i/L_V) ,
where λ_e is the electron mean free path, λ_i the ion mean free path, τ_e the electron collision time, R_e∥ ≡ ẑ·R_e the parallel electron friction force, u_ei ≡ V_e - V_i the relative electron-ion velocity, u_ei∥ ≡ ẑ·u_ei, W_s = ∇V_s + (∇V_s)^T - 2/3 (∇·V_s) I the traceless rate-of-strain tensor of species s, V_e (V_i) the bulk electron-(ion-)fluid speed, and
L_T ≡ |∇_∥logT_e|^-1 , L_T_i ≡ |∇_∥logT_i|^-1 , L_V_e ≡ (1/V_e) |(ẑẑ - 1/3 I):W_e|^-1 , L_V ≡ (1/V_i) |(ẑẑ - 1/3 I):W_i|^-1 ,
are, respectively, the electron- and ion-temperature and the electron- and ion-flow length scales parallel to the background magnetic field. The mean free paths are formally defined for a two-species plasma by λ_e ≡ v_theτ_e , λ_i ≡ v_thiτ_i , and the collision times τ_e and τ_i are given in terms of macroscopic plasma parameters by
τ_e ≡ 3 m_e^1/2 T_e^3/2/(4 √(2π) Z_i^2 e^4 n_i logΛ_CL) , τ_i ≡ 3 m_i^1/2 T_i^3/2/(4 √(2π) Z_i^4 e^4 n_i logΛ_CL) ,
where logΛ_CL is the Coulomb logarithm <cit.>[Braginskii defined his ion collision time as equal to (<ref>b) multiplied by a factor of √(2); for the sake of species equality, we remove this factor.]. In a collisional plasma, η_e^T, η_e^R, η_e^u, η_i, ϵ_e and ϵ_i are assumed small. We note that all these parameters can be either positive or negative, depending on the orientation of temperature and velocity gradients.
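For orientation, the collision times and mean free paths defined above can be evaluated for representative plasma conditions. The sketch below does this for an illustrative set of ICM-like parameters (the density, temperature and magnetic-field values are our own assumptions, not numbers taken from the text) and checks the strong-magnetisation ordering Ω_e τ_e ≫ 1.

```python
import numpy as np

# cgs constants
e, m_e, c, k_B = 4.8032e-10, 9.1094e-28, 2.9979e10, 1.3807e-16

def tau_e(n_i, T_e, Z=1.0, logLambda=30.0):
    """Electron collision time, tau_e = 3 m_e^(1/2) T_e^(3/2) /
    (4 sqrt(2 pi) Z^2 e^4 n_i logLambda), with T_e in erg."""
    return (3.0 * np.sqrt(m_e) * T_e**1.5
            / (4.0 * np.sqrt(2.0 * np.pi) * Z**2 * e**4 * n_i * logLambda))

# Illustrative ICM-like parameters (assumed): n_i = 1e-3 cm^-3, T = 5e7 K, B = 1 microgauss
n_i, T, B = 1.0e-3, 5.0e7 * k_B, 1.0e-6
v_the = np.sqrt(2.0 * T / m_e)
te = tau_e(n_i, T)
lam_e = v_the * te
Omega_e = e * B / (m_e * c)
print(f"tau_e ~ {te:.1e} s, lambda_e ~ {lam_e:.1e} cm, Omega_e*tau_e ~ {Omega_e*te:.1e}")
# Omega_e*tau_e >> 1, so such a plasma is strongly magnetised (rho_e << lambda_e).
```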
The small parameters η_e^T, η_e^R, η_e^u, η_i, ϵ_e andϵ_i, as well as the isotropic functions A_e^T(ṽ_e), A_e^R(ṽ_e), A_e^u(ṽ_e), C_e(ṽ_e), A_i(ṽ_i) andC_i(ṽ_i) emerge during this calculation. The precise forms of these functions depend only on the collision operator assumed in the original Maxwell-Vlasov-Landau system; in appendix <ref>, we provide a simple illustration of this, by calculating these isotropic functions explicitly forKrook <cit.> and Lorentz collision operators (Appendices <ref> and <ref>, respectively).For the full Landau collision operator,the equivalent calculation is more complicated, but can be performed (for example) by expanding the isotropic functions in Sonine polynomials <cit.>. §.§.§ Closure of fluid equations (<ref>) Once the CE distribution function has been calculated, the desired fluid closure can be obtained by evaluating the heat fluxes, the friction forces, and the momentum fluxes (<ref>) associated with the non-Maxwelliancomponents of the CE distribution functions. Thesecalculations were carried out in full for arbitrary values of ρ_s/λ_s by <cit.>. We do not reproduce the full fluid closure relations here; instead, we illustrate how the non-Maxwellian termsin the CE distribution functions (<ref>) give rise to the friction force and heat fluxesparallel to the macroscopic magnetic field, as well as to the viscosity tensor. In a stronglymagnetised two-species plasma (where ρ_s ≪λ_s), parallel friction forces and heat fluxesare typically much larger than their perpendicular or diamagnetic counterparts.* Heat fluxes. Recalling (<ref>c), the parallel heat flux q_s≡ẑq_s associated with species s is given byq_s = 1/2∫d^3v_s' m_s |v_s'|^2 v_s'f_s ,where v_s' ≡v-V_s. Noting that the electron distribution function (<ref>a) is specified in the rest frame of the ions, not electrons,it is necessary first to calculate the electron distribution function in the electron rest framebefore calculating the parallel electron heat flux. An expression for this quantity is given by (<ref>) in appendix<ref> as part of our derivation of (<ref>a):f_e(v_e',v_e') =n_e/v_the^3 ^3/2exp(-|v_e'|^2/v_the^2)×{1 + [η_e^T A_e^T(|v_e'|/v_the) +η_e^R A_e^R(|v_e'|/v_the) + η_e^u (A_e^u(|v_e'|/v_the) -1 ) ] v_e'/v_the+ ϵ_e C_e(|v_e'|/v_the) (v_e'^2/v_the^2 - v_e'^2/2 v_the^2) },Now substituting (<ref>)into (<ref>) (with s = e), we find that theparallel electron heat flux isq_e = - n_e T_e v_the[𝒜_e^T η_e^T + 𝒜_e^R η_e^R + (𝒜_e^u - 1/2) η_e^u],where𝒜_e^T,R,u = -4/3√()∫_0^∞dṽ_eṽ_e^6 A_e^T,R,u(ṽ_e)exp(-ṽ_e^2) .The minus signs in the definitions of 𝒜_e^T,R,uhave been introduced so that 𝒜_e^T,R,u≥ 0 for a typicalcollision operator (determining that these constants are indeed positive for any givencollision operator is non-trivial, but it is a simple exercise to show this for aKrook collision operator, using the expressions for A_e^T(ṽ_e), A_e^R(ṽ_e), and A_e^u(ṽ_e) given in appendix <ref>). Expression (<ref>) for the electron heat flux can be rewritten asq_e =- κ_e^∇_ T_e - [ 𝒜_e^u - 1/2 - 𝒜_e^R/𝒜̃_e^R(𝒜̃_e^u-1/2)]n_e T_e u_ei,where the parallel electron heat conductivity is defined byκ_e^ = 2 (𝒜_e^T - 𝒜_e^R/𝒜̃_e^R𝒜̃_e^T) n_e T_e τ_e/m_e,and 𝒜̃_e^T,R,u = -4/3√()∫_0^∞dṽ_eṽ_e^4 A_e^T,R,u(ṽ_e)exp(-ṽ_e^2).Numerical evaluation of the coefficients 𝒜_e^T,R,u and 𝒜̃_e^T,R,u for the Landau collision operatorgives <cit.>q_e≃- 3.16 n_e T_e τ_e/m_e∇_ T_e + 0.71 n_e T_e u_ei. 
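As a concrete check of this closure, the following minimal sketch evaluates the parallel electron heat flux from the Landau-operator coefficients quoted above; the function name and the numerical inputs (density, temperature, collision time, gradient scale) are illustrative placeholders, not values used elsewhere in this paper.

import numpy as np

# Parallel electron heat flux from the closure quoted above (Gaussian cgs):
#   q_par = -3.16 n_e T_e tau_e / m_e * grad_par(T_e) + 0.71 n_e T_e u_ei_par
m_e = 9.109e-28   # g

def q_e_parallel(n_e, T_e, tau_e, grad_par_Te, u_ei_par):
    kappa_par = 3.16 * n_e * T_e * tau_e / m_e   # parallel heat conductivity
    return -kappa_par * grad_par_Te + 0.71 * n_e * T_e * u_ei_par

# Illustrative: n_e = 1e-2 cm^-3, T_e = 5 keV, tau_e = 1e12 s, L_T = 10 kpc
T_e = 5e3 * 1.602e-12
print(q_e_parallel(1e-2, T_e, 1e12, T_e / 3e22, 0.0), "erg cm^-2 s^-1")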
The ion heat flux can be calculated directly from (<ref>) (s = i) using (<ref>b):q_i =- n_i T_i v_thi𝒜_i η_i ,where 𝒜_i = -4/3√()∫_0^∞dṽ_iṽ_i^6 A_i(ṽ_i)exp(-ṽ_i^2) .This becomesq_i = - κ_i^∇_ T_i ,where the parallel ion heat conductivity isκ_i^ = -2 𝒜_i n_i T_i τ_i/m_i≃ - 3.9 n_i T_i τ_i/m_i.The last equality is for the Landau collision operator <cit.>.Note that the absence of a term proportional to the electron-ion-drift in the ion heat flux (<ref>) is physically due to the smallness ofthe ion-electron collision operator <cit.>. * Friction force.We evaluate the friction force by considering the electron-ion-drift associated with electron CE distribution function.Namely, noting that u_ei = v_the^4/n_e∫d^3 ṽ_eṽ_e f_e ,it follows from (<ref>a) that u_ei = v_the(𝒜̃_e^T η_e^T + 𝒜̃_e^R η_e^R +𝒜̃_e^u η_e^u ) .This expression can in turn be used to relate theparallel electron-friction force R_e, defined in (<ref>d), to electron flows and temperaturegradients:R_e = -(2 𝒜̃_e^u + 1/2 𝒜̃_e^R) n_e m_e u_ei/τ_e- 𝒜̃_e^T/𝒜̃_e^R n_e ∇_ T_e .Evaluating the coefficients 𝒜̃_e^T, 𝒜̃_e^R and 𝒜̃_e^u for the full Landau collision operator, onefinds <cit.>R_e≃ -0.51 n_e m_e u_ei/τ_e- 0.71 n_e ∇_ T_e .* Viscosity tensor.For gyrotropic distributions such as the CE distribution functions (<ref>), the viscosity tensor π_sof species s defined by (<ref>b) – which is the momentum flux excluding the convective terms and isotropic pressure– is given byπ_s = (p_s - p_s⊥) (ẑẑ-1/3I),where the parallel pressure p_s and the perpendicular pressure p_s aredefined byp_s ≡∫d^3 v_s' m_s |v_s'|^2 f_s = n_s T_s (1-2/3ϵ_s 𝒞_s) , p_s ≡1/2∫d^3 v_s'm_s |v_s'|^2 f_s = n_s T_s (1+1/3ϵ_s 𝒞_s) , with the last expressions having being obtained on substitution of the CE distribution function (<ref>),and 𝒞_s = -8/5√()∫_0^∞dṽ_sṽ_s^6 C_s(ṽ_s)exp(-ṽ_s^2) .The sign of the constant 𝒞_s is again chosen so that 𝒞_s > 0 for typicalcollision operators; for the Landau collision operator, 𝒞_e ≃ 1.1 and 𝒞_i ≃ 1.44 <cit.>. We note for reference that the parameter ϵ_s [see (<ref>e-f)] has a simple relationship to the pressure anisotropy of species s: utilising (<ref>),one findsΔ_s ≡p_s-p_s/p_s = 𝒞_s ϵ_s . Using (<ref>), the viscosity tensor (<ref>) canbe writtenπ_s = - μ_vs/2(ẑẑ - 1/3I)(ẑẑ - 1/3I) :W_s,where the dynamic viscosity of species s is μ_vs≡ 2 𝒞_s n_s T_s τ_s .* Thermal energy transfer between species. It can be shown that forthe CE distribution functions (<ref>), the rate of thermal energy transfer from electrons to ions 𝒬_e is simply 𝒬_e = - R_e u_ei, while the rate of thermal energy transfer from ions to electrons vanishes: 𝒬_i ≈ 0. This is because the ion-electron collision rate is assumed small (by a factor of the mass ratio) compared to the ion-ion collision rate when deriving (<ref>b), and is thus neglected. <cit.> shows that, in fact,there is a non-zero (but small) rate of transfer: 𝒬_i = -𝒬_e - R_e u_ei= 3 n_e m_e/m_i τ_e(T_e-T_i) . The time scale on which the ion and electron temperatures equilibrate is the ion-electron temperature equilibration timeτ_ie^ eq≡1/2μ_e^-1/2τ_i . In summary, the non-Maxwellian components of the CE distribution function areessential for a collisional plasma to be able to support fluxes of heatand momentum. More specifically,(<ref>) demonstrates that the electron heat fluxes in a CE plasma are proportional to both temperature gradients andelectron-ion drifts, and are carried by the electron-temperature-gradient,friction and electron-ion-drift terms of the CE distribution function. 
In contrast, the ionheat fluxes (<ref>) are proportional only to ion temperature gradients (and carried bythe CE ion-temperature-gradient term). Momentum fluxes (<ref>) for electrons and ionsare carried by the CE electron- and ion-shear terms, respectively, and areproportional to components of the rate-of-strain tensor. §.§.§ Relative size of non-Maxwellian terms in the CE distribution function In the case of magnetised, two-species plasma satisfying T_i ∼T_e, (<ref>) can be used to estimate the size ofthe small parameters η_e^T, η_e^R, η_e^u, η_i, ϵ_e andϵ_i. Although these parameters are a priori proportional to λ_s/L for both ions and electrons, theirprecise magnitudes are, in fact, subtly different. Namely, the terms associatedwith η_e^T, η_e^R, η_e^u and η_i are gradients of the electron and ion temperatures and electron-ion relative parallel drift velocities, whereas terms associated with ϵ_e and ϵ_iinvolve gradients of the bulk flows [cf. (<ref>)] – and these gradients do not necessarily occur on the samelength scale. Recalling that the (electron) temperature and the (ion) flow length scales parallel to the macroscopic magnetic field are defined by [cf. (<ref>)]L_T = |∇_ logT_e |^-1 , L_V = 1/V_i|(ẑ ẑ - 1/3I ):W_i |^-1 , where W_i is the ion rate-of-strain tensor (<ref>), and assuming that L_T_i = (∇_log T_i )^-1∼ L_T (an assumption we will check a posteriori),it follows from (<ref>) that η_e^T ∼ λ_e/L_T, η_e^R ∼λ_e R_e/p_e ∼λ_e/L_T ∼η_e^T, η_e^u ∼u_ei/v_the ∼λ_e/L_T ∼η_e^T, η_i ∼λ_i/L_T ∼1/Z^2 η_e^T , ϵ_e ∼ V_i/v_the λ_e/L_V ∼Maμ_e^1/2 L_T/L_V η_e^T,ϵ_i ∼V_i/v_thi λ_i/L_V ∼Ma L_T/Z^2 L_V η_e^T,where Ma≡ V_i/v_thi is the Mach number. Note that, to arrive at (<ref>b), we assumed that R_e∼ p_e/L_T and u_ei∼ v_theλ_e/L_T, justified by (<ref>) and (<ref>), respectively. The relative magnitudes ofη_e^T, η_e^R, η_e^u, η_i, ϵ_e andϵ_i therefore depend on the Mach number of the plasma, as well as on the length scales L_T and L_V. In the work of <cit.>, who a priori presumes all “fluid” quantities in the plasma to vary on just a single scale L ∼ L_T ∼ L_V,with sonic ordering Ma≲ 1, determining the relative size of these parameters for a hydrogen plasma (Z = 1) is simple:ϵ_e ∼μ_e^1/2ϵ_i ≪ϵ_i ∼η_i ∼η_e^T ∼η_e^R ∼η_e^u . However, in most interesting applications, this single-scale ordering is incorrect. In a plasma with λ_s/L ≪ 1 under Braginskii's ordering,motions on many scales naturally arise. The fluid Reynoldsnumber in such a plasma is given by Re≡V_0 L_0/ν,where V_0 is the typical fluid velocity at the scale L_0 of driving motions and ν≡μ_vi/m_i n_i ∼ v_thiλ_iis the kinematic viscosity [see (<ref>)]. Typically, this number is large:Re∼V_0/v_thiL_0/λ_i≳1/ϵ_i≫ 1 ,where we have assumed Ma_0 ≡ V_0/v_thi≲ 1, in line with Braginskii's sonic ordering. Therefore, such a plasma will naturally become turbulent and exhibit motions across a range of scales. As a consequence, velocity and temperature fluctuations on the smallest (fluid) scales must be considered, since the associated shears and temperature gradients are the largest.To estimate η_e^T, η_e^R, η_e^u, η_i, ϵ_e andϵ_i accurately, we must determine the magnitude of these gradients.First, let ℓ_ν be the smallest scale on which the velocity varies due to turbulent motions (the Kolmogorov scale), with velocity fluctuations on scales ℓ≪ℓ_ν being suppressed by viscous diffusion.Then it follows that Re_ℓ_ν∼ 1, where Re_ℓ≡ V(ℓ) ℓ/νis the scale-dependent Reynolds number and V(ℓ) is the typical fluid velocity on scaleℓ. 
For Kolmogorovturbulence, V(ℓ)/V_0∼(ℓ/L_0)^1/3∼(Re_ℓ/Re)^1/4,and ℓ/L_0 ∼(Re_ℓ/Re)^3/4, which gives V(ℓ)/ℓ∼ (V_0/L_0) (Re_ℓ/Re)^-1/2, and thus, from (<ref>),V(ℓ_ν)/ℓ_ν∼V_0/L_0(Re_ℓ_ν/Re)^-1/2∼Ma_0^1/2(λ_i/L_0)^-1/2V_0/L_0.We therefore conclude that L_V ∼ℓ_νV_0/V(ℓ_ν)∼ L_0 Ma_0^-1/2(λ_i/L_0)^1/2 . Next, the smallest scale on which the electron temperature varies, ℓ_χ,is the scale below which temperature fluctuations are suppressed bythermal diffusion; it satisfies Pe_ℓ_χ∼ 1, where Pe_ℓ≡ V(ℓ) L/χis the scale-dependent Péclet number and χ≡ 2 κ_e^/3 n_e ∼ v_theλ_e is the (parallel) thermal diffusivity [see (<ref>)].Because temperature is passively advected by the flow, the temperature fluctuation T(ℓ) at any scale ℓ > ℓ_χobeys the same scaling as the bulk velocity: T(ℓ)/T(L_0)∼V(ℓ)/V_0∼(Pe_ℓ/Pe)^1/4.In addition, the magnitude of temperature fluctuations at the driving scale isrelated to the mean temperature by the Mach number of the driving-scale motions, T(L_0) ∼ T_0 Ma_0, which then givesT(ℓ)/T_0∼Ma_0 (Pe_ℓ/Pe)^1/4,where Pe≡Pe_L_0. It follows from an analogous argument to that just given for the velocity fluctuations thatT(ℓ_χ)/ℓ_χ∼T_0/L_0Ma_0 Pe^1/2 . Under Braginskii's ordering, the Prandtl number of CEplasma isPr≡ν/χ = Pe/Re∼v_thiλ_i/v_theλ_e∼μ_e^1/2≪ 1 ,and, therefore,L_T ∼ℓ_χT_0/T(ℓ_χ)∼ L_0 μ_e^-1/4Ma_0^-3/2(λ_i/L_0)^1/2 .Thus, L_V ∼Ma_0 μ_e^1/4 L_T ≪ L_T under the assumed ordering.Finally, we consider whether our a priori assumption that L_T_i∼ L_T is, in fact, justified. A sufficient condition for ion-temperature gradientsto be the same as electron-temperature gradients is for the evolution time τ_L of all macroscopic motions to be much longer than the ion-electron temperature equilibration timeτ_ie^ eq defined by (<ref>). Since τ_L≳ℓ_ν/V(ℓ_ν), it follows thatτ_L/τ_ie^ eq∼(m_i/m_e)^1/2Ma_0^3/2(λ_i/L_0)^1/2∼ϵ_i (m_i/m_e)^1/2.Thus, if ϵ_i ≫μ_e^1/2, we conclude that collisional equilibration of ion and electron temperatures might be too inefficient to regulate small-scale ion-temperature fluctuations, in which case it would follow that L_T_i < L_T. However, it has been previously demonstratedvia numerical solution of the Vlasov-Fokker-Planck equation that the CE expansion procedure breaks down due tononlocal transport effects if λ_e/L is only moderately small <cit.>; thus, the only regime in which there is not ion-electron equilibration over all scales is one where the CE expansion is not valid anyway. In short, we conclude that assuming L_T_i∼ L_T is reasonable.Bringing these considerations together with (<ref>), we find thatη_e^T ∼μ_e^1/4 Ma_0 λ_i/L_V ∼Ma_0^3/2 μ_e^1/4 (λ_i/L_0)^1/2 ∼η_e^R ∼η_e^u ∼η_i ,ϵ_e ∼μ_e^1/2 Ma_0 λ_i/L_V∼μ_e^1/2 Ma_0^3/2 (λ_i/L_0)^1/2 , ϵ_i ∼Ma_0 λ_i/L_V ∼Ma_0^3/2 (λ_i/L_0)^1/2. Thus, we conclude that the largest distortions of the ion CE distribution are due toflow gradients, while temperature gradients cause the greatest distortions of theelectron CE distribution function. §.§ Kinetic stability of classical, collisional plasma §.§.§ Overview We have seen that the CE expansion provides a procedure for the calculation ofthe distribution functions arising in a classical, collisional plasma in terms of gradients oftemperature, electron-ion drifts and bulk fluid velocities; these calculations in turn allow for theclosure of the system (<ref>) of fluid equations. However, these same gradients aresources of free energy in the plasma, so they can lead to instabilities. 
Some of these instabilities will be `fluid', i.e., they are captured within the CE description and are features of the fluid dynamics of plasmas; others are kinetic (`microinstabilities'), and their existence implies that the CE expansion is, in fact, illegitimate. Our primary purpose in this paper is to determine when such microinstabilities do not occur in a strongly magnetised two-species plasma. If, however, they do occur, we wish to determine their growth rates. We begin by making a few general qualitative comments concerning the existence and nature of these microinstabilities, before presenting the technical details of their derivation.

§.§.§ Existence of microinstabilities in classical, collisional plasma

It might naively be assumed that a classical, collisional plasma is kinetically stable, on two grounds. The first of these is that the distribution function of such a plasma is `almost' Maxwellian, and thus stable. While it is certainly the case that a plasma whose constituent particles have Maxwellian distribution functions is kinetically stable <cit.>, it is also known that a plasma with anisotropic particle distribution functions is typically not <cit.>. The (small) non-Maxwellian component of the CE distribution function is anisotropic (as, e.g., was explicitly demonstrated by the calculation of pressure anisotropy in section <ref>), and thus we cannot a priori rule out microinstabilities associated with this anisotropy. The second naive reason for dismissing the possibility of microinstabilities in classical, collisional plasma is the potentially stabilising effect of collisional damping on microinstability growth rates. If collisional processes are sufficiently dominant to be responsible for the mediation of macroscopic momentum and heat fluxes in the plasma, it might be naively inferred that they would also suppress microinstabilities. This is, in fact, far from guaranteed, for the following reason. The characteristic scales of the microinstabilities are not fluid scales, but are rather intrinsic plasma length scales related to quantities such as the Larmor radius ρ_s or the inertial scale d_s of species s, or the Debye length λ_D – quantities given in terms of macroscopic physical properties of plasma by

ρ_s = m_s v_ths c/Z_s e |B| , d_s ≡ (4π Z_s^2 e^2 n_s/m_s c^2)^-1/2 = ρ_s β_s^-1/2 , λ_D ≡ (∑_s 4π Z_s^2 e^2 n_s/T_s)^-1/2 = (∑_s 2 c^2/d_s^2 v_ths^2)^-1/2 ,

where β_s ≡ 8π n_s T_s/B^2 is the plasma beta of species s. The crucial observation is then that the dynamics on characteristic microinstability scales may be collisionless. For a classical, collisional hydrogen plasma (where λ ≡ λ_e ∼ λ_i for T_e ∼ T_i), the mean free path is much larger than the Debye length: λ/λ_D ∼ n_e λ_D^3 ≫ 1; so there exists a range of wavenumbers k on which microinstabilities are both possible (k λ_D ≲ 1) and collisionless (k λ ≫ 1). For a strongly magnetised collisional plasma, λ_s ≫ ρ_s for all species by definition; thus, any microinstability with a characteristic scale comparable to the Larmor radius of any constituent particle will be effectively collisionless.
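The scale hierarchy just described can be checked numerically; the sketch below (Gaussian cgs units, illustrative ICM-like parameter values of our choosing) evaluates the Larmor radii, inertial scales and Debye length from the definitions above, so that the window of wavenumbers that is both "possible" and collisionless can be read off directly.

import numpy as np

# Characteristic microinstability scales (Gaussian cgs), using the
# definitions above; the input values are illustrative only.
e, c = 4.803e-10, 2.998e10
m_e, m_p = 9.109e-28, 1.673e-24

def scales(n_i, T, B, Z=1.0, m_i=m_p):
    out = {}
    for name, (m, q, ns) in {"e": (m_e, e, Z * n_i), "i": (m_i, Z * e, n_i)}.items():
        v_th = np.sqrt(2.0 * T / m)
        out["rho_" + name] = m * v_th * c / (q * B)                       # Larmor radius
        out["d_" + name] = np.sqrt(m * c**2 / (4 * np.pi * ns * q**2))    # inertial scale
    out["lambda_D"] = np.sqrt(T / (4 * np.pi * n_i * e**2 * Z * (1 + Z))) # Debye length
    return out

s = scales(n_i=1e-2, T=5e3 * 1.602e-12, B=1e-6)   # n_i [cm^-3], T [erg], B [G]
for key, val in s.items():
    print(f"{key:9s} = {val:.3e} cm")
# A mode with lambda_D << 1/k << lambda_e is both possible and collisionless.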
We note that such a range of collisionless wavenumbers only exists in classical (viz., weakly coupled) plasmas; in strongly coupled plasmas, for which λ≲λ_D, all hypothetically possible microinstability wavenumber scales are collisional.Thus the phenomenon of microinstabilities in collisional plasmas is solely a concern for the classicalregime.§.§.§ A simple example: the firehose instability in CE plasmas Perhaps the simplest example of a microinstability that can occur in CE plasma is the firehose instability. This example was previously discussedby <cit.>, but we nonetheless outline it here to illustrate thecentral concept of our paper. Consider bulk fluid motions of the plasma on length scales L_V that are much smallerthan the mean free path λ_i, but much larger than the ion Larmor radiusρ_i; the characteristic frequencies associated with these motions are assumed to be much smaller that the ion Larmorfrequency Ω_i, but much larger than the inverse of the ion collision time τ_i^-1.Under these assumptions,the following four statements can be shown to be true <cit.>:* The bulk velocities of the electron and ion species are approximately equal: V_e ≈V_i.* The electric field in a frame co-moving with the ion fluid vanishes; transforming to the stationary frame of the system, this gives E = -V_i ×B/c.* The contribution of the displacement current to the Maxwell-Ampère law (<ref>d) is negligible, and so e n_e (V_i - V_e ) ≈c/4 ×B.* The electron and ion viscosity tensors both take the form (<ref>), and the electron pressure anisotropy, defined by (<ref>), is small compared to the ion pressure anisotropy: Δ_e ≪Δ_i.It then follows directly from (<ref>b), summed over both ion and electron species, that m_i n_i DV_i D t|_i = -(B^2/8+ p_e⊥ + p_i⊥) -[ ẑẑ(p_i⊥-p_i)] + BB/4 .We remind the reader that ẑ = B/B, and emphasize that we have neglectedthe electron inertial term on the grounds that it is small compared to the ion inertial term:m_e n_e DV_e D t|_e ≪ m_i n_i DV_i D t|_i . Theevolution of the magnetic field is described by the induction equation, DBD t|_i = BV_i -BV_i,which is derived by substituting (<ref>) into Faraday's law (<ref>c). Now consider small-amplitude perturbations with respect to a particular macroscale state of theplasma δV_i =δV_i⊥ exp{i(k r - ωt)},δB =δB_⊥ exp{i(k r - ωt)},whose characteristic frequency ω is much greater than that of the plasma's bulk fluid motions (but is still much smaller than Ω_i),whose wavevector k = k_ẑ is parallel toB, and assume also that the velocity and magnetic-field perturbations areperpendicular to B. It is then easy to show that (<ref>) and (<ref>)become- i m_i n_i ωδV_i⊥=i (B_0^2/4+ p_i⊥ - p_i) k_δB_⊥ /B, - i ωδB_⊥ =i B k_ δV_i⊥ ,where p_i⊥ and p_i are the perpendicular and parallel ion pressuresassociated with the macroscale state (which, on account of its comparatively slow evolution compared to the perturbation, can beregarded a quasi-equilibrium). Physically, the macroscale flow gives riseto different values of p_i⊥ and p_i, and thereby an ion pressureanisotropy Δ_i, because it changes the strength B of the macroscale magneticfield; thanks to the effective conservation of the first and second adiabaticmoments of the ions on the evolution timescale of the macroscale flow <cit.>,an increase (decrease) in B results in an increase (decrease) in p_i⊥,and a decrease (increase) in p_i. The dispersion relation for the perturbation is thenω^2 = k_^2 v_thi^2 (1/β_i + Δ_i/2) ,where β_i, defined by (<ref>), is the ion plasma beta. 
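A minimal numerical sketch of this dispersion relation is given below (in Python); the β_i and Δ_i values are arbitrary illustrations, and, as the next paragraph explains, growth requires Δ_i < -2/β_i. Note that within this fluid-level relation the growth rate increases without bound with k_∥; kinetically, as discussed below, peak growth instead occurs at k_∥ρ_i ∼ β_i^-1/2.

import numpy as np

# Parallel-firehose dispersion relation quoted above:
#   omega^2 = k_par^2 v_thi^2 (1/beta_i + Delta_i/2).
# Returns the growth rate (positive imaginary part of omega), zero if stable.
def firehose_growth_rate(k_par, v_thi, beta_i, Delta_i):
    omega_sq = k_par**2 * v_thi**2 * (1.0 / beta_i + 0.5 * Delta_i)
    return np.sqrt(-omega_sq) if omega_sq < 0.0 else 0.0

beta_i, Delta_i, v_thi = 1e4, -3e-4, 1.0     # unstable: Delta_i < -2/beta_i
for k_par in (1e-3, 1e-2, 1e-1):
    print(k_par, firehose_growth_rate(k_par, v_thi, beta_i, Delta_i))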
For a sufficiently negative ion pressure anisotropy, viz., Δ_i < -2/β_i, the perturbation is unstable. This instability is known as the (parallel) firehose instability.The underlying physics of the parallel firehose instability has been discussed extensivelyelsewhere <cit.>.Here, we simply note that the firehose instability arises in a magnetised plasma with sufficiently negative pressure anisotropy as compared to the inverse of the ion plasma beta; because the ion CE distributionfunction has a small, non-zero pressure anisotropy, this statement applies to CE plasma atlarge β_i. We also observe that the product of the growth rate (<ref>) of the firehose instability with the ion-ion collision timesatisfiesωτ_i ∼ k_λ_i |1/β_i + Δ_i |^1/2∼1/β_iλ_i/ρ_i,where we have assumed that Δ_i ≲ 2 β_i^-1, and employed the (non-trivial) result that the peak growth of the parallel firehose instability occurs at wavenumbers satisfying k_ρ_i ∼β_i^-1/2 (see sections <ref> and<ref>). Thus, if β_i ≪λ_i/ρ_i– a condition easily satisifed in weakly collisional astrophysical environments such as the ICM (see table <ref>) – it follows that ωτ_i ≫ 1, and so collisional damping is unable to inhibit the parallelfirehose in a CE plasma[In fact, the naive condition γτ_i ≲ 1is not sufficient to ensure collisional stabilisation of the firehoseinstability; the true stabilisation condition is instead k_λ_i ≲ 1(see section <ref> for a discussion of this claim).]. Thisfailure is directly attributable to its characteristic wavelength being atcollisionless scales: the parallel wavenumber satisfies k_λ_i ∼β_i^-1/2λ_i/ρ_i ≫1. This simple example clearly illustrates that microinstabilities are indeed possible in a classical, collisional plasma,for precisely the reasons given in section <ref>.§.§.§ Which microinstabilities are relevant Although the naive arguments described in section <ref> do not imply kineticstability of CE plasma, these same arguments do lead to significant restrictionson the type of microinstabilities that can arise. Namely, for some plasma modes,the small anisotropy of CE distribution functions is an insufficient free-energysource for overcoming the competing collisionless damping mechanisms that ensure stability for pure Maxwellian distribution functions – e.g., Landau damping orcyclotron damping. For other plasma modes, the characteristic length scalesare so large that collisional damping does suppress growth. In magnetised plasmas, there alsoexist cyclotron harmonic oscillations that, despite minimal damping, can only become unstable for sufficiently large anisotropy of the particle distribution function: e.g., the electrostatic Harris instability <cit.>. Since the anisotropy threshold for such microinstabilities is typically Δ_s ≳ 1 <cit.>, they cannot operate in a CE plasma. We claim that there are only two classes of microinstabilities that can be triggered in a CE plasma.The first are quasi-cold plasma modes: these are modes whose frequency isso large that resonant wave-particle interactions (Landau or cyclotron resonances)only occur with electrons whose speed greatly exceeds the electron thermal speedv_the. Collisionless damping of such modes is typically very weak,and thus small anisotropies of particle distribution functions can be sufficient todrive an instability. 
Well-known examples of a small non-Maxwellian part of the distribution function giving rise to microinstabilities include the bump-on-tail instability associated with a fast beam of electrons <cit.>, or the whistler instability for small temperature anisotropies <cit.>. The existence of such instabilities for the CE distribution can be demonstrated explicitly: e.g., the peak growth rate of the bump-on-tail instability associated with the CE distribution function (`the CE bump-on-tail instability') is calculated in appendix <ref>. However, the growth rates γ of such instabilities are exponentially small in λ_e/L ≪ 1. This claim, which is explicitly proven for the CE bump-on-tail instability in appendix <ref>, applies to all electrostatic instabilities (see appendix <ref>), and it can be argued that it also applies to all quasi-cold plasma modes (see appendix <ref>). When combined with the constraint that the resonant wave-particle interactions required for such instabilities cannot occur if γτ_r ≲ 1, where τ_r is the collision time of the resonant particles, the exponential smallness of the growth rate suggests that such microinstabilities will not be significant provided λ_e/L really is small. As discussed in section <ref>, plasmas in which λ_e/L is only moderately small are not well modelled as CE plasmas anyway, and thus, for the rest of this paper, we will not study quasi-cold-plasma-mode instabilities.

The second class of allowed microinstabilities comprises modes that are electromagnetic and low-frequency in the sense that the complex frequency ω of the microinstability satisfies, for at least one particle species s,

ω/k v_ths ∼ (λ_s/L)^ι ≪ 1 ,

where ι is some order-unity number. Low-frequency electromagnetic modes are in general only subject to weak Landau and cyclotron damping (of order ω/k v_ths ≪ 1 or less), and thus can become unstable for small distribution-function anisotropies. By contrast, electromagnetic modes satisfying ω ∼ k v_ths would typically generate strong inductive electric fields, which would in turn be subject to significant Landau or cyclotron damping, overwhelming any unstable tendency. The firehose instability introduced in section <ref> is one example of this type of microinstability: it satisfies (<ref>) with ι = 1/2, provided its β-stabilisation threshold is surpassed. In this paper, we will focus on microinstabilities in this second class. Whilst small compared to the streaming rate k v_ths of species s, the growth rates satisfying (<ref>) can still be significantly larger than the rate at which the plasma evolves on macroscopic scales, and thus invalidate the CE expansion.

We do not in this paper present a rigorous proof that there are no microinstabilities of the CE distribution function which do not fall into either of the two classes considered above. However, there do exist more precise arguments supporting the latter claim than those based on physical intuition just presented; these are discussed further in sections <ref> and <ref>. The microinstabilities satisfying (<ref>) fall into two sub-classes.
The first sub-class consists of microinstabilities driven bythe CE temperature-gradient, CE electron-friction and CE electron-ion-drift terms in the CE distribution functions (<ref>); we refer tothese collectively as CE temperature-gradient-driven microinstabilities, or CET microinstabilities, on account of the parameters η_e^R and η_e^u scaling with temperature gradients (see section <ref>).The second sub-class is microinstabilities driven by the CE shear terms, orCE shear-driven microinstabilities (CES microinstabilities). This sub-classification isnecessary for two reasons. First, the velocity-space anisotropy associated with the CE shearterms is different from other non-Maxwellian terms, and thus different types ofmicroinstabilities can emerge for the two sub-classes. Secondly, as was discussed in section <ref> for the case of CE plasma, the typical size of small parameters η_e^T, η_e^R, η_e^u and η_iis different from that of ϵ_e and ϵ_i. In our initialoverview of our calculations (section <ref>) and in the more detailed discussion of our method (section <ref>),we will consider all microinstabilities driven by thenon-Maxwellian terms of the CE distribution together; however, when it comes topresenting detailed results, we will consider CET and CES microinstabilitiesseparately (sections <ref> and <ref>,respectively).§.§ Linear stability calculation: overview§.§.§ General dispersion relationOur linear kinetic stability calculation proceeds as follows: we consider an electromagnetic perturbation with wavevector k and (complex) frequency ω of the form δE =δE exp{i(k r - ωt)},δB =δB exp{i(k r - ωt)},in a plasma with the equilibrium electron and ion distribution functions given by (<ref>a) and (<ref>b), respectively. Weassume that all macroscopic parameters in the CE distribution function areeffectively constant on the time scales and length scales associated withmicroinstabilities: this is equivalent to assuming that k λ_e, k λ_i ≫ 1 (where k ≡ |k| is the wavenumber of the perturbation), and |ω| τ_L ≫ 1.To minimise confusion between quantities evolving on short, collisionless time scales, and those on long, fluid time scales,we relabel the equilibrium number density of species s as n_s0, and the macroscopic magnetic fieldas B_0 in subsequent calculations. For notational convenience, we defineη_e ≡η_e^T ,and A_e(ṽ_e) ≡ A_e^T(ṽ_e) + η_e^R/η_e^T A_e^R(ṽ_e)+ η_e^u/η_e^T A_e^u(ṽ_e),which in turn allows for the equilibrium distribution function of species s to bewritten asf_s0(ṽ_s,ṽ_s) = n_s0/v_ths^3 ^3/2exp(-ṽ_s^2) [1+η_s A_s(ṽ_s) ṽ_s + ϵ_s C_s(ṽ_s) (ṽ_s^2- ṽ_s^2/2)] .Finally, without loss of generality, we can set V_i = 0by choosing to perform the kinetic calculation in the frame of the ions; thus, ṽ_s =v/v_ths. It is well known <cit.> that the electric field of all linear electromagnetic perturbations in a collisionless, magnetisedplasma with equilibrium distribution function f_s0must satisfy [c^2 k^2/ω^2(k̂k̂-I)+ 𝔈]δE = 0 ,where k̂≡k/kis the direction of the perturbation, 𝔈≡I + 4 i/ωσthe plasma dielectric tensor, and σ the plasmaconductivity tensor.The hot-plasma dispersion relation is then given by[c^2 k^2/ω^2(k̂k̂-I)+ 𝔈]=0 .The conductivity tensor in a hot, magnetised plasma is best displayed in an orthogonal coordinate system with basis vectors {x̂,ŷ,ẑ} defined in terms of B_0 andk:ẑ≡B_0/B_0, x̂≡k_⊥/k_⊥≡k- k_ẑ/k_ , ŷ≡ẑ×x̂ ,where B_0 ≡ |B_0|, k_≡kẑ, and k_≡|k_⊥|. 
In this notation, k = k_ẑ + k_x̂.The conductivity tensor is then given byσ = ∑_s σ _s = - i/4 ω∑_s ω_ps^2 [ 2/√()k_/|k_|∫_-∞^∞dw̃_s w̃_s∫_0^∞dṽ_sΛ_s(w̃_s,ṽ_s) ẑẑ+ ω̃_s2/√()∫_C_Ldw̃_s∫_0^∞dṽ_sṽ_s^2 Ξ_s(w̃_s,ṽ_s) ∑_n = -∞^∞R_sn/ζ_sn -w̃_s] ,whereω_ps≡√(4Z_s^2 e^2 n_s0/m_s) ,w̃_s≡k_ṽ_s/|k_|, ρ̃_s ≡m_s c v_ths/Z_s e B_0 =|Z_s|/Z_sρ_s, ω̃_s≡ω/|k_| v_ths , ζ_sn≡ω̃_s - n/|k_| ρ̃_s , f̃_s0(ṽ_s,ṽ_s) ≡^3/2 v_ths^3/n_s0 f_s0(k_/|k_| v_thsw̃_s,v_thsṽ_s) , Λ_s(w̃_s,ṽ_s) ≡ṽ_sf̃_s0/w̃_s-w̃_sf̃_s0/ṽ_s, Ξ_s(w̃_s,ṽ_s) ≡f̃_s0/ṽ_s + Λ_s(w̃_s,ṽ_s)/ω̃_s ,(R_sn )_xx ≡n^2 J_n(k_ ρ̃_s ṽ_s)^2/k_^2 ρ̃_s^2 ṽ_s^2 ,(R_sn )_xy ≡i n J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s)/k_ ρ̃_s ṽ_s ,(R_sn )_xz ≡n J_n(k_ ρ̃_s ṽ_s)^2/k_ ρ̃_s ṽ_s k_ w̃_s/|k_| ṽ_s, (R_sn )_yx ≡- (R_sn )_xy , (R_sn )_yy ≡J_n'(k_ ρ̃_s ṽ_s)^2 , (R_sn )_yz ≡-i n J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) k_ w̃_s/|k_| ṽ_s ,(R_sn )_zx ≡(R_sn )_xz , (R_sn )_zy ≡-(R_sn )_yz , (R_sn )_zz ≡w̃_s^2/ṽ_s^2 J_n(k_ ρ̃_s ṽ_s)^2 . Here (R_sn)_xy = x̂R_snŷ, and similarlyfor other components of R_sn. For the reader's convenience, a summary of the derivation of the hot-plasmadispersion relation is given in appendix <ref>.We note that the dielectric and conductivity tensors have the following symmetries:𝔈_yx = - 𝔈_xy, 𝔈_zx = 𝔈_xz,𝔈_zy = - 𝔈_yz, σ_yx = - σ_xy, σ_zx = σ_xz, σ_zy = - σ_yz,where, for tensors with no species subscript, we use the notation 𝔈_xy≡x̂𝔈ŷ. We also observe that if f_s0(v_,v_) is an even function with respect to v_, then, for k_ > 0, σ_xx(-k_) = σ_xx(k_) , σ_xy(-k_) = σ_xy(k_) , σ_xz(-k_) = -σ_xz(k_) , σ_yy(-k_) = σ_yy(k_) , σ_yz(-k_) = -σ_yz(k_) , σ_zz(-k_) = σ_zz(k_) ,with the remaining components of the conductivity tensor given by equations(<ref>). If f_s0(v_,v_) is an odd function with respect to v_, thenσ_xx(-k_) = -σ_xx(k_) , σ_xy(-k_) = -σ_xy(k_) , σ_xz(-k_) =σ_xz(k_) , σ_yy(-k_) =-σ_yy(k_) , σ_yz(-k_) = σ_yz(k_) , σ_zz(-k_) =-σ_zz(k_) .These symmetries can be used to determine completely the behaviour of perturbations with k_ < 0directly from perturbations withk_ > 0, without any additionalcalculations. Thus, unless stated otherwise, from this point on, we assume k_ >0, and thus w̃_s = ṽ_s [see (<ref>)].§.§.§ Simplifications of dispersion relation: overview of our approachThe full hot-plasma dispersion relation (<ref>) is a transcendental equation, andthus, for general distribution functions, the growth rates of perturbations can only be determined numerically;this hinders the systematic investigation of stability over wide-ranging parameter regimes.However, adopting a few simplifications both to the form of the CE distributionfunctions (<ref>) and to the type of microinstabilities beingconsidered (see section <ref>) turns out to be advantageous when attempting a systematic study. It enables us to obtain simple analytical resultsfor microinstability growth rates and characteristic wavenumbers,as well as greatly reducing the numerical cost of evaluating these quantities. 
The former allows us to make straightforward comparisons between microinstabilities, while the latter facilitates the calculation of stability plots over a wide range of parameterswithout requiring intensive computational resources.First, we choose a Krook collision operator, with constant collision time τ_s for each species s <cit.>, when evaluating the isotropicfunctions A_e^T(ṽ_e), A_e^R(ṽ_e), A_e^u(ṽ_e),A_i(ṽ_i), C_e(ṽ_e), and C_i(ṽ_i) in (<ref>).As was explained in section <ref>, these functions are determinedby the collision operator.While the full Landau collision operator might seem to be the most appropriate choice, the conductivity tensor σ definedby (<ref>) cannot be writtenin terms of standard mathematical functions if this choice is made. Instead, the relevant integrals must be done numerically.If a simplified collision operator is assumed, σcan be evaluated analytically with only a moderate amount of algebra. Inappendix <ref>, we show that for the Krook collision operator, A_e^T(ṽ_e) = -(ṽ_e^2-5/2) , A_e^R(ṽ_e) = -1 , A_e^u(ṽ_e) = 0 , A_i(ṽ_e) = -(ṽ_i^2-5/2) , C_e(ṽ_e) = -1 , C_i(ṽ_i) = -1 , where it is assumed that ṽ_e, ṽ_i ≪η_e^-1/3, ϵ_i^-1/2 in order that theCE distribution functions retain positive signs (the vanishing of the CE electron-ion-drift term is discussed in appendix <ref>).Adopting the Krook collision operator has the additional advantage of allowing a simple prescription for collisional damping of microinstabilities to be introduced self-consistently into our stability calculation (see section <ref> for further discussion of this).Secondly, as discussed in section <ref>, the most important microinstabilities associated with the CEdistribution function are low-frequency, i.e., theysatisfy (<ref>). Therefore, instead of solving the full hot-plasma dispersionrelation, we can obtaina less complicated algebraic dispersion relation. We also always considerelectromagnetic rather than electrostatic perturbations. This is because it canbe shown for a CE plasma that purely electrostatic microinstabilities are limitedto the quasi-cold plasma modes (see appendix<ref>). Describing how the simplified dispersion relation forlow-frequency, electromagnetic perturbations is obtained from the full hot-plasmadispersion relation requires a rather lengthy exposition, and necessitates the introduction of a substantial amount of additional mathematicalnotation. In addition to this, certain shortcomings of this approach warrant an extended discussion.Readers who are interested these details will find them in the next section (section <ref>).Readers who are instead keen to see the results of the stability calculations as soon as possible are encouraged to jump tosections <ref> and <ref>. §.§ Linear stability calculation: detailed methodology§.§.§ Low-frequency condition in a magnetised plasma Before applying to the hot-plasma dispersion relation (<ref>) the simplifications discussed in section <ref>, we refine the low-frequency condition (<ref>) based on the specific form (<ref>) of the conductivity tensor for a magnetised plasma.It is clear that the equilibrium distribution function only affects the conductivity tensor via the functions Λ_s(ṽ_s,ṽ_s)and Ξ_s(ṽ_s,ṽ_s) [see (<ref>) and (<ref>)]. 
For a distribution function of the form (<ref>),it can be shown thatΛ_s(ṽ_s,ṽ_s)= - ṽ_sexp(-ṽ_s^2) [η_s A_s(ṽ_s) - 3 ϵ_s C_s(ṽ_s)ṽ_s],and Ξ_s(ṽ_s,ṽ_s) =- ṽ_sexp(-ṽ_s^2) [ 2 + 2 ṽ_sη_s A_s(ṽ_s) - ṽ_s/ṽ_sη_s A_s'(ṽ_s)+ 2 ϵ_s C_s(ṽ_s) (ṽ_s^2-ṽ_s^2/2+1/2)- 1/ṽ_s(ṽ_s^2-ṽ_s^2/2)ϵ_s C_s'(ṽ_s) + η_s/ω̃_s A_s(ṽ_s) - 3 ϵ_s/ω̃_s C_s(ṽ_s)ṽ_s],where the first term in the square brackets in (<ref>) originates from the Maxwellian part of the distribution function.A comparison of the size of the second, third, fourth, and fifth terms with the first indicates that for ṽ_s ∼ 1 – for which Ξ_s attains its largest characteristic values – the non-Maxwellian terms of the CE distribution function only provide a small, O(η_e, ϵ_e)contribution, and thus the conductivity is only altered slightly.However, considering the sixth and seventh terms in the square brackets in (<ref>)(which are only present thanks to the anisotropy of the CE distribution function), itis clear that the non-Maxwellian contribution to the conductivity tensor can besignificant for ṽ_s ∼ 1 provided the frequency (<ref>) satisfies one ofω̃_s∼η_s ≪ 1orω̃_s∼ϵ_s ≪ 1 .Thus, the relevant low-frequency condition in a magnetised plasma involves theparallel particle streaming rate k_ v_ths. There do exist certaincaveats to the claim that it is necessary for microinstabilities of CEplasma to satisfy (<ref>); we defer detailed statement and discussion of these caveats – as well as of other potential shortcomings of our approach – to sections<ref>, <ref> and <ref>. §.§.§ Simplification I: non-relativistic electromagnetic fluctuationsThe requirement that the mode be electromagnetic, combined with the fact we areinterested in non-relativistic fluctuations (ω≪ k c) enables our first simplification. We see from (<ref>) that for anyperturbation of interest, the dielectric tensor must satisfy 𝔈≳ k^2c^2/ω^2 ≫ 1 (where · is the Euclidean tensor norm); therefore, it simplifies to𝔈≈4 i/ωσ.This amounts to ignoring the displacement current in theAmpère-Maxwell law, leaving Ampère's original equation. For convenience of exposition, wedenote the contribution of each species s to (<ref>) by𝔈_s ≡4 i/ωσ_s . §.§.§ Simplification II: expansion of dielectric tensor in ω≪ k_ v_ths The next simplification involves an expansion of the matrices 𝔈_s inthe small parameters ω̃_s∼η_s ∼ϵ_s ≪ 1. The general principle of the expansion is as follows. We first divide the matrix 𝔈_s [see (<ref>), (<ref>), and (<ref>)] into the Maxwellian contribution M_s and the non-Maxwellian one P_s:𝔈_s = ω_ps^2/ω^2(M_s + P_s ) ,where the ω_ps^2/ω^2 factor is introduced for later convenience. Next, we notethat for a Maxwellian distribution, Λ_s(ṽ_s,ṽ_s) =0 [see (<ref>)], whereas Λ_s ∼ϵ_s, η_s for the non-Maxwellian component of the CE distribution function. Thus, from (<ref>)considered under the ordering k ρ_s ∼ 1, M_s = O(ω̃_s) as ω̃_s→ 0, while P_s = O(η_s,ϵ_s). The expansion of M_s and P_s in ω̃_sis, therefore,M_s(ω̃_s,k) ≡ω̃_s M_s^(0)(k)+ ω̃_s^2 M_s^(1)(k ) + ... ,P_s(ω̃_s,k)≡P_s^(0)(k) + ω̃_s P_s^(1)(k) + ... .where the matrices M_s^(0) andM_s^(1) are O(1) functions of k only, and P_s^(0)and P_s^(1) are O(η_s,ϵ_s). We then expand 𝔈_s as follows:𝔈_s = ω̃_s𝔈_s^(0) + ω̃_s^2 𝔈_s^(1) + ... 
,where 𝔈_s^(0) ≡ω_ps^2/ω^2 [ M_s^(0)(k )+ 1/ω̃_s P_s^(0)(k) ] , 𝔈_s^(1)≡ω_ps^2/ω^2 [ M_s^(1)(k ) + 1/ω̃_sP_s^(1)(k) ] .§.§.§ Additional symmetries of low-frequency dielectric tensor 𝔈_s^(0) The tensor 𝔈_s^(0) defined by (<ref>a) has some rather convenient additional symmetries, which lead to significant simplification of the dispersion relation. In appendix <ref> we show that in combination with the general symmetries(<ref>), which apply to 𝔈_s^(0) in addition to 𝔈, for any distribution function of particle species s with asmall anisotropy, (𝔈_s^(0))_xz = - k_/k_ (𝔈_s^(0))_xx ,(𝔈_s^(0))_yz = k_/k_ (𝔈_s^(0))_xy ,(𝔈_s^(0))_zz = k_^2/k_^2 (𝔈_s^(0))_xx .These symmetries have the consequence thatk̂𝔈_s^(0) =𝔈_s^(0)k̂ = 0 .As a result of this identity, it is convenient to calculate the components of 𝔈_s^(0) (and 𝔈_s) in the coordinate basis {e_1,e_2,e_3} defined by e_1 ≡ŷ×k̂, e_2 ≡ŷ, e_3 ≡k̂.Carrying out this calculation (see appendix <ref>), we find(𝔈_s^(0))_11 =k^2/k_^2 (𝔈_s^(0))_xx,(𝔈_s^(0))_12 = -(𝔈_s^(0))_21 = k/k_(𝔈_s^(0))_xy, (𝔈_s^(0))_22 =(𝔈_s^(0))_yy , (𝔈_s^(0))_13 =(𝔈_s^(0))_31=(𝔈_s^(0))_23 =(𝔈_s^(0))_32 =(𝔈_s^(0))_33 = 0 ,where (𝔈_s^(0))_ij is the (i,j)-th component of 𝔈_s^(0) in the basis {e_1,e_2,e_3}. We conclude that, if k ρ_s ∼ 1 and ω̃_s≪ 1, the components of 𝔈_ssatisfy(𝔈_s)_13∼ (𝔈_s)_23∼ (𝔈_s)_33∼ω̃_s (𝔈_s)_11∼ω̃_s(𝔈_s)_12∼ω̃_s(𝔈_s)_22.These components can be written in terms of the components of 𝔈_s in the {x̂,ŷ,ẑ} coordinate frame [see (<ref>)] via a coordinate transformation; the resulting expressions are rather bulky, so we do not reproduce them here – they are detailed in appendix <ref>.§.§.§ Consequences for dispersion relation On account of the additional symmetries described in the previous section, asimplified dispersion relation for low-frequency modes can be derived in place of the fullhot-plasma dispersion relation (<ref>). However, depending on the frequency and characteristic wavelengths of modes, this derivationhas a subtlety because of the large discrepancy between ion and electron masses.In, e.g., a two-species plasma with μ_e = m_e/m_i ≪ 1 (and ion chargeZ), we haveω̃_e/ω̃_i = √(μ_e τ) ,where τ = T_i/T_e. If τ∼ 1 [as would be expected in a collisional plasma on macroscopic evolution time scales τ_L greater than the ion-electron temperature equilibration time τ_ie^ eq – cf. (<ref>)], then ω̃_i∼μ_e^-1/2ω̃_e≫ω̃_e. Thus, in general, ω̃_i≁ω̃_e, and any dispersion relation will in principle depend on an additional (small) dimensionless parameter μ_e. This introduces various complications to the simplified dispersion relation's derivation, most significant of which being that, since ρ_e = Z μ_e^1/2τ^-1/2ρ_i ≪ρ_i (for Z ≳ 1), to assume the ordering k ρ_s ∼ 1for both ions and electrons is inconsistent (see section <ref>). To avoid the description of our approach being obscured by these complications, we consider a special case at first: we adopt the ordering k ρ_e ∼ 1 in a two-species plasma and assume that ω̃_i∼μ_e^-1/2ω̃_e≪ 1. In this case, ω̃_i𝔈_i^(0)∼μ_e^1/2 Z τ^-1/2ω̃_e𝔈_e^(0)≪ω̃_e𝔈_e^(0), and so the dielectric tensor 𝔈 is given by𝔈 = ω̃_e𝔈^(0) + ω̃_e^2 𝔈^(1) + ... ,where 𝔈^(0) ≡𝔈_e^(0) + ω̃_i/ω̃_e 𝔈_i^(0) ≈𝔈_e^(0), 𝔈_s^(1)≡𝔈_e^(1) + ω̃_i^2/ω̃_e^2 𝔈_i^(1) .Thus, to leading order in the ω̃_e≪ 1 expansion, only the electron species contributes to the dielectric tensor for electron-Larmor-scale modes. We revisit the derivation of simplified dispersion relations for CE microinstabilities more generally in section <ref>. 
To derive the simplified dispersion relation for electron-Larmor-scale modes, we start by considering the component of (<ref>) for the electric field that is parallel to the wavevector k̂, k̂𝔈δE= 0 ,and then substitute the expanded form (<ref>) of thedielectric tensor (with s = e). The orthogonality of 𝔈_e^(0) to k̂ – viz., (<ref>) – implies that (<ref>) becomesk̂𝔈^(1)δE=𝔈_33^(1)k̂δE + k̂𝔈^(1)δE_T =O(ω̃_e |δE|) ,where the transverse electric field isdefined by δE_T≡δE(I -k̂k̂). In appendix <ref>, we show that for ω̃_e, ω̃_i≪ 1, 𝔈_33^(1)≈ω_pe^2/ω^22 k_^2/k^2 (1+Zτ^-1) [1+ O(η_e, ϵ_e)].Since this is strictly positive, we can rewrite (<ref>) togive the electrostatic field in terms of the transverse electric field:k̂δE = -(𝔈_33^(1))^-1(k̂𝔈^(1)δE_T ) .We conclude that |k̂δE| ∼ |δE_T| for all low-frequency perturbations with k_∼ k; a corollary of this result is that there can be no low-frequencypurely electrostatic perturbations (see appendix<ref> for an alternative demonstration of this).We can now derive the dispersion relation from the other two components of (<ref>),[c^2 k^2/ω^2(k̂k̂-I)+(k̂k̂-I) 𝔈]δE = 0 ,by (again) substituting the expanded dielectric tensor (<ref>) into (<ref>): [ω̃_e𝔈^(0) + c^2 k^2/ω^2(k̂k̂-I) ]δE_T = - (k̂k̂-I) ( 𝔈 - ω̃_e𝔈^(0)) δE,where we have used the identity𝔈^(0) =(k̂k̂-I) 𝔈^(0)(k̂k̂-I) ,and ordered k^2 c^2/ω^2 ∼ω̃_e𝔈^(0). The ratio of the right-hand side of (<ref>) to the left-hand side is O(ω̃_e); we thus concludethat, to leading order in the ω̃_e≪ 1 expansion, [ω̃_e𝔈_e^(0) + c^2 k^2/ω^2(k̂k̂-I) ]δE_T = 0 ,and the dispersion relation is approximately [ω̃_e (𝔈_e^(0))_11-k^2 c^2/ω^2][ω̃_e (𝔈_e^(0))_22-k^2 c^2/ω^2]+[ω̃_e (𝔈_e^(0))_12]^2= 0 .Finally, writing the dielectric tensor in terms of M_e andP_e as defined by (<ref>a), we find [ω̃_e (M_e^(0))_11 + (P_e^(0))_11- k^2 d_e^2][ω̃_e (M_e^(0))_22 + (P_e^(0))_22 - k^2 d_e^2] + [ω̃_e (M_e^(0))_12 + (P_e^(0))_12]^2 = 0, where d_e = c/ω_pe is the electron inertial scale [see (<ref>b)]. This can be re-written as a quadratic equation in ω – and thus, expressions for the complex frequency of any low-frequency perturbation can be found for any given positive wavenumber.We note that the electron inertial scale is related to the electron Larmorradius by d_e = ρ_e β_e^-1/2; therefore, our expansion scheme is onlyconsistent with the low-frequency assumption (<ref>) under our assumed ordering,ω̃_e∼β_e^-1, when β_e ≫ 1. We note that one only needs to know 𝔈_e^(0) in orderto obtain the dispersion relation of low-frequency perturbations and the transverse component of theelectric field, whereas to determine the electrostatic component of the electricfield (and other quantities, such as the density perturbation – see appendix <ref>), one must go to higher order in the ω̃_e≪ 1expansion. 
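In practice, once the four independent components of M_e^(0) and P_e^(0) have been assembled at a given wavevector, the dispersion relation above is a quadratic in ω that can be solved directly. A minimal sketch of this final step is given below; the complex matrix elements passed in are made-up placeholders standing in for the actual expressions, and a root with positive imaginary part indicates instability.

import numpy as np

# Solve the quadratic (in omega) low-frequency dispersion relation quoted
# above, given the components of M_e^(0) and P_e^(0) at a fixed wavevector.
def low_freq_roots(M11, M12, M22, P11, P12, P22, k_de_sq):
    a = M11 * M22 + M12**2
    b = M11 * (P22 - k_de_sq) + M22 * (P11 - k_de_sq) + 2.0 * M12 * P12
    c = (P11 - k_de_sq) * (P22 - k_de_sq) + P12**2
    return np.roots([a, b, c])     # roots are omega / (k_par * v_the)

# Illustrative (made-up) complex matrix elements at one wavevector:
roots = low_freq_roots(M11=2.0j, M12=-0.5j, M22=1.5j,
                       P11=1e-3j, P12=2e-4, P22=-5e-4j, k_de_sq=1e-3)
print("normalised roots omega/(k_par v_the):", roots)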
Since we are primarily interested in microinstability growth rates and wavenumber scales, we will not explicitly calculate theelectrostatic fields associated with perturbations using(<ref>), and thus can avoid the rather laborious calculation of𝔈^(1) for CE distribution functions.We do, however, in appendix <ref> derive an explicit expression for𝔈^(1) for a plasma with Maxwellian distributionfunctions for all particle species;this in turn allows us to relate the electrostaticelectric field to the transverse field for such a plasma (see appendix <ref>).For the sake of completeness, we also observe that if the non-Maxwellian partof the CE distribution function is even with respect tov_, the transformation rules (<ref>) combined with (<ref>) implythat a perturbation with a negative parallel wavenumber k_ will obey exactly the same dispersion relation as a perturbation for a positive parallel wavenumber, viz., for k_ > 0, P_e^(0)(-k_, k_) = P_e^(0)(k_, k_) . If instead the non-Maxwellian partis odd, then, for k_ > 0, P_e^(0)(-k_, k_) = -P_e^(0)(k_, k_) .The dispersion relation for perturbations with k_ < 0 can, therefore, be recovered by considering perturbations with k_ > 0, but under the substitution P_e^(0)→ -P_e^(0). Thus, we cancharacterise all unstable perturbations under the assumption that k_ > 0. In all subsequent calculations, we require the Maxwellian part M_e^(0) of the dielectric tensor. The elements of the matrix M_s^(0) of species s are as follows:(M_s^(0))_11 = i k^2/k_^2F(k_ ρ̃_s,k_ ρ̃_s) , (M_s^(0))_12 = -i k/k_ G(k_ ρ̃_s,k_ ρ̃_s) , (M_s^(0))_21 = i k/k_ G(k_ ρ̃_s,k_ ρ̃_s) , (M_s^(0))_22 = i H(k_ ρ̃_s,k_ ρ̃_s) , where the functions F(x,y), G(x,y) and H(x,y) are F(x,y) ≡4 √()/y^2 exp(-y^2/2) ∑_m=1^∞ m^2 I_m(y^2/2)exp(-m^2/x^2) , G(x,y) ≡exp(-y^2/2) ∑_m = -∞^∞ m Z(m/x) [ I_m'(y^2/2)- I_m(y^2/2)] , H(x,y) ≡ F(x,y) + √() y^2 exp(-y^2/2) ∑_m = -∞^∞ [ I_m(y^2/2)- I_m'(y^2/2)] exp(-m^2/x^2) , I_m(α) is the m-th modified Bessel function, and Z(z) = 1/√()∫_C_Ldu exp(-u^2)/u-zis the plasma dispersion function (C_L is the Landau contour) <cit.>.The derivation ofthese results from the full dielectric tensor (which is calculated in appendix <ref>) for a plasma whose constituent particles all have Maxwellian distributions is presented inAppendices <ref>(expansion in the {x̂,ŷ,ẑ} basis) and <ref> (expansion in the {e_1,e_2,e_3} basis). §.§.§ Effect of multiple species on dispersion-relation derivations We now relax the assumptions adopted in section <ref> that the low-frequency modes of interest are on electron Larmor scales, and discuss how we derive simplified dispersion relations for (low-frequency) CE microinstabilities more generally. First, it is unnecessarily restrictive to assume that, for all CE microinstabilities, ω̃_s≪ 1 for all particle species. There are some instabilities for which ω̃_e∼η_e ∼ϵ_e ≪ 1 while ω̃_i≳ 1. Recalling the orderings ω̃_e∼β_e^-1 and k ρ_e ∼ 1 that were adopted for the electron-Larmor-scale instabilities described in section <ref>, it follows that ω̃_i≳ 1 whenever β_e ≲τ^-1/2μ_e^-1/2; in other words, electron-Larmor-scale CE microinstabilities in plasmas with β_e that is not too large will satisfy ω̃_i≳ 1. Therefore, we cannot naively apply our low-frequency approximation to both 𝔈_e and 𝔈_iin all cases of interest. 
We will remain cognisant of this in the calculationsthat follow – a concrete example of ω̃_i≳ 1 willbe considered in section <ref>.Secondly, because of the large separation between electron and ion Larmor scales, it isnecessary to consider whether the approximation M_s(ω̃_s,k) ≈ω̃_sM_s^(0)(k) remains valid for parallel or perpendicular wavenumbers much larger or smaller than the inverse Larmor radii ofeach species. We show in appendix <ref> that the leading-order term in the ω̃_s≪ 1 expansion remains larger than higher-order terms for all k_ρ_s ≳ 1 (as, indeed, was implicitly assumed in section <ref>).However, for k_ρ_s sufficiently small, the same statement does not hold for all components ofM_s. More specifically, it is shown in the same appendix that the dominant contribution to M_s(k) when k_ρ_s ≪ 1 instead comes from the quadratic term ω̃_s^2 M_s^(1)(k) (rather than any higher-orderterm). Thus, in general, our simplified dispersion relation for low-frequency modes in a two-species plasma has the form ofa quartic in ω, rather than a quadratic, if k_ρ_s ≪ 1 for at leastthe electron species. Physically, the reason why a quadratic dispersion relation is no longer a reasonable approximation is the existence of more than two low-frequency modes in a two-species Maxwellian plasma in certain wavenumber regimes.For example, for quasi-parallel modes with characteristic parallel wavenumbers satisfying k_ρ_i ≪ 1,there are four low-frequency modes (see section <ref>). Nevertheless, in other situations, the components of M_s(k)for which the M_s(ω̃_s,k) ≈ω̃_sM_s^(0)(k)approximation breaks down are not important, on account of their small sizecompared with terms in the dispersion relation associated with other Maxwelliancomponents. In this case, the original quadratic dispersion relation issufficient. An explicit wavenumber regime in which this is realised is k_ρ_e ∼ k_⊥ρ_e ≪1 but k ρ_i ≫ 1 – see sections <ref> and <ref>. Taking these multiple-species effects into account, the reasons behind the decision made in section <ref> toconsider the CES microinstabilities separately from the CET microinstabilitiescome into plain focus. First, the characteristic sizes of the CEelectron-temperature-gradient and ion-temperature-gradient terms are comparable (η_i ∼η_e), while the CE ion-shear term is much larger than the CE electron-shearterm: ϵ_i ∼μ_e^-1/2ϵ_e. This has the consequence that the natural orderings of ω̃_e and ω̃_i with respect to other parameters are different for CES and CETmicroinstabilities. Secondly, the fact that the velocity-space anisotropyassociated with the CE temperature-gradient terms differs from the CE shear terms– which excite microinstabilities with different characteristic wavevectors –means that the form of the dispersion relations of CET and CESmicroinstabilities are distinct. More specifically, the dispersion relation for CET microinstabilities at both electron and ion scalescan always be simplified to a quadratic equation in ω; in contrast, for CES microinstabilities,the dispersion relation cannot in general be reduced to anything simpler than a quartic. §.§.§ Modelling collisional effects on CE microinstabilities As proposed thus far, our method for characterising microinstabilities in a CE plasma does not include explicitly the effect of collisions on themicroinstabilities themselves. In principle, this can be worked out by introducing a collision operator into the linearisedMaxwell-Vlasov-Landau equation from which the hot-plasma dispersion relation (<ref>)is derived. 
Indeed, if a Krook collision operator is assumed (as was done in section <ref> when determiningthe precise form of the CE distribution functions of ions and electrons), theresulting modification of the hot-plasma dispersion relation is quitesimple: the conductivity tensor (<ref>) remains the same, but with the substitution ω̃_s→ω̂_s≡ω̃_s + i/k_λ_s,in the resonant denominators (see appendix <ref>).As for how this affects the simplifications to the dispersion relation outlined in section <ref>,the expansion parameter in the dielectric tensor's expansion (<ref>)is altered, becoming ω̂_s≪ 1 (as opposed to ω̃_s≪1); in other words, 𝔈_s^(1)/𝔈_s^(0)∼ω̂_s. The latter result leads to an seemingly counterintuitive conclusion: collisions typically fail tostabilise low-frequency instabilities in CE plasma if ωτ_s ≲1 (where τ_s is the collision time ofspecies s) but k_ v_thiτ_s = k_λ_s ≫ 1. Thisis because the simplified dispersion relation (<ref>)only involves leading-order terms in the expanded dielectric tensor. Theseterms are independent of ω̂_s, and thusthe growth rate of any microinstability that is adequately described by (<ref>)does not depend on the size of ωτ_s.For these microinstabilities, the effect of collisions only becomes relevant if k_λ_s ≲ 1 .This is inconsistent with the assumptions k λ_e ≫ 1, k λ_i ≫ 1made when setting up our calculation in section <ref>.Thus, the only regime where collisions can reasonably be included in our calculation is one where they are typically not important. An exception to this rule arises whentwo-species plasma effects mean that the first-order terms in the ω̂_s≪ 1expansion are needed for a correct characterisation of the growth rate of certain microinstabilities (see section <ref>); forthese instabilities, we include the effect of collisions using(<ref>). Although our calculation is not formally valid when (<ref>) holds, so we cannot show explicitly that growth ceases, this condition nonetheless represents a sensible criterionfor suppression of microinstabilities by collisional damping. Physically, itsignifies that collisions are strong enough to scatter a particlebefore it has streamed across a typical wavelength of fluctuation excited by a microinstability. This collisional scattering prevents particles from being resonant, which in turn would suppress the growth of many different microinstabilities.However, we acknowledge that there exist microinstabilities that do not involve resonant-particle populations(e.g., the firehose instability – see sections <ref> and <ref>),and thus it cannot be rigorously concluded from our work that all microinstabilities are suppressed when (<ref>)applies. Yet even without an actual proof of collisional stabilisation, there is anotherreason implying that (<ref>) is a reasonable threshold formicroinstabilities: the characteristic growth time of microinstabilities atwavenumbers satisfying (<ref>) is comparable the evolutiontime τ_L of macroscopic motions in the plasma. To illustrate this idea, weconsider the ordering (<ref>) relating the complex frequency ofmicroinstabilities to the small parameter ϵ_s for CES (CE shear-driven) microinstabilities, and use it toestimateωτ_L ∼ϵ_s k_ v_thsτ_L ≲ϵ_s L_V/λ_sv_ths/V ,where V ∼ L_V/τ_L is the characteristic ion bulk-flow velocity. Consideringorderings (<ref>), it follows that ϵ_e ∼μ_e^1/2ϵ_i, and soϵ_i v_thi/V∼ϵ_e v_the/V∼λ_e/L_V∼λ_i/L_V.Then (<ref>) becomesωτ_L ≲1 ,implying (as claimed) that the CES microinstability growth rate is smaller than the fluid turnover rate τ_L^-1. 
Spelled out clearly, this means that the underlying quasiequilibrium state changes before going unstable.Similar arguments can be applied to CET (CE temperature-gradient-driven) microinstabilities. Thus, (<ref>) represents a lower bound on thecharacteristic wavenumbers at which microinstabilities can operate. We shalltherefore assume throughout the rest of this paper that microinstabilitiesare suppressed (or rendered irrelevant) if they satisfy (<ref>). §.§.§ Caveats: microinstabilities in CE plasma where ω/k_ v_ths≁η_s, ϵ_s As mentioned in section <ref>, there are a number of important caveats to the claim thatthe ordering (<ref>) must be satisfied by microinstabilities in a CE plasma. The first of these is that ourcomparison of non-Maxwellian with the Maxwellian terms in expression (<ref>) for Ξ_s is inessence a pointwise comparison at characteristic values of ṽ_s for which Ξ_s attains its largest typical magnitude. However,Ξ_s affects the components of the conductivity tensor viathe velocity integral of its product with a complicated function of frequency and wavenumber [see (<ref>)].Thus, it does not necessarily follow that the ratio of the integrated responses of the Maxwellian and non-Maxwelliancontributions to the conductivity tensor is the same as the pointwise ratio of the respective contributions toΞ_s. In some circumstances, this can result in the Maxwellian partbeing smaller than anticipated, leading to faster microinstabilities.An example of this phenomenon was given in section <ref>: for k_ρ_s ≪ 1,the characteristic magnitude ofthe Maxwellian contribution to some components of the dielectric tensor isO(ω̃_s^2), as compared with the naive estimateO(ω̃_s). This leads to certain CES microinstabilities (for example, the CE ion-shear-driven firehose instability – section <ref>)satisfying a modified low-frequency conditionω̃_s∼ϵ_s^1/2≪ 1 .A similar phenomenon affects the limit k_→ 0 for fixed k_,in which case it can be shown that the Maxwellian contribution to σ_zz isO(k_/k_); this leads to a CES microinstability (the CE electron-shear-driven ordinary-mode instability – see section <ref>) satisfying a modified ordering ω/k_ v_ths∼ϵ_s ≪ 1 .The second caveat is that for some plasma modes, the particles predominantly responsible forcollisionless damping or growth are suprathermal, i.e., ṽ_s ≫ 1.Then the previous comparison of terms in (<ref>) is not applicable. Modesof this sort are the quasi-cold plasma modes discussed in section<ref> and appendix <ref>. They can be unstable, but always with a growth ratethat is exponentially small in η_s and ϵ_s. In spite of these two caveats, we proceed by considering the full hot-plasma dispersion relation (<ref>)in the low-frequency limit ω≪ k_ v_ths. This approach enables the treatment of all microinstabilitiessatisfying conditionω̃_s∼η_s^ι_η , ϵ_s^ι_ϵ≪ 1 ,where ι_η and ι_ϵ are any fractional powers. Similarly to the discussion in section<ref>, we claim that the microinstabilities satisfying the low-frequency condition (<ref>)are likely to be the most rapid of all possible microinstabilities in CE plasma.A formal justification of this claim relies on the argument – presented in appendix <ref>– that for all plasma modes satisfying ω≳ k_ v_thsand | ω| ≫ | ω|, the growth rate isexponentially small in η_s and ϵ_s. By definition, this class ofmodes includes the quasi-cold modes. In a plasma where ϵ_s, η_s ≪1, the growth rates of such microinstabilities will be exponentially small, andthus of little significance. 
The only situation that we are aware of in which the low-frequency condition (<ref>) is not appropriate is the aforementioned CES ordinary-mode instability; a separate treatment of it involving the full hot-plasma dispersion relation is provided in appendix <ref>.
§ CET (CHAPMAN-ENSKOG, TEMPERATURE-GRADIENT-DRIVEN) MICROINSTABILITIES
§.§ Form of CE distribution function
We consider first the non-Maxwellian terms of the CE distribution function arising from temperature gradients and electron-ion drifts. Neglecting bulk-flow gradients [viz., setting ϵ_s = 0 for both species – see (<ref>e,f)], the CE distribution functions (<ref>) for the electrons and ions become f_e0(ṽ_e,ṽ_e) = n_e0/(π^3/2 v_the^3) exp(-ṽ_e^2) {1-ṽ_e [η_e^T (ṽ_e^2-5/2)+η_e^R] } , f_i0(ṽ_i,ṽ_i) = n_i0/(π^3/2 v_thi^3) exp(-ṽ_i^2) {1-η_iṽ_i (ṽ_i^2-5/2) } , where we have written out explicitly the temperature-gradient [η_e^T, η_i – see (<ref>a,d)] and electron-friction [η_e^R – see (<ref>b)] terms under the assumption that the Maxwell-Vlasov-Landau system from which these CE distribution functions were derived is governed by a Krook collision operator. We remind the reader that the electron-ion-drift term [η_e^u – see (<ref>c)] disappears for this choice of collision operator. We also observe that the non-Maxwellian parts of the distribution functions (<ref>) have odd parity; thus, any unstable mode with k_ > 0 has a corresponding unstable mode with k_ < 0 and the signs of η_e^T, η_e^R, and η_i reversed (see section <ref>, last paragraph). The precise methodology that we employ to calculate the growth rates of CET microinstabilities is described in appendix <ref>; here, we focus on the results of those calculations. In section <ref>, we will present an overview of the CET stability landscape, while the microinstabilities referred to there will be treated analytically in section <ref>.
§.§ Stability
We determine the stability (or otherwise) of the CE distribution functions of the form (<ref>a) and (<ref>b) for different values of η_e^T, η_e^R, and η_i, the electron inertial scale d_e, the electron-temperature scale length L_T = |∇_logT_e|^-1, and for fixed electron and ion plasma betas (β_e and β_i, respectively). Stability calculations are carried out for particular combinations of values of η_e^T, η_e^R, η_i, d_e, L_T, β_e and β_i by solving for the maximum microinstability growth rate across all wavevectors (see appendix <ref> for an explanation of how this is done), and determining whether this growth rate is positive for microinstabilities whose wavelength is smaller than the Coulomb mean free paths (a condition necessary for our calculation to be valid). The results of one such stability calculation – for a temperature-equilibrated hydrogen plasma (η_e^T = η_i, β_i = β_e) – are presented in figure <ref>. In spite of the five-dimensional (η_e^T,η_e^R,d_e,L_T,β_e) parameter space that seemingly needs to be explored, we can, in fact, convey the most salient information concerning the stability of the CE distribution functions (<ref>) using plots over a two-dimensional (d_e/L_T,λ_e/L_T) parameter space at a fixed β_e [where we remind the reader that λ_e/L_T = |η_e^T| – see (<ref>a)]. This reduction in phase-space dimensionality is possible for two reasons. First, it transpires that the CE electron-friction term of the form given in (<ref>a) does not drive any microinstabilities, but merely modifies the real frequency of perturbations with respect to their Maxwellian frequencies (this is proven in appendix <ref>).
Thus,we can set η_e^R = 0 without qualitatively altering the stability properties of the CE distribution functions (<ref>). Secondly, none of the salient stability thresholds applying to CET microinstabilities depends on d_e and L_T separately:one is a function of d_e/L_T, while another is independent of bothquantities. Figure <ref>a shows the regions of instability and stability of the CEdistribution function (<ref>) over the (d_e/L_T,λ_e/L_T) parameter space.The unstable region is bracketed by two thresholds. For d_e/L_T below acritical value (d_e/L_T)_ c0, stability is independent of d_e/L_T, and only depends on the relative magnitude of λ_e/L_T and β_e:CET microinstabilities are quenched if λ_e β_e/L_T ≪ 1. Ford_e/L_T ≳ (d_e/L_T)_ c0, and λ_e β_e/L_T ≳ 1,stability is attained at fixed λ_e/L_T for d_e/L_T > (d_e/L_T)_ c, where (d_e/L_T)_ c increases monotonicallywith λ_e/L_T. If λ_e β_e/L_T ≳ 1 and d_e/L_T ≲(d_e/L_T)_ c, then the CEdistribution function (<ref>) is unstable. The fastest-growing CET microinstabilityis the whistler (heat-flux) instability: whistler waves driven unstable by the small anisotropy of the CE electron-temperature-gradient term (see section<ref>).That this instability with wavevector parallel to the magnetic field is indeed thedominant microinstability is most easily ascertained by comparing simple analyticexpressions for its peak growth rate and wavevector to the equivalent quantitiesrecorded when performing the general stability calculation (see figures <ref>b, <ref>c and <ref>d). The maximum microinstability growth rate matchesthe analytic result (<ref>) for the CET whistler instability in the limit λ_e β_e/L_T ≫ 1, while the parallel wavenumber (|k_| ρ_e)_ peak of the fastest-growing mode is extremely well described by (<ref>). In addition, figure <ref>d demonstrates that the parallel instability is indeed the fastest. The CET whistlerinstability has been considered previously by a number of authors (see references in section <ref>); we note that these prior studies of this instability suggest that, nonlinearly, oblique CET whistler modes may be the more important ones, even though linearly the parallel modes are the fastest growing (see section <ref>). The two thresholds demarcating the unstable region can then be associated with stabilisation conditions of the CET whistler instability,each with a simple physical interpretation. The first condition isthe β-stabilisation condition of the whistler instability. It is shown insection <ref> that when λ_e β_e/L_T ≪ 1,cyclotron damping on whistler modes is sufficiently strong that only quasi-parallel modes withparallel wavenumbers k_ρ_e ≲ (λ_e β_e/L_T)^1/3≪ 1 can bedestabilised by the anisotropy of the CE distribution function, and that the peak growthrate γ_ whistler,T of these unstable modes is exponentially small in λ_e β_e/L_T compared to the electron Larmor frequency [see (<ref>)]:γ_ whistler,T/Ω_e ∼λ_e exp[-(λ_e β_e/2 L_T)^-2/3]/L_T. This means thatif λ_e β_e/L_T is reduced below unity, the growth rate of the CET whistler instability decreasesdramatically, and thus the instability is unable to operate effectively on timescales shorter than those over which the CE plasma is evolving macroscopically. The second condition is collisional stabilisation of the CET whistlerinstability. Naively, it might be expected that two conditions must be satisfiedin order for the microinstability to operate: that its growth rate must satisfy γ_ whistler,Tτ_e≫1, and its characteristic wavenumber k λ_e≫ 1 [see (<ref>)]. 
Noting that for theCET whistler instability [cf. (<ref>)], γ_ whistler,Tτ_e/k λ_e = γ_ whistler,T/k v_the∼λ_e/L_T(λ_e β_e/L_T)^-1/5≪ 1 ,it follows that the former condition is more restrictive. Written as a conditionon d_e/L_T in terms of λ_e/L_T [and using γ_ whistler,T∼λ_e Ω_e/L_T – see (<ref>)], γ_ whistler,Tτ_e≫1 becomesd_e/L_T≪β_e^-5/2(λ_e β_e/L_T)^2 ,while the condition k λ_e≫ 1 on the instability wavenumber k_ρ_e ∼ (λ_e β_e/L_T)^1/5 [see (<ref>)] leads tod_e/L_T≪(d_e/L_T)_ c≡β_e^-3/2(λ_e β_e/L_T)^6/5.It is the latter that agrees well with the true result, as shown in figure<ref>a, implying that (d_e/L_T)_ c0 = β_e^-3/2. The (arguably surprising) result that the CET whistler instability can operate even ifγ_ whistler,Tτ_e≲ 1 is, in fact, a generic feature oflow-frequency (viz., ω≪ k v_the) plasma instabilities (see section <ref>). The physical instability mechanism underlying such modes can be sustained provided the time taken for thermalparticles (in this case, electrons) to cross the mode's wavelength is much shorterthan the collision time, irrespective of the mode's own frequency – in other words, τ_e k v_the = k λ_e ≫ 1.We point out that the collisional-stabilisation condition of the CET whistlerinstability can never be satisfied in a strongly magnetised plasma if λ_e β_e/L_T ≳ 1: this isbecause its wavenumber k satisfies k^-1≲ρ_e ≪λ_e. Whilst it is the fastest-growing one (assuming η_e^T ∼η_i), the CET whistler instability is not the onlyCET microinstability of interest. There are two other instabilities driven by the CET ion-temperature gradientterm, neither of which has previously been identified, to our knowledge: the slow (hydromagnetic) waveinstability (see section <ref>), and the long-wavelength kinetic-Alfvén wave instability (see section <ref>).The former, whose characteristic wavenumber scale satisfies k ρ_i ∼ 1, has a larger characteristic growth rate γ_ SW∼λ_i Ω_i/L_T_i(where L_T_i = |∇_logT_i|^-1 is thescale length of the ion temperature gradient).Similarly to the CET whistler instability, the CET slow-wave instability has β-stabilisation and collisional-stabilisation conditions λ_i β_i/L_T_i≪1 and λ_i ≲ρ_i, respectively. Thus, unless λ_i β_i/L_T_i > λ_e β_e/L_T_e (a condition equivalent to τ^3 L_T_e/L_T_i > Z^3, where τ = T_i/T_e), the CETslow-wave instability only operates when the CET whistler wave instability does, but on larger, ion rather than electron, scales.Nevertheless, the CET slow-wave instability is worth noting because, on account of being an ion instability, it should continue to operate even if the electron-scale CET whistler instability modifiesthe underlying electron distribution function. The slow-wave instability will then be responsible for modifying the ion distribution function. We are not aware of any work on the CET slow-wave instability and, thus, on its effect on ion heat conduction.Readers who areinterested in knowing more about the properties and growth rates of CET microinstabilities areencouraged to continue section <ref>; those who are focused on the widerquestion of the kinetic stability of the CE distribution function should jumpahead to section <ref>.§.§ CET microinstability classification§.§.§ Parallel whistler (heat-flux) instability The CET whistler instability, which has been studied previously by a number of authors <cit.>,is driven by parallel electron heat fluxes. 
These heat fluxes introduce the asymmetry to the CE electrondistribution function (i.e., the electron-temperature-gradient term), which,if it is sufficiently large, can overcome electron cyclotron damping of (electromagnetic) whistler wavesand render them unstable. The instability is mediated by gyroresonantwave-particle interactions that allow whistlers to drain free energy from electrons with parallelvelocities v_ = ±Ω_e/k_. For a positive, parallel electron heatflux, which is driven by an anti-parallel temperature gradient (∇_ T_e <0, so η_e^T < 0), it is only whistlers with a positive parallel wavenumber that areunstable. Whistler waveswith both parallel and oblique wavevectors with respect to the magnetic field can be destabilised, although the parallel modes are the fastest-growing ones.The CET whistler instability is most simply characterised analytically for parallel wavenumbers (i.e., k = k_). Then, it can be shown [see appendix <ref>,and also <cit.> and <cit.>] that the real frequency ϖ and growth rate γ at arbitrary k_ > 0are given byϖ/Ω_e =η_e^T (k_ ρ_e/4 - 1/2 k_ ρ_e)- (η_e^T/2 + k_^3 ρ_e^3/β_e) Z(1/k_ ρ_e) /[Z(1/k_ ρ_e)]^2 + exp(-2/k_^2 ρ_e^2),γ/Ω_e =-√()(η_e^T/2 + k_^3 ρ_e^3/β_e) /[Z(1/k_ ρ_e)]^2 exp(1/k_^2 ρ_e^2)+ exp(-1/k_^2 ρ_e^2). For η_e^T > 0, γ < 0, but if η_e^T < 0, then γ is non-negative for k_ρ_e ≤(η_e^T β_e/2)^1/3.The dispersion curves ϖ = ϖ(k_) and γ = γ(k_) ofunstable whistler waves with parallel wavevectors for three different values of |η_e^T|β_e are plotted in figure <ref> using the above formulae.For |η_e^T| β_e ≳ 1, the range of unstable parallel wavenumbers, Δ k_, is comparable to the characteristic wavenumber of the instability: Δ k_∼ k_∼ρ_e^-1. The expressions (<ref>a) and (<ref>b) can be simplified in two subsidiary limits, which in turn allows for the derivation of analytic expressions for the maximumgrowth rate of the instability and the (parallel) wavenumber at which that growth rate is realised. First, adopting the ordering k_ρ_e ∼(η_e^Tβ_e)^1/3≪ 1 under which the destabilising η_e^T terms and the stabilising electron FLR terms are the same order, we find ϖ≈k_^2 ρ_e^2/β_e Ω_e , γ≈ -√()/k_^2 ρ_e^2 ( η_e^T/2 + k_^3 ρ_e^3/β_e)exp(-1/k_^2 ρ_e^2) Ω_e . The frequency corresponds to that of a whistler wave in the k_ρ_e ≪ 1 limit <cit.>. The fastest growth, which occurs at the wavenumberk_ρ_e ≈(|η_e^T|β_e/2)^1/3 -|η_e^T|β_e/4,is exponentially slow in |η_e^T|β_e ≪ 1: γ_ max≈3 √()/4 |η_e^T| exp[-2^2/3/(|η_e^T|β_e)^2/3-1]Ω_e . Next, considering the opposite limit k_ρ_e ≫ 1, we obtainϖ≈[η_e^T β_e(1/4 k_ ρ_e--2/2 k_ ρ_e)+ 2/ k_^2 ρ_e^2 ] Ω_e/β_e, γ≈ -1/√() [η_e^T β_e(1/2-4-/2 k_^2 ρ_e^2) + k_^3 ρ_e^3 ] Ω_e/β_e . We then find that the maximum growth rate of the parallel mode is given byγ_ max ≈ |η_e^T|/√(){1- [1/√()(4/-1)]^3/5[(3/2)^2/5-(2/3)^3/5](|η_e^T|β_e )^-2/5}Ω_e ≈0.56 |η_e^T| [1-0.13 (|η_e^T|β_e )^-2/5] Ω_e ,at the parallel wavenumber k_ρ_e = [2/3√()(4/-1)]^1/5(|η_e^T|β_e )^1/5≈ 0. 63(|η_e^T|β_e )^1/5.In addition, we see that the real frequency ofmodes with k_ρ_e ≲(|η_e^T|β_e/2)^1/3 is larger than the growth rate of the mode: ϖ∼ k_ρ_e γ≫γ. Thus, these modes oscillate more rapidly than they grow. The approximate expressions for (<ref>) and (<ref>) are valid in the limits |η_e^T| β_e ≪ 1 and |η_e^T|β_e ≫ 1,respectively, and are plotted in figure <ref> alongside the exact results (<ref>). 
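For readers who wish to reproduce these dispersion curves, the short Python sketch below collects the standard plasma dispersion function (via scipy's Faddeeva routine), which enters (<ref>), together with the unstable-wavenumber cutoff and asymptotic peak-growth estimates quoted above. The function names, the use of scipy, and the crossover at |η_e^T| β_e = 1 between the two asymptotic branches are illustrative choices rather than part of the calculation itself.

```python
import numpy as np
from scipy.special import wofz

def Z_plasma(zeta):
    # Standard plasma dispersion function Z(zeta) = i*sqrt(pi)*w(zeta),
    # with w the Faddeeva function; provided for evaluating the exact
    # dispersion-relation expressions above.
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def cet_whistler_parallel_estimates(eta_eT, beta_e):
    """Quoted estimates for the parallel CET whistler instability
    (growth requires eta_eT < 0). Returns, in units of 1/rho_e and Omega_e:
      k_cut  -- upper end of the unstable band, (|eta_eT| beta_e / 2)^(1/3)
      k_peak -- parallel wavenumber of fastest growth
      g_peak -- corresponding asymptotic growth-rate estimate
    The switch at |eta_eT|*beta_e = 1 between the two asymptotic formulae
    is an illustrative choice; both expressions are only asymptotic."""
    drive = abs(eta_eT) * beta_e
    k_cut = (0.5 * drive) ** (1.0 / 3.0)
    if drive < 1.0:
        k_peak = (0.5 * drive) ** (1.0 / 3.0) - 0.25 * drive
        g_peak = 0.75 * np.sqrt(np.pi) * abs(eta_eT) * np.exp(
            -2.0 ** (2.0 / 3.0) / drive ** (2.0 / 3.0) - 1.0)
    else:
        k_peak = 0.63 * drive ** 0.2
        g_peak = 0.56 * abs(eta_eT) * (1.0 - 0.13 * drive ** (-0.4))
    return k_cut, k_peak, g_peak

# Example: eta_eT = -0.01, beta_e = 1e3 gives |eta_eT| beta_e = 10.
print(cet_whistler_parallel_estimates(-0.01, 1.0e3))
```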
Of particular note is the accuracy of the approximate expression (<ref>b) for the growth rate when k_ρ_e ≳ 0.6; this suggests that (<ref>) is a reasonable estimate of the peak growth rate for |η_e^T| β_e ≳ 1.
§.§.§ Oblique whistler (heat-flux) instability
Analytical expressions for the frequency and growth rate of unstable modes with an oblique wavevector at an angle to the magnetic field are more complicated than the analogous expressions for parallel modes. In appendix <ref>, we show that there are two low-frequency oblique modes, whose complex frequencies ω are given by ω = Ω_e/β_e k_ρ_e -B_T±√(B_T^2 + 4A_TC_T)/2 A_T, where the coefficients A_T = A_T(k_ρ_e,k_⊥ρ_e,η_e^T β_e), B_T = B_T(k_ρ_e,k_⊥ρ_e,η_e^T β_e), and C_T = C_T(k_ρ_e,k_⊥ρ_e,η_e^T β_e) are composed of the sums and products of the special functions defined in (<ref>), and also other special functions defined in appendix <ref>. For a given wavenumber, we can use (<ref>) to calculate the growth rates of any unstable oblique modes – and, in particular, demonstrate that positive growth rates are present for certain values of η_e^T. When unstable modes do exist, (<ref>) suggests that they will have the typical size γ∼Ω_e/β_e ∼ |η_e^T| Ω_e when k ρ_e ∼ 1 and |η_e^T| β_e ∼ 1. For η_e^T > 0, we find that both modes (<ref>) are damped; for η_e^T < 0, one mode is damped for all wavenumbers, but the other is not. Figure <ref> shows the maximum (positive) growth rate γ (normalised to Ω_e/β_e) of this mode at a fixed value of η_e^T, for a range of β_e. The growth rate is calculated by evaluating the imaginary part of (<ref>) at a given wavenumber. For -η_e^T < 1/β_e, the mode of interest is damped at most wavenumbers, except in a small region of wavenumbers quasi-parallel to the magnetic field: in this region, there is a very small growth rate γ≪Ω_e/β_e (figure <ref>a). This finding is consistent with the exponentially small growth rates found for the parallel whistler modes [see (<ref>)]. When -η_e^T ∼ 1/β_e, there is a marked change in behaviour: a larger region of unstable modes appears, with γ∼Ω_e/β_e, at wavenumbers k ρ_e ∼ 1 (figures <ref>b and c). The growth rate is largest for parallel modes – but there also exist oblique modes with k_⊥≲ k_ whose growth rate is close to the peak growth rate. For example, for η_e^T β_e = -4, we find that the growth rate of the fastest-growing mode with a wavevector angle θ = 10^∘ is only ∼2% smaller than that of the fastest-growing parallel mode; for a wavevector angle θ = 20^∘, the reduction is by ∼6%; and for θ = 30^∘, the reduction is by ∼20%. Finally, if -η_e^T ≫ 1/β_e, there exists an extended region of unstable modes, with 1 ≲ k ρ_e ≲|η_e^T β_e|^1/3, and γ∼ |η_e^T| Ω_e (figure <ref>d). Again, the peak growth rate is at k_⊥ = 0, but oblique modes also have a significant growth rate (for unstable modes with θ = 30^∘, the reduction in the largest growth rate compared to the fastest-growing parallel mode is only by ∼4%). Most of the unstable modes have a non-zero real frequency: for -η_e^T ∼ 1/β_e, ω∼γ (figure <ref>e), while for -η_e^T ≫ 1/β_e, ω≫γ for k ρ_e ≫ 1 (figure <ref>f). Note, however, that in the latter case there exists a band of wavenumbers at which there is no real frequency. In summary, we have (re-)established that the fastest-growing modes of the CET whistler instability are parallel to the magnetic field; however, we have shown semi-analytically (a novel result of this work) that the growth of oblique perturbations can be almost as large.
This result is of some significance, because it has been argued that oblique whistler modes are necessary for the instability to scatter heat-carrying electrons efficiently <cit.>. It was proposed previously that such modes could arise from modifications to the CET electron-temperature-gradient terms induced by the unstable parallel whistler modes rendering the oblique modes the fastest-growing ones; our calculations suggest that it would only require a small change to the CET whistler growth rates for this to be realised. As a further aside, we observe that in a plasma with sufficiently high plasma β_e, these oblique modes are in fact closer in nature to kinetic Alfvén waves (KAWs) than to whistler waves. Whistler waves are characterised as having effectively immobile ions (ω≫ k_ v_thi), while KAWs have warm ions (ω≪ k_ v_thi); as a consequence, whistler waves have a negligible density perturbation (δ n_e ≪ Z e n_eφ/T_i, where φ is the electrostatic potential associated with the wave), while KAWs do not: δ n_e ≈ Z e n_eφ/T_i <cit.>. In a β_e ∼ 1 plasma for k_≳ k_⊥, the real frequency of whistler modes satisfies ω/k_ v_thi∼ k_ρ_i/β_e ∼ k_ρ_i; thus, we conclude from our above considerations that the two waves must operate in different regions of wavenumber space, viz., k_ρ_i ≪ 1, k_⊥ρ_i > 1 for KAWs, and k_ρ_i ≫ 1 for whistlers. However, for β_e ≳μ_e^-1/2 (where μ_e = m_e/m_i) and k_∼ k_⊥≫ρ_i^-1, the frequency of whistler waves is too low for ω≫ k_ v_thi to be satisfied whilst also maintaining k_ρ_e ≪ 1. Instead, the ions participate in the wave mechanism, and δ n_e ≈ -Z e n_eφ/T_i (see appendix <ref>). For further discussion of the physics of the whistler instability (as well as its nonlinear evolution), see <cit.> and the other references given at the beginning of section <ref>.
§.§.§ Slow-(hydromagnetic)-wave instability
Although parallel ion heat fluxes in a classical, collisional plasma are typically much weaker than electron heat fluxes, they can still act as a free-energy source for instabilities, by introducing anisotropy into the ion distribution function (<ref>b) (i.e., the CE ion-temperature-gradient term). Furthermore, anisotropy in the ion distribution function can enable the instability of plasma modes that are not destabilised by the CE electron-temperature-gradient term. This exact situation is realised in the CET slow-hydromagnetic-wave instability, in which a sufficiently large CET ion-temperature-gradient term counteracts the effect of ion cyclotron damping on slow hydromagnetic waves. The slow hydromagnetic wave (or slow wave) <cit.> is the left-hand-polarised quasi-parallel electromagnetic mode in high-β plasma; it exists for parallel wavenumbers k_ that satisfy β_i^-1/2≪ k_ρ_i ≲ 1, and has a characteristic frequency ω≈ 2 Ω_i/β_i. To the authors' knowledge, no instability of the slow wave due to the ion heat flux has previously been reported. The instability's mechanism is analogous to that of the CET whistler instability: the slow waves drain energy from ions with parallel velocities v_ = ±Ω_i/k_ via gyroresonant wave-particle interactions. For an anti-parallel ion temperature gradient (i.e., ∇_ T_i < 0, so η_i < 0), slow waves propagating down the temperature gradient are destabilised, while those propagating up the temperature gradient are not. As before, the slow-wave instability is most easily characterised in the subsidiary limit k_⊥ρ_i → 0 (k = k_).
Under the ordering k_ρ_i ∼ 1, the real frequency ϖ and growth rate γ are given by (see appendix <ref>)ϖ/Ω_i =η_i (k_ ρ_i/4-1/2 k_ ρ_i)- k_^2 ρ_i^2 [Z(1/k_ ρ_i) + k_ ρ_i] (η_i/4 + k_ ρ_i/β_i) /[Z(1/k_ ρ_i) + k_ ρ_i]^2 + exp(-2/k_^2 ρ_i^2), γ/Ω_i = - √() k_^2 ρ_i^2 (η_i/4 + k_ ρ_i/β_i) /[Z(1/k_ ρ_i) + k_ ρ_i]^2 exp(1/k_^2 ρ_i^2)+ exp(-1/k_^2 ρ_i^2).The CET electron-temperature-gradient term does not appear because itscontributions to the frequency and growth rate are much smaller than theequivalent contributions of the CET ion-temperature-gradient term at k_ρ_i ∼ 1.Plots of ϖ = ϖ(k_) and γ = γ(k_) for differentvalues of η_i β_i < 0 are shown in figure <ref>.As with the CET whistler instability, we can derive simple expressions for thepeak growth rate (and the wavenumber associated with that growth rate) insubsidiary limits. First, ordering k_ρ_i ∼η_i β_i/4 ≪ 1 so that the destabilising η_i terms and the stabilising ion FLR terms are the same order,we find that the real frequency (<ref>a) becomes ϖ≈2 Ω_i/β_i(1-1/4 k_ρ_i η_i β_i -3/2 k_^2 ρ_i^2 ) ,which is precisely that of the slow hydromagnetic wave, with first-order FLR corrections included <cit.>.For η_i < 0 and k_ρ_i < |η_i| β_i/4, the growth rate (<ref>b)is positive:γ≈ - 4 √()/k_^4 ρ_i^4(η_i/4+ k_ρ_i/β_i)exp(-1/k_^2 ρ_i^2)Ω_i .The maximum growth rate (which is exponentially small in η_i β_i/4 ≪ 1) is γ_ max≈8 √()/|η_i| β_i^2exp(-16/|η_i|^2β_i^2-1)Ω_i ,achieved at the parallel wavenumberk_ρ_i ≈|η_i|β_i/4 -|η_i|^3β_i^3/128. In the opposite limit, k_ρ_i ∼(|η_i| β_i/4)^1/3≫ 1, we obtain ϖ≈-(η_i β_i 1-/4/k_ ρ_i -k_^2 ρ_i^2 ) Ω_i/β_i, γ≈ -√() [η_i/4 β_i(1--3/k_^2 ρ_i^2) + k_ ρ_i ] Ω_i/β_i . The maximum positive growth rate is γ_ max≈√()/4{1- 3[4 (-3)]^1/3(|η_i|β_i )^-2/3} |η_i| Ω_i≈ 0.44 [1-2.48 (|η_i|β_i )^-2/3]|η_i| Ω_i ,realised for η_i < 0 at the parallel wavenumberk_ρ_i ≈( -3/2) ^1/3(|η_i|β_i)^1/3≈ 0.41 (|η_i|β_i)^1/3.We note that, in contrast to the CET whistler instability, the real frequency of thefastest-growing unstable mode is smaller than its growth rate: ω_ peak/γ_ max≈ 0.36(|η_i|β_i)^-1/3. The approximate expressions (<ref>), (<ref>), (<ref>a), and (<ref>b) for the frequency and growth rate in thelimits k_ρ_i ≪ 1 and k_ρ_i ≫ 1, are plotted in figure <ref>, along with the exact results (<ref>).As with the CET whistler instability, a general expression for the complex frequency ofoblique ion CET instabilities can be derived in the form (see appendix <ref>):ω = Ω_i/β_i k_ |ρ_i| -B̃_T±√(B̃_T^2 + 4Ã_TC̃_T)/2 Ã_T,where Ã_T = Ã_T(k_ρ_i,k_⊥ρ_i,η_i β_i), B̃_T = B̃_T(k_ρ_i,k_⊥ρ_i,η_i β_i), and C̃_T = C̃_T(k_ρ_i,k_⊥ρ_i,η_i β_i)are again sums and products of various special mathematical functions defined in (<ref>).Investigating such modes by evaluating (<ref>) numerically for a range of wavenumbers (see figure <ref>), we find that, for η_i < 0, there is one mode that is always damped and onethat can be unstable. For -η_i ≲ 4/β_i, the unstable modes arerestricted to quasi-parallel modes (see figure <ref>a); for -η_i ≳ 4/β_i, there is a much broader spectrum ofunstable modes (including oblique ones). The positive growth rates of the unstable mode are shown in figure<ref>b for η_i β_i = -8. The typical growth rate γ satisfies γ∼Ω_i/β_i ∼η_i Ω_i, as anticipated from (<ref>).We also observe in figure <ref>b the existence of an unstable mode atquasi-perpendicular wavenumbers, which is discussed in section <ref>. 
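To illustrate the relative importance of the two instabilities discussed so far, the sketch below combines the leading-order peak growth rates quoted here and in the preceding subsection (approximately 0.56 |η_e^T| Ω_e for the parallel whistler mode and 0.44 |η_i| Ω_i for the slow wave, both in the |η| β≫ 1 limit) for a temperature-equilibrated hydrogen plasma, using Ω_i = μ_e Ω_e. The subleading corrections are dropped, and the particular parameter values are chosen purely for illustration.

```python
MU_E = 1.0 / 1836.0   # electron-to-ion mass ratio (hydrogen)

def cet_peak_growth_rates(eta_eT, eta_i, Omega_e=1.0):
    """Leading-order peak growth rates of the two CET instabilities in the
    |eta|*beta >> 1 regime (corrections of order (|eta| beta)^(-2/5) and
    (|eta| beta)^(-2/3) are neglected):
      whistler (electron scale): ~ 0.56 |eta_eT| Omega_e
      slow wave (ion scale):     ~ 0.44 |eta_i| Omega_i, with Omega_i = mu_e Omega_e."""
    gamma_whistler = 0.56 * abs(eta_eT) * Omega_e
    gamma_slow = 0.44 * abs(eta_i) * MU_E * Omega_e
    return gamma_whistler, gamma_slow

# Temperature-equilibrated hydrogen plasma, eta_eT ~ eta_i: the ion-scale
# slow-wave instability is slower by a factor ~ 0.8 * mu_e, consistent with
# the summary that follows.
gamma_w, gamma_s = cet_peak_growth_rates(eta_eT=-0.01, eta_i=-0.01)
print(gamma_w, gamma_s, gamma_s / gamma_w)
```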
In summary, an ion temperature gradient can destabilise ion-Larmor-scale, slow hydromagneticwaves via a similar mechanism to an electron temperature gradientdestabilising electron-Larmor-scale whistler waves. If β_i ≫ L_T_i/λ_i, thecharacteristic growth rate of these modes is γ∼λ_iΩ_i/L_T_i.Unstable modes whose wavevector is parallel to B_0 grow mostrapidly, although the growth rate of (moderately) oblique modes is only somewhat smaller. While the CET whistler instability is faster growing than the CET slow-waveinstability, both modes grow much more quickly than characteristic hydrodynamictime scales in a strongly magnetised plasma. In any conceivablesaturation mechanism, the electron mode will adjust the electron heat flux, andthe ion mode the ion heat flux. Thus, it seems likely thatunderstanding the evolution (and ultimately, the saturation) of bothinstabilities would be necessary to model correctly the heat transport in a classical,collisional plasma that falls foul of the β-stabilisation condition. §.§.§ Long-wavelength kinetic-Alfvén-wave instabilityThe instability observed in figure <ref>b at wavevectors satisfying k_ρ_i ≪ k_⊥ρ_i ∼ 1is different in nature to the slow-hydromagnetic-wave instability: it is an ion-temperature-gradient-driven instability of long-wavelength KAWs.Like the CET slow-wave instability, it operates on account of resonant wave-particleinteractions that allow free energy to be drained from the anisotropy of the iondistribution function, which itself arises from the ion temperature gradient. However,the gyroresonances v_≈±Ω_i/k_ operateinefficiently for modes with k_ρ_i ≪1 in a CE plasma, because there are comparatively few particles with v_≫ v_thi;the dominant resonance is instead the Landau resonance v_ = ω/k_. More specifically,KAWs with k_⊥ρ_i ≳ 1, which are usually subject to strong Landau and Barnes damping (that is, thedamping rate of the waves is comparable to their real frequency), can be destabilised if the (ion) plasma beta is sufficiently large: β_i ≳ L_T_i/λ_i.In figure <ref>b, the peak growth rate of the CET KAW instabilityis smaller than that of the CET slow-hydromagnetic-wave instability by anorder of magnitude; as will be shown below, this is, in fact, a generic feature of the instability. Similarly to quasi-parallel unstable modes, quasi-perpendicular ones such as unstable KAWs can becharacterised analytically, allowing for a simpleidentification of unstable modes and their peak growth rates.It can be shown (see appendix <ref>) that, in the limit k_ρ_i ≪1, k_⊥ρ_i ∼ 1, the complex frequency of the low-frequency (ω≪ k_ v_thi) modes in a plasma whoseion distribution function is (<ref>b) isω/k_ v_thi=η_i 𝒢_i/2 (1-ℱ_i)+ k_⊥ρ_i/β_i(1-ℱ_i)^2[ -i√()/2 k_⊥ρ_i (ℱ_i + √(μ_e Z^2/τ))±√(1-/4k_⊥^2 ρ_i^2/β_i(ℱ_i + √(μ_e Z^2/τ))^2 - i√()η_i β_i/42 𝒢_i - ℱ_i(1 - ℱ_i)/1-ℱ_i)] ,where ℱ_i ≡ℱ(k_⊥ρ_i), 𝒢_i ≡𝒢(k_⊥ρ_i), and ℱ(α)≡ exp(-α^2/2)[I_0(α^2/2) - I_1(α^2/2)],𝒢(α)≡2 α^2 ℱ(α)-exp(-α^2/2) I_1(α^2/2).In a Maxwellian plasma (i.e., when η_i = 0), (<ref>) becomesω/k_ v_thi=1/(1-ℱ_i)^2[ -i√()/2k_⊥^2 ρ_i^2/β_i(ℱ_i + √(μ_e Z^2/τ)) ±√(k_⊥^2 ρ_i^2/β_i^2-/4k_⊥^4 ρ_i^4/β_i^2(ℱ_i + √(μ_e Z^2/τ))^2)] .In the subsidiary limit k_⊥ρ_i ≫ 1, we recoverω≈± k_ v_thi k_⊥ρ_i/β_i, which is thewell-known dispersion relation of a KAW <cit.>. 
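The special functions ℱ and 𝒢 defined in (<ref>) are straightforward to evaluate numerically. The following sketch, which uses scipy's exponentially scaled modified Bessel functions (an implementation choice, not something prescribed by the analysis), can be used to check the limiting behaviour invoked in this discussion: ℱ→ 1 and 𝒢→ 0 as k_⊥ρ_i → 0, and the decay of ℱ at k_⊥ρ_i ≫ 1 that reduces the Maxwellian limit (<ref>) to the KAW dispersion relation.

```python
from scipy.special import ive   # ive(n, x) = iv(n, x) * exp(-x)

def calF(alpha):
    # F(alpha) = exp(-alpha^2/2) [I_0(alpha^2/2) - I_1(alpha^2/2)]
    x = 0.5 * alpha**2
    return ive(0, x) - ive(1, x)

def calG(alpha):
    # G(alpha) = 2 alpha^2 F(alpha) - exp(-alpha^2/2) I_1(alpha^2/2)
    x = 0.5 * alpha**2
    return 2.0 * alpha**2 * calF(alpha) - ive(1, x)

for alpha in (1e-3, 1.0, 3.0, 10.0):
    print(alpha, calF(alpha), calG(alpha))
# calF -> 1 and calG -> 0 as alpha -> 0; calF decays for alpha >> 1, so
# (neglecting the small electron contribution proportional to sqrt(mu_e Z^2/tau))
# the square root in the Maxwellian limit tends to k_perp rho_i / beta_i,
# recovering omega ~ +/- k_par v_thi k_perp rho_i / beta_i, i.e. the KAW.
```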
For η_i ≠ 0, we find that, for modes with a positive propagation direction withrespect to the background magnetic field (viz., k_ > 0), thereis an instability providedη_i ≲ -3.14 (1 + 6.5 √(μ_e Z^2/τ)) β_i^-1,with the perpendicular wavenumber k_⊥ρ_i of the fastest-growing unstable modeat fixed k_ just beyond this threshold being approximately given byk_⊥ρ_i ≈ 1.77 (1 - 3.4 √(μ_e Z^2/τ)).Figure <ref> shows the real frequency and growth rate of such modes at threedifferent (negative) values of η_i β_i. As η_i isdecreased beyond the threshold, modes over an increasingly large range of perpendicularwavenumbers are destabilised at both super- and sub-ion Larmor scales. Indeed, inthe limit |η_i| β_i ≫ 1, the peak growth rate γ_ max (for a fixed k_) occurs at a perpendicular wavenumber k_⊥ρ_i < 1, which decreases as |η_i| β_i increases. Such modes are, in fact, no longer well described physically as KAWs; their analogues in a Maxwellianplasma are Barnes-damped, non-propagating slow modes. Although it is possible to characterise analytically the peak growth rate of theunstable modes (and the perpendicular wavenumber at which such growth is attained) in the limit k_ρ_i ≪ 1by analysing (<ref>), such estimates do not capture accuratelythe behaviour of the fastest-growing modes across all wavevectors, because thesefastest-growing modes occur at finite values of k_ρ_i; at such values, thedependence of the frequency and growth rate on k_⊥ρ_i departs somewhat from (<ref>) (see figure <ref>). Instead, wefind numerically that, for η_i β_i ≲ -6,γ_ max≈ 0.025 |η_i| Ω_i at (k_ρ_i)_ peak≈ 0.35 ,independent of the specific value of either η_i or β_i.For values of k_ρ_i that are larger than (k_ρ_i)_ peak, the instability isquenched. It is clear that, in comparison to the slow-hydromagnetic wave instability, thegrowth rate of the fastest-growing perpendicular modes is small [see (<ref>)]. Thisdifference can be attributed to the fact that, for unstable modes inthe limit |η_i| β_i ≫ 1, γ_ max∼ |η_i| k_ρ_iΩ_i and the value of k_ρ_i at which maximum growth is achievedis still rather small compared to unity. We conclude that the instability of slow hydromagnetic waves that are driven by an ion temperature gradient is likely to be moresignificant than the analogous instability of quasi-perpendicular/KAWmodes. § CES (CHAPMAN-ENSKOG, SHEAR-DRIVEN) MICROINSTABILITIES §.§ Form of CE distribution functionNext, we consider the non-Maxwellian terms of the CE distribution arising frombulk-flow gradients. If we set η_s = 0 for both ions and electrons (viz., neglecting both temperature gradients and electron-ion drifts),the CE distribution functions (<ref>) for both species becomef_s0(v_,v_) = n_s0/v_ths^3 ^3/2exp(-ṽ_s^2) [1 - ϵ_s (v_^2/v_ths^2- v_^2/2 v_ths^2)] ,where we have again chosen the isotropic functions C_s(ṽ_s)to be the ones that arise from the Krook collision operator (see section <ref>). 
We note that for this choice of collision operator, the constant 𝒞_s defined by (<ref>) is 𝒞_s ≈ 3/2, and so the relationship (<ref>) between the CE distribution functions' pressure anisotropy Δ_sand the shear parameter ϵ_s becomesΔ_s = 3/2ϵ_s .We also observe that the CE shear terms have even parity with respect to theparallel velocity v_, and thus for any unstable mode with positive parallel wavenumber k_ > 0, there is a corresponding unstable mode with k_ < 0.This conclusion has the consequencethat the sign of ϵ_s [which is the same as the sign of (ẑẑ - I/3 ) :W_s, where W_s is the rate-of-strain tensor of species s – see (<ref>)] has a significant effect on possible types of CES microinstabilities. Thus, wemust consider the cases ϵ_s > 0 (positive pressure anisotropy, Δ_s > 0)and ϵ_s < 0 (negative pressure anisotropy, Δ_s < 0) separately. For easier comparison toprevious work by other authors, we will sometimes substitute ϵ_s =2 Δ_s/3, and work in terms of Δ_s.As with the discussion of CET microinstabilities in section <ref>, in the main text, weonly present the main findings of our calculations: namely, the overview of the CES stability landscape (section <ref>), and the analytical characterisation of CES microinstabilities with ϵ_s > 0 (section <ref>) and ϵ_s < 0 (section <ref>). The methodology underlying the calculations of growth rates of CES microinstabilitiesis presented in appendix <ref>.§.§ Stability The stability of CE distribution functions of the form (<ref>) is determined as a function of the parameters ϵ_i, ϵ_e, d_e, β_e, β_i, and the velocity scale length L_V = |(ẑẑ - 1/3I) :W_i/V_i|^-1 by assessing whether the maximum microinstability growth rate across all wavelengths smaller than λ_e and λ_iis negative or positive (see appendix <ref> for the methodology underpinning this calculation). As with the temperature-gradient-driven instabilities, we report the results of stability calculations that pertain to a temperature-equilibrated hydrogen plasma; that is, the particular case in which β_i = β_e and ϵ_e = μ_e^1/2ϵ_i [where we recall that the characteristic magnitude of the CE electron velocity-shear term in such a plasma is smaller than the analogous CE ion velocity-shear term by a factor of μ_e^1/2 = (m_e/m_i)^1/2]. Because ϵ_i can take both positive and negative values (see section <ref>), we do one stability calculation for each case; the results of these two calculations are shown in figures <ref> and <ref>, respectively.The key characteristics of the stability of the CE distribution function(<ref>) for ions and electrons can be shown usingplots over a two-dimensional (d_e/L_V,Ma λ_e/L_V) parameter space at fixed β_eand Ma – we remind the reader that Ma λ_e/L_V = |ϵ_i|, and thatthe Mach number Ma is assumed to satisfy Ma≲ 1– as opposed to the five-dimensional (ϵ_i,d_e,L_V,β_e,Ma) parameter space that might naively be anticipated, because the two relevant stability thresholds are not independentfunctions of d_e, Ma, and L_V. The regions of stability presented in figure <ref>a for ϵ_i > 0 (viz., for shear flows that drive positive pressure anisotropy) and in figure <ref>a for ϵ_i < 0 (viz., for shear flows drivingnegative pressure anisotropy), respectively, are broadly similar to the region of stability for CET microinstabilities described in section<ref> (and shown in figure <ref>a), but with one crucial difference. 
Once again, for d_e/L_V less than a critical value (d_e/L_V)_ c0, stability is independent of d_e/L_V, and there are no instabilities for Ma λ_e β_e/L_V ≪ 1; for d_e/L_V ≳ (d_e/L_V)_ c0 and Ma λ_e β_e/L_V > 1, stability is guaranteed if (and only if) d_e/L_V > (d_e/L_V)_ c at fixed Ma λ_e/L_V, where (d_e/L_V)_ c is a monotonically increasing function of Ma λ_e/L_V. As before, these two bounding thresholds correspond to the β-stabilisation conditions and collisional-stabilisation conditions, respectively, of CES microinstabilities. However, the dependence of (d_e/L_V)_ c on Ma λ_e/L_V is more complicated than the analogous relationship between (d_e/L_T)_ c and Ma λ_e/L_T that was presented in figure <ref>a. Namely, if Ma λ_e/L_V ≳β_e^-1μ_e^-1/2, then (d_e/L_V)_ c suddenly shifts towards a larger value, with the subsequent (power-law) relationship between (d_e/L_V)_ c and Ma λ_e/L_V being distinct from the analogous relationship when Ma λ_e/L_V ≲β_e^-1μ_e^-1/2. This behaviour is the result of a feature of the unstable region that is present for CES but not CET microinstabilities: different instabilities are dominant in different regions of the (d_e/L_V,Ma λ_e/L_V) parameter space. As we will see, this arises because CES microinstabilities on ion scales have less stringent β-stabilisation thresholds than those on electron scales. Although their regions of stability are qualitatively similar, the types of microinstabilities that arise when ϵ_i > 0 or ϵ_i < 0 are quite different, so we now discuss each case in turn.
§.§.§ Positive pressure anisotropy
For ϵ_i > 0 and 0.5 μ_e^-1/2β_e^-1≳Ma λ_e/L_V ≫β_e^-1, the fastest-growing CES microinstability is the mirror instability: that is, a non-propagating, compressible slow mode on ion scales that is destabilised by positive ion pressure anisotropy. For Ma λ_e β_e/L_V ≳ 0.5 μ_e^-1/2, a faster-growing CES microinstability emerges on electron Larmor scales, driven by positive electron pressure anisotropy: the whistler (electron-cyclotron) instability. For fixed β_i, the CES mirror instability can operate at smaller values of Ma λ_e/L_V than the CES whistler instability, because the mirror-instability threshold Δ_i β_i = 3 Ma λ_e β_i/2 L_V ≥ 1 (see section <ref>) is a less stringent condition on Ma λ_e/L_V for fixed β_e than the threshold Δ_e β_e = 3 μ_e^1/2Ma λ_e β_e/2 L_V ≳ 0.5 of the CES whistler instability (see section <ref>). On the other hand, once Ma λ_e β_e/L_V ≳ 0.5 μ_e^-1/2, the maximum growth rate of the CES mirror instability γ_ mirr∼Δ_i Ω_i is much smaller than that of the CES whistler instability: γ_ whistler,S∼Δ_e Ω_e ∼μ_e^-1/2Δ_i Ω_i ≫Δ_i Ω_i. For Ma λ_e β_e/L_V ≫μ_e^-1/2, in addition to unstable whistler modes, modes on sub-electron-Larmor scales are also destabilised: this is the parallel transverse instability, a microinstability that is essentially unmagnetised (k ρ_i ≫ 1) in character. When it can operate, the CES parallel transverse instability has a much larger growth rate than the unstable electron-Larmor-scale whistler waves, γ_ trans∼Δ_e (Δ_e β_e)^1/2Ω_e ≫γ_ whist∼Δ_e Ω_e, so if Ma λ_e β_e/L_V ≫μ_e^-1/2, the transverse instability dominates. Numerical evidence for the dominance of the CES mirror instability when μ_e^-1/2β_e^-1≫Ma λ_e/L_V ≫β_e^-1, and then the CES parallel transverse instability when Ma λ_e/L_V ≫μ_e^-1/2β_e^-1, can be produced by isolating the maximum growth rate, the parallel wavenumber and the wavevector angle associated with peak growth for the unstable regions of the (d_e/L_V,Ma λ_e/L_V) parameter space.
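Before turning to that numerical evidence, the order-of-magnitude thresholds just quoted can be collected into a schematic helper; the hard cutoffs, the conversion Δ_s = 3ϵ_s/2, the assumption β_i = β_e, and the hydrogen mass ratio used below are illustrative simplifications of the order-unity boundaries discussed above.

```python
MU_E = 1.0 / 1836.0   # electron-to-ion mass ratio (hydrogen)

def ces_positive_anisotropy_regimes(Ma_lambda_e_over_LV, beta_e):
    """Schematic classification of the CES microinstabilities expected to
    operate for epsilon_i > 0, following the order-of-magnitude thresholds
    quoted in the text (beta_i = beta_e assumed). Returns the active
    instabilities, slowest first."""
    x = Ma_lambda_e_over_LV
    Delta_i = 1.5 * x                   # Delta_i = 3 epsilon_i / 2
    Delta_e = MU_E ** 0.5 * Delta_i     # Delta_e ~ mu_e^(1/2) Delta_i
    active = []
    if Delta_i * beta_e >= 1.0:         # mirror threshold, Delta_i beta_i >~ 1
        active.append("mirror (gamma ~ Delta_i Omega_i)")
    if Delta_e * beta_e >= 0.5:         # whistler threshold, Delta_e beta_e >~ 0.5
        active.append("whistler (gamma ~ Delta_e Omega_e)")
    if Delta_e * beta_e >= 1.0:         # unmagnetised, sub-electron-Larmor branch
        active.append("parallel transverse (gamma ~ Delta_e (Delta_e beta_e)^(1/2) Omega_e)")
    return active

print(ces_positive_anisotropy_regimes(1.0e-3, 1.0e4))
```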
Figure <ref>b shows that, for fixed d_e/L_V and a range of Ma λ_e/L_V, the peak microinstability growth rate is a reasonable match for that of the mirror instability [viz., (<ref>)] for 0.5 μ_e^-1/2β_e^-1≳Ma λ_e/L_V ≫β_e^-1, and a good match for the parallel transverse instability [viz., (<ref>)] for Ma λ_e/L_V ≳μ_e^-1/2β_e^-1. Figure <ref>c demonstrates that, for μ_e^-1/2β_e^-1≳Ma λ_e/L_V ≫β_e^-1, the (non-dimensionalised) parallel wavenumber (k_ρ_e)_ peak of peak growth satisfies (k_ρ_e)_ peak∼μ_e^1/2, in agreement with the expected parallel wavenumber of the fastest-growing mirror modes [see (<ref>)]. At Ma λ_e/L_V ∼μ_e^-1/2β_e^-1, there is a dramatic shift in (k_ρ_e)_ peak to a value (k_ρ_e)_ peak≳ 1 that agrees with the expected parallel wavenumber of the parallel transverse instability [see (<ref>)]. As for the peak-growth wavevector angle (figure <ref>d), for β_e^-1≲Ma λ_e/L_V ≲μ_e^-1/2β_e^-1, the dominant instability is oblique (as would be expected for the mirror instability), while for Ma λ_e/L_V ≳ 0.5 μ_e^-1/2β_e^-1, it is parallel (implying that the CES whistler/parallel transverse instability dominates). We conclude that the mirror instability is indeed dominant when 0.5 μ_e^-1/2β_e^-1≳Ma λ_e/L_V ≫β_e^-1, and the parallel transverse instability when Ma λ_e/L_V ≫μ_e^-1/2β_e^-1.
§.§.§ Negative pressure anisotropy
Now considering the case when ϵ_i < 0, i.e., the case of negative pressure anisotropy, the only CES microinstability that operates when μ_e^-1/2β_e^-1≳Ma λ_e/L_V ≫β_e^-1 is the firehose instability: the destabilisation of Alfvén waves by ion pressure anisotropies Δ_i ≲ -1/β_i[In the limit of wavelengths much larger than the ion Larmor radius, the firehose instability threshold is well known to be Δ_i = (Δ_i)_ c < -2/β_i. However, for plasmas whose ion species have either a CE distribution function or a bi-Maxwellian distribution, the instability threshold for oblique ion-Larmor-scale firehose modes is somewhat less stringent: see section <ref>.]. If Ma λ_e/L_V ≳μ_e^-1/2β_e^-1, several electron-scale CES microinstabilities arise, all of which tend to have larger growth rates than the firehose instability. The first of these to develop (at Ma λ_e/L_V ∼μ_e^-1/2β_e^-1) is the oblique electron firehose instability: the destabilisation of oblique kinetic-Alfvén waves by negative electron pressure anisotropy. For μ_e^-1/2β_e^-1≲Ma λ_e/L_V ≲μ_e^-1/2β_e^-5/7, the electron-scale-transition (EST) instability begins to operate; this is a non-propagating quasi-perpendicular mode on electron Larmor scales (k_⊥ρ_e ∼ 1 ≫ k_ρ_e), which, while damped in a Maxwellian plasma, is unstable for sufficiently negative electron pressure anisotropies, and grows more rapidly than the oblique electron firehose instability. For μ_e^-1/2β_e^-5/7≲Ma λ_e/L_V ≲μ_e^-1/2β_e^-1/3, the EST instability is surpassed by the whisper instability: the instability of a newly discovered propagating wave in a Maxwellian plasma (a whisper wave) whose perpendicular wavelength is on sub-electron-Larmor scales (k_⊥ρ_e ≫ 1), but whose parallel wavelength is above the electron-Larmor scale (k_ρ_e < 1). Finally, when Ma λ_e/L_V ≳μ_e^-1/2β_e^-1/3, the oblique transverse instability comes to predominate; unlike either the oblique electron firehose, the EST, or whisper instabilities, it is unmagnetised in nature (like its parallel relative).
Of these four instabilities, the oblique electron firehose and transverse instabilities have been identified previously (see references in sections <ref> and <ref>, respectively), but not the EST or whisper instabilities. We support these claims (in an analogous manner to the ϵ_i > 0 case) by calculating the growth rate of the dominant microinstabilities for given points in the (d_e/L_V,Ma λ_e/L_V) parameter space. Figure <ref>b shows the maximum growth rate for a fixed value of d_e/L_V. For μ_e^-1/2β_e^-1≳Ma λ_e/L_V ≫β_e^-1, the peak growth rate follows the analytical prediction for the ion firehose instability, γ_ fire∼ |Δ_i|^1/2Ω_i/√(log1/|Δ_i|), when Δ_i ≪ -2/β_i [see (<ref>)]. For Ma λ_e/L_V ≳μ_e^-1/2β_e^-1, the peak growth rate becomes much greater than γ_ fire; for β_e^-5/7≳μ_e^1/2Ma λ_e/L_V ≫β_e^-1, it instead matches that of the EST instability, γ_EST∼ |Δ_e| (|Δ_e| β_e)^3/2Ω_e/√(log|Δ_e| β_e) [see (<ref>)], where we remind the reader that |Δ_e| = 3 μ_e^1/2Ma λ_e/2 L_V. For μ_e^1/2Ma λ_e/L_V ≫β_e^-5/7, the observed growth rate agrees with an analytical prediction for the whisper instability, γ_whisp∼ |Δ_e|^1/2(|Δ_e| β_e)^1/4Ω_e/√(log|Δ_e| β_e) [see (<ref>)]. Finally, because of the value of β_e chosen for this numerical example, the condition Ma λ_e/L_V ≳μ_e^-1/2β_e^-1/3 under which the oblique transverse instability dominates is never met for Ma λ_e/L_V ≪ 1, and thus the numerically measured growth rate of the dominant CES microinstability is larger than the transverse instability's peak growth rate γ_ trans∼ |Δ_e| (|Δ_e| β_e)^1/2Ω_e [see (<ref>)] for the entire range of Ma λ_e/L_V that we show in figure <ref>b (blue line). A further confirmation that the most important microinstabilities are those that we have explicitly identified is obtained by calculating the parallel and perpendicular wavenumbers associated with the dominant microinstability. Figures <ref>c and <ref>d show that, for β_e^-1≪Ma λ_e/L_V ≪μ_e^-1/2β_e^-1, (k_ρ_e)_peak∼ (k_⊥ρ_e)_peak∼μ_e^1/2. These values of (k_ρ_e)_peak are consistent with the properties of the fastest-growing unstable firehose modes (see sections <ref> and <ref>), whose parallel wavenumber (approximately) satisfies (k_ρ_i)_peak∼ 1/√(log1/|Δ_i|) when Δ_i ≪ -2/β_i [see (<ref>)], and whose wavevector angle is θ_peak≈ 39^∘. At Ma λ_e/L_V ∼μ_e^-1/2β_e^-1, the magnitudes of the parallel and perpendicular wavenumbers change abruptly, to (k_ρ_e)_peak∼ (k_⊥ρ_e)_peak∼ 1; this is in line with expectations from the onset of the oblique electron firehose instability when |Δ_e| β_e ∼ 1. For Ma λ_e/L_V ≫μ_e^-1/2β_e^-1 (|Δ_e| β_e ≫ 1), the parallel scale of the fastest-growing mode remains above electron Larmor scales [(k_ρ_e)_peak< 1], while (k_⊥ρ_e)_peak increases monotonically above unity. Both findings match theoretical expectations concerning the evolution of the parallel and perpendicular wavenumbers of the EST and whisper instabilities as functions of increasing |Δ_e| β_e, and analytic formulae for these quantities are in reasonable agreement with the numerical results (see sections <ref> and <ref>).
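For convenience, the regime boundaries and growth-rate scalings quoted in this subsection can be gathered into a single sketch. All order-unity prefactors are set to one and the regime boundaries are made artificially sharp, so this should be read as a summary of scalings rather than a substitute for the detailed calculations in sections <ref>–<ref>.

```python
import numpy as np

MU_E = 1.0 / 1836.0   # electron-to-ion mass ratio (hydrogen)

def ces_negative_anisotropy_dominant(Ma_lambda_e_over_LV, beta_e, Omega_e=1.0):
    """Dominant CES microinstability for epsilon_i < 0 and its order-of-magnitude
    growth rate (in units of Omega_e), using the regime boundaries and scalings
    quoted in the text with all order-unity prefactors set to one."""
    x = Ma_lambda_e_over_LV
    d_e = 1.5 * MU_E ** 0.5 * x          # |Delta_e| = (3/2) mu_e^(1/2) Ma lambda_e / L_V
    d_i = 1.5 * x                        # |Delta_i|
    if x < 1.0 / beta_e:                 # below the beta-stabilisation threshold
        return "none", 0.0
    if x < MU_E ** -0.5 / beta_e:        # only the ion-scale firehose operates
        return "firehose", MU_E * np.sqrt(d_i) / np.sqrt(np.log(1.0 / d_i)) * Omega_e
    log_fac = np.sqrt(np.log(d_e * beta_e))
    if x < MU_E ** -0.5 * beta_e ** (-5.0 / 7.0):
        return "EST", d_e * (d_e * beta_e) ** 1.5 / log_fac * Omega_e
    if x < MU_E ** -0.5 * beta_e ** (-1.0 / 3.0):
        return "whisper", np.sqrt(d_e) * (d_e * beta_e) ** 0.25 / log_fac * Omega_e
    return "oblique transverse", d_e * np.sqrt(d_e * beta_e) * Omega_e

print(ces_negative_anisotropy_dominant(1.0e-2, 1.0e4))
```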
§.§.§ Collisional stabilisation
For both ϵ_i > 0 and ϵ_i < 0, the shift in (d_e/L_V)_ c at Ma λ_e/L_V ∼μ_e^-1/2β_e^-1 observed in figures <ref>a and <ref>a can be explained in terms of the ion-scale and electron-scale microinstabilities having distinct collisional-stabilisation conditions of the form (<ref>) (viz., k λ_e ∼ k λ_i ≲ 1), with the condition on the ion-scale instabilities being more restrictive. The wavenumbers k_ mirr and k_ fire at which maximal growth of the ion mirror and firehose instabilities occurs satisfy k_ mirrρ_i ∼ 1 and k_ fireρ_i ≲ 1, respectively, for Ma λ_e β_e/L_V ≫ 1, leading to the collisional-stabilisation condition λ_e/L_V≲ρ_i/L_V∼μ_e^-1/2β_e^1/2d_e/L_V. For the electron-scale microinstabilities, the parallel and the oblique transverse instabilities have the largest (common) wavenumber of all such instabilities that operate when ϵ_i > 0 and ϵ_i < 0, respectively, and so provide the most demanding collisional-stabilisation conditions. For both transverse instabilities, the wavenumber at which peak growth occurs satisfies k_ transρ_e ∼ (μ_e^1/2Ma λ_e β_e/L_V)^1/2 [see (<ref>)], which in turn can be rearranged to give the collisional-stabilisation condition λ_e/L_V≲Ma^-1/3μ_e^-1/6(d_e/L_V)^2/3. Bringing these results together, we find (d_e/L_V)_ c = μ_e^1/2β_e^-1/2λ_e/L_V for β_e^-1≪Ma λ_e/L_V < μ_e^-1/2β_e^-1, and (d_e/L_V)_ c = μ_e^1/4Ma^1/2(λ_e/L_V)^3/2 for Ma λ_e/L_V≳μ_e^-1/2β_e^-1, with (d_e/L_V)_ c0 = μ_e^1/2β_e^-3/2. This matches asymptotically the numerical results shown in figures <ref>a and <ref>a. These findings confirm that, once again, the relevant collisional-stabilisation condition for microinstabilities with wavenumber k is k λ_e∼ k λ_i≪ 1 [viz., (<ref>)], as opposed to the more restrictive conditions γτ_i ≫ 1 and γτ_e ≫ 1 on the CES ion-scale and electron-scale instabilities, respectively. Similarly to the collisional-stabilisation condition on the CET whistler instability (see section <ref>), we note that the collisional-stabilisation condition on any of these microinstabilities can never actually be satisfied in a strongly magnetised plasma, because k λ_i ≳λ_i/ρ_i ≫ 1 for the ion-scale instabilities, and k λ_e ≳λ_e/ρ_e ≫ 1 for the electron-scale instabilities.
§.§.§ Outline of the rest of this section
Further discussion of the properties and growth rates of CES microinstabilities with ϵ_s > 0 (viz., those driven by positive pressure anisotropy) can be found in section <ref>, with the mirror, whistler and transverse instabilities discussed in sections <ref>, <ref> and <ref>, respectively. In addition to these, there is another instability (the electron mirror instability) that can be driven by positive pressure anisotropy of CE distribution functions and that we note in passing: it consists in KAWs driven unstable by the CE electron-shear term, and to some extent by the ion-shear term (section <ref>). The electron mirror instability does not appear to be the fastest-growing CES microinstability anywhere in the (d_e/L_V,Ma λ_e/L_V) parameter space; since the instability is subdominant to two other electron-scale instabilities (the whistler and transverse instabilities), this would seem to imply that it is comparatively less important. CES microinstabilities with ϵ_s < 0 (viz., those driven by negative pressure anisotropy) are explored in section <ref>. The firehose instability is overviewed in section <ref>, with its four subclasses (parallel, oblique, critical-line, and sub-ion-Larmor-scale) then considered in sections <ref>, <ref>, <ref>, and <ref>.
The oblique electron firehose instability is discussed in section<ref>, the EST instability in section<ref>, the oblique transverse instability in section <ref>, and the whisper instability in section<ref>. We identify two additional CES microinstabilitieswhich are never the fastest-growing microinstability in any unstable region: the parallel electron firehose instability (section<ref>), which (in spite of its name) has a differentunderlying physical mechanism than the oblique electron firehose, and theordinary-mode instability (section <ref>), which onlyoperates at very high β_e (β_e ≳ |Δ_e|^-3), and is only characteristicallydistinct from the oblique transverse instability in a regime in whichit is slower growing. Readers who do not wish to dwell on specific CES microinstabilities should proceed directly to section <ref>. §.§ CES microinstability classification: positive pressure anisotropy (ϵ_i > 0)§.§.§ Mirror instability The CES mirror instability consists in the destabilisation of compressive slow modesby a sufficiently large positiveion pressure anisotropy associated with the ion-shear term of the ion CE distribution function. In a high-β plasma with Maxwellian ion and electron distribution functions, the slow mode – which is one of the two plasmamodes which exist at oblique wavevector angles θ≳β_i^-1/4 (the other being the shear Alfvénwave), and consists of a perturbation to the magnetic field's strength – is non-propagating, being subject to strong Barnes' (equivalently, transit-time)damping <cit.>. This damping is the result of Landau-resonant interactions between the slow mode and co-moving ions with v_ = ω/k_; since,for a distribution function that decreases monotonically with v_ > 0, thereare more ions with v_ < ω/k_ than with v_ > ω/k_,there is a net transfer of free energy from the slow modes to the ions (as a particle accelerationprocess, this is sometimes called betatron acceleration). However, in a plasma with Δ_i > 0, there is an increase in the relative number of ions with large pitch anglesin the troughs of the slow mode's magnetic-field strength perturbation,giving rise to excess perpendicular pressure. When Δ_i > 1/β_i, this excess pressure overbalances the magnetic pressure, leading to the mirror instability. In CE plasma with 0 < Δ_i β_i -1 ≪ 1, only quasi-perpendicular long-wavelength mirror modes (k_ρ_i ≪ k_⊥ρ_i ≪ 1) are destabilised; forlarger values of Δ_i, a broad range of slow modes (including ion-Larmor-scale ones)become unstable. Chronologically, the earliest discussions of themirror instability in pressure-anisotropic plasmas are due to <cit.> and <cit.>.<cit.> provide a detailed and lucid discussion of the linear physics of the mirror instability <cit.>; various analytical <cit.> andnumerical <cit.> studies investigating its nonlinear evolution have also been carried out. The CES mirror instability can be characterised analytically – and simple expressionsderived for the maximum growth rate and the wavevector at which that growth is attained – in the limit ofmarginal instability. First, we define the threshold parameter Γ_i ≡β_i Δ - 1,where Δ≡Δ_i + Δ_e = (1+μ_e^1/2)Δ_i,and assume that Γ_i ≪ 1. 
It can then be shown (see appendix <ref>) that under the orderingsk_ρ_i∼ k_^2 ρ_i^2 ∼Γ_i ≪ 1 , γ/Ω_i∼Γ_i^2/β_i≪ 1 ,the mirror modes have a growth rate given by γ/Ω_i = k_ρ_i/√()β_i(Γ_i -3/2k_^2/k_^2-3/4 k_^2 ρ_i^2 ).This is the same result as the growth rate of the mirror instability in abi-Maxwellian plasma, with (the anticipated) threshold Γ_i > 0 <cit.>.The peak growth rate γ_max is then given by γ_max = Γ_i^2/6 √(2 )β_iΩ_i ,achieved at the wavenumber (k_ρ_i)_peak =Γ_i/3√(2),(k_ρ_i)_peak = Γ_i^1/2/√(3).This recovers the results of <cit.>.Figure <ref> illustrates the accuracy of the above predictions for γ (and therefore γ_ max),(k_ρ_i)_peak and (k_ρ_i)_peak bycomparing them with the equivalent values obtained numerically using the general method outlined in appendix <ref> for a particular value of Γ_i ≪ 1. The wavenumber dependence of the numerically determined growth rate (see figure <ref>a) corroborates that, close to marginality, the unstablemirror modes are quasi-perpendicular; more quantitatively, the values ofk_ρ_i and k_ρ_i at which peak growth is obtained numerically match (<ref>).Furthermore, the growth rate (<ref>)agrees well with the numerical result when plotted as a function of k_ρ_i with fixed k_ρ_i, and also as a function of k_ρ_i with fixed k_ρ_i (figure<ref>b).In contrast, for finite Γ_i ≳ 1, simple expressions for γ_ max,(k_ρ_i)_peak, and (k_ρ_i)_peak arechallenging to derive analytically. Our numerical calculations indicate that, when Γ_i ∼1, a broad range of (purely growing) oblique modes becomes unstable, with maximum growth rate γ_ max∼Ω_i/β_i ∼ΔΩ_i attained when k_ρ_i ≲ k_⊥ρ_i ∼ 1 (figure <ref>a).Therefore, asymptotic expansions that treat k_ρ_i and k_ρ_ias small or large cannot be used to derive simplified expressions for the growthrate of the fastest-growing mirror modes. While the expressions (<ref>) for the wavenumber of peak growth derived in the case of near-marginality remainqualitatively correct, they are no longer quantitatively accurate; the sameconclusion applies to the expression (<ref>) for the growth ratewhen k_ρ_i ∼ k_⊥ρ_i ∼ 1 (figure <ref>b). That being said, an expression similar to(<ref>) can be derived (see appendix <ref>) for long-wavelength unstable mirrormodes that satisfy the ordering k_ρ_i∼ k_ρ_i≪ 1 , γ/Ω_i∼k_ρ_i/β_i∼Δ k_ρ_i ≪ 1 .This expression isγ/Ω_i = k_ρ_i/√()β_i(Γ_i -Γ_i + 3/2k_^2/k_^2). It implies that all such modes withk_ > (3+Γ_i/2 Γ_i)^1/2 k_will be unstable, a prediction that is consistent with the unstable region observed in figure <ref>a. When Γ_i ≫ 1, but Γ_i < (m_i/m_e)^1/2,the region of (k_,k_) space in which mirror modes are unstable is qualitatively similarto the Γ_i ∼ 1 case, albeit more extended (figure<ref>a). We find that in this limit, the maximum growth rateγ_max becomes directly proportional to Δ (see figure<ref>b), in contrast to the marginal case (<ref>):γ_max≈ 0.2 ΔΩ_i .This growth is attained at parallel and perpendicular wavenumbers(k_ρ_i)_peak≈ 1.2 ,(k_ρ_i)_peak≈0.7 ,which depend only weakly on Δβ_i.Some understanding of these results can be derived by considering the dispersionrelation of mirror modes on sub-ion Larmor scales. 
Adopting the orderingk_ρ_i ∼ k_⊥ρ_i ∼ (Δ_i β_i)^1/2≫ 1 , γ/Ω_i∼Δ_i ,while assuming that Δ_i β_i ≪μ_e^-1/2,one finds (see appendix <ref>) thatγ/Ω_i≈k_/k√((k^2 ρ_i^2/β_i - Δ_i k_^2-k_^2/k^2) (Δ_i k_^2/k^2 - k^2 ρ_i^2/β_i)).This can be re-written in terms of the wavevector angle θ = tan^-1(k_⊥/k_) asγ/Ω_i≈cosθ√([k^2 ρ_i^2/β_i - Δ_i (cos^2θ-sin^2θ) ] (Δ_i cos^2θ - k^2 ρ_i^2/β_i)).Analysing this expression leads to three conclusions. First, for θ > 45^∘, there is an instability at all wavenumbers satisfying k ρ_i < (Δ_i β_i)^1/2cosθ, explaining the expansion of the unstable region of (k_,k_)-spacewith increasing Δ_i β_i. For θ≤ 45^∘, growth only occurs over a more limited rangeof wavenumbers √(cos^2θ-sin^2θ) < k ρ_i/(Δ_i β_i)^1/2 < cosθ. Secondly, growth in this limit is maximised when k ρ_i ≪ (Δ_iβ_i)^1/2, with the maximal growth rateγ_ max = 1/3 √(3)Δ_i Ω_i≈ 0.19 Δ_i Ω_iattained at cosθ = 1/√(3) (θ≈ 55^∘). This expression for γ_ maxis (surprisingly) close to the numerically measured peak growth rate(<ref>). For k ρ_i ∼ (Δ_iβ_i)^1/2, the maximum growth rate is smaller than(<ref>) by an order-unity factor. Finally, when k ρ_i ≫ (Δ_i β_i)^1/2, viz.,in a wavenumber regime where there are no unstable mirror modes, (<ref>) becomes imaginary, implying that the modes have a real frequency given byω≈± k_ k_ρ_e Ω_e/β_i. This is the dispersion relation of kinetic Alfvén waves (KAWs) in a high-βplasma[We note that (<ref>) is also the same dispersion relation as that of oblique whistler waves <cit.>.However, as was discussed in section <ref>, in ahigh-β plasma (β_e ≫μ_e^-1/2), the small frequency (ω≪ k_ v_thi) ofperturbations prohibits all but parallel perturbations from not interacting significantly withthe ions, and thus we believe that the modes are more accurately identified as KAWs.].In short, at Δ_i β_i ≫ 1, KAWs are also destabilised by positive ion pressureanisotropy in addition to longer-wavelength mirror modes. We note that KAWs can also be destabilised by positive electron anisotropy, but the characteristic wavelength of such modes is preferentially comparable to electron Larmor scales (see section <ref>). §.§.§ Whistler instabilityThe CES whistler instability arises when the free energy associated with positive electron-pressure anisotropy Δ_e of the electron CE distribution function destabilises whistler waves, overwhelming both the electron cyclotron damping (whichis the dominant stabilisation mechanism for whistlerwaves with k_ρ_e ∼ 1) and the Landau damping due to the ion species (the domininantstabilisation mechanism for waves with k_ρ_e ≪ 1).In the special case of static ions, electron cyclotron damping can be overcome by a positive electron-pressure anisotropy of any magnitude for whistler waves with sufficiently long wavelengths.Retaining mobile ions, the instability operates only if Δ_e exceeds a threshold of order (Δ_e)_ c∼β_e^-1. When Δ_e > (Δ_e)_ c,gyroresonant interactions between electrons with v_ = ±Ω_e/k_ and whistler waves allow for free energy to pass from the former to the latter,and so an increasingly broad spectrum of unstable parallel and oblique modes emerges onelectron Larmor scales.The analogue of this instability in a bi-Maxwellian plasma was found by <cit.>, and it has since been studied numerically in moderately high-β plasma (β_e ∼ 1-10) by several authors <cit.>. Similarly to the CET whistler instability, the simplest characterisation of the CES whistler instability is for unstable parallel whistler modes(viz., k ≈ k_). 
Assuming that these modes satisfy theorderingsω̃_e = ω/k_ v_the∼Δ_e ∼1/β_e, k_ρ_e ∼ 1 ,it can be shown (see appendix <ref>) that their real frequency ϖ and growth rate γ satisfyϖβ_e/Ω_e =±Δ_e β_e ±k_ ρ_e [Δ_e β_e (1+μ_e^1/2)- k_^2 ρ_e^2] Z(1/k_ ρ_e) /[Z(1/k_ ρ_e)]^2 + exp(-2/k_^2 ρ_e^2) , γβ_e/Ω_e=k_ ρ_e [exp(-1/k_^2 ρ_e^2)+μ_e^1/2](Δ_e β_e- k_^2 ρ_e^2) +μ_e^1/2 Δ_e β_e Z(1/k_ ρ_e) /[Z(1/k_ ρ_e)]^2/√() + √() exp(-2/k_^2 ρ_e^2) ,where the terms proportional to μ_e^1/2 are associated with the ion species[Formally, these terms are O(μ_e^1/2) under our assumed ordering, and so should be dropped. However, because of the exponential dependence of the other damping/growth terms on k_ρ_e,these terms play an important role for moderate values of k_ρ_e, viz. μ_e^1/2exp(1/k_^2 ρ_e^2)≥ 1 for k_ρ_e ≤√(2)/√(logm_i/m_e)≈0.5, so we retain them.]. In the limit μ_e → 0, formally there is always instability provided Δ_e β_e > 0; however,for a hydrogen plasma (μ_e ≈ 1/1836), it can be shown numerically that thenumerator of (<ref>b) only becomes positive (over a narrow intervalof parallel wavenumbers around k_ρ_e ≈ 0.60) for Δ_e β_e >0.56. The dispersion curves ϖ(k_) and γ(k_) of the unstable whistler waves ina hydrogen plasma for three different values of Δ_e β_ethat are above the necessary value for instability are shown infigure <ref>. When Δ_e β_e ≳ 1, the growth rate is postivefor a range Δ k_∼ρ_e^-1 around k_ρ_e ∼1, attaining a characteristic magnitude γ∼ϖ∼Ω_e/β_e. As before, we characterise the growth rate for variousvalues of Δ_e β_e by taking subsidiary limits.First, for Δ_e β_e ≪ 1, a necessary (though not always sufficient) condition for positive growth is k_ρ_e < (Δ_e β_e)^1/2≪ 1.We therefore expand (<ref>) in k_ρ_e ∼ (Δ_e β_e)^1/2≪1, finding thatϖ≈k_^2 ρ_e^2/β_e Ω_e , γ≈ √()/k_ ρ_e {exp(-1/k_^2 ρ_e^2) (Δ_e - k_^2 ρ_e^2/β_e) -μ_e^1/2 k_^2 ρ_e^2/β_e } Ω_e . Similarly to what we showed in section <ref> for the CET whistler instability,we have once again found unstablewhistler waves. For comparison's sake, the approximate expressions (<ref>)are plotted in figure <ref> in additionto their exact analogues (<ref>); it is clear thatthere is reasonable agreement for a moderately small value of Δ_eβ_e, but that the approximations become less accurate fork_ρ_e ≳0.5 and Δ_e β_e > 1. In the limit μ_e → 0, the expression (<ref>b)for the growth rate is very similar to that of the whistler (electron-cyclotron) instability in a plasma with a bi-Maxwelliandistribution and positive electron pressure anisotropy <cit.>. In this case, whistler modes with k_ρ_e < (Δ_e β_e)^1/2 are always unstable, although the growth rateof such modes is exponentially small in Δ_e β_e ≪ 1 as compared tothe frequency (<ref>a), and so γ≪ϖ∼Ω_e/β_e.By contrast, with small but finite μ_e = m_e/m_i, it can be shown analytically that,for (<ref>b) to be positive, Δ_e > (Δ_e)_ c, where (Δ_e)_ c=1/β_e W_ Lam[μ_e^-1/2exp(-1)]≈ 1/β_e1/log(μ_e^-1/2)-1-log[log(μ_e^-1/2)-1].Here, W_ Lam(x) denotes the Lambert W function <cit.>.Unstable modes first develop around (k_ρ_e)_c = (Δ_e)_ c^1/2 /[(Δ_e)_ c+1/β_e]^1/2. In a hydrogen plasma, this gives (Δ_e)_ c≈ 0.49/β_e and (k_ρ_e)_c ≈0.57, which are similar to the instability threshold and wavenumber, respectively, determined numerically if γ is computed for arbitrary values of k_ρ_e; the small discrepancy isdue to the finite value of k_ρ_e at which instabilityfirst emerges. 
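The Lambert-W expression (<ref>) for the instability threshold is simple to evaluate; a minimal Python sketch using scipy, and assuming a hydrogen mass ratio μ_e = 1/1836, is given below, together with the logarithmic approximation and the wavenumber at which unstable modes first appear.

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

mu_e = 1.0 / 1836.0                       # hydrogen mass ratio (assumed)
x = np.exp(-1.0) / np.sqrt(mu_e)          # argument mu_e^(-1/2) * exp(-1)

Delta_c_beta = 1.0 / np.real(lambertw(x))           # (Delta_e)_c * beta_e, exact
kpar_c = np.sqrt(Delta_c_beta / (Delta_c_beta + 1.0))

L = np.log(1.0 / np.sqrt(mu_e))                      # logarithmic approximation
Delta_c_beta_log = 1.0 / (L - 1.0 - np.log(L - 1.0))

print("(Delta_e)_c * beta_e = %.3f   (log approximation: %.3f)"
      % (Delta_c_beta, Delta_c_beta_log))
print("(kpar rho_e)_c       = %.3f" % kpar_c)
\end{verbatim}

This returns 0.49 and 0.57 for the exact threshold and wavenumber, in line with the values quoted above; the purely logarithmic approximation is noticeably less accurate at this mass ratio.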
Formally, (Δ_e)_ c→ 0 as μ_e → 0, butthe limit converges only logarithmically in μ_e, suggesting that in anactual plasma, the CES whistler instability will generically have a threshold at a finite value of Δ_e β_e. Let us now turn to the opposite subsidiary limit Δ_e β_e ≫ 1. We find from (<ref>b) that maximal growthoccurs at k_ρ_e ∼ (Δ_e β)^1/2≫ 1:ϖ≈1/[Δ_e (-2)+ k_^2 ρ_e^2/β_e] Ω_e , γ≈ k_ ρ_e/√() (Δ_e - k_^2 ρ_e^2/β_e) Ω_e .Alongside k_ρ_e ≪ 1 approximations, these approximations are plotted in figure<ref>, and agree well with the numerical results for Δ_e β_e ≳ 3and k_ρ_e ≳ 2. The maximum growth rate γ_max = 2/3 √(3 )Δ_e (Δ_e β_e)^1/2Ω_e ≈ 0.22 Δ_e (Δ_e β_e)^1/2Ω_eis attained at the parallel wavenumber (k_ρ_e)_peak = (Δ_e β_e/3)^1/2.A notable feature of the CES whistler instability in this subsidary limit is that the fastest-growingmodes are on sub-electron-Larmor scales; thus, such modes are arguably betterconceptualised not as whistler modes, but as unstable, unmagnetised plasma modes (see section <ref>).Similarly to the CET whistler instability, analytical expressions for the frequency and growth rate ofunstable modes that have an oblique wavevector angle are much less simplethat the analogous expressions for parallel whistler modes. It can be shown (see appendix <ref>) thatthe complex frequency of such modes is given by ω = Ω_e/β_e k_ρ_e - i B_S±√(-B_S^2 + 4A_SC_S)/2 A_S,where the functions A_S = A_S(k_ρ_e,k_⊥ρ_e,Δ_e β_e), B_S = B_S(k_ρ_e,k_⊥ρ_e,Δ_e β_e), and C_S = C_S(k_ρ_e,k_⊥ρ_e,Δ_e β_e)are composed of the sums and products of special mathematical functions. When Δ_e β_e ∼ 1, (<ref>) implies that if there is an instability, its growth rate will be of order γ∼Ω_e/β_e at k_ρ_e, k_⊥ρ_e ∼ 1. To confirm this expectation, in figure <ref> we plot the maximum growth rate (obtained numerically) of oblique modes across the (k_,k_⊥)-planefor two of the values of Δ_e β_e used in figure <ref>. For Δ_e β_e not far beyond the threshold of the CES whistlerinstability (figure <ref>a), the unstable modes are quasi-parallel and have growth rates γ≪Ω_e/β_e (cf. figure <ref>, left panel).For Δ_e β_e ≳1, a broader spectrum of wavenumbers becomes unstable (figure<ref>b). The parallel mode remains the fastest growing in this case;however, oblique modes with k_⊥≲ k_/2 also have growth rates of comparable magnitude: e.g., the fastest-growing mode with wavevector angle θ = 10^∘ has γ_ max/γ_ max(k_⊥ = 0) ≈0.93, and for a wavevector angle θ = 10^∘, γ_ max/γ_ max(k_⊥ = 0) ≈0.76. For more oblique angles, the growth rate is reduced significantly: e.g., for θ = 30^∘, γ_ max/γ_ max(k_⊥ = 0) ≈0.22. Thus, we conclude that a spectrum of oblique modes in addition to parallel ones isindeed destabilised, with γ∼Ω_e/β_e ≲γ(k_⊥ = 0). We note that, in addition to oblique CES whistler modes, whose characteristic wavenumber domain is k_⊥ρ_e ≲ k_ρ_i ∼ 1, we observe two otherunstable modes in figure <ref>a with different characteristic values of k_ and k_⊥. The first ofthese, which exists on ion scales, is the CES mirror instability, which we already discussed in section<ref>. The second is the CES electron mirror instability – we shall consider this instabilityin section <ref>. §.§.§ Parallel transverse instabilityAs was shown in section <ref>, in the limit Δ_e β_e ≫ 1, the fastest-growingCES microinstability is essentially unmagnetised, and is a variant of the so-called transverseinstability <cit.>. 
This instability is also sometimes referred to as the resonant (electron) Weibel instability,or the Weibel instability at small anisotropy <cit.>. Both the linear theory ofthis instability and its physical mechanism have been explored extensively forbi-Maxwellian plasmas <cit.>, and various studies (both analytical and numerical) of its nonlinear evolution have also been performed <cit.>.For the small anisotropy case that is relevant to CEplasma, the mechanism of the instability is somewhat subtle, involving both non-resonant and Landau-resonant wave-particle interactions. In a Maxwellian plasma, transversemodes are non-propagatingand Landau-damped by electronswith velocities v ≈ω/k_. However, this damping can bereversed by the free energy associated with positive electron-pressure anisotropy atwavenumbers that satisfy k d_e ≲Δ_e^1/2; theelectron Landau damping increases more rapidly with k than the instability's drive, which in turn setsthe wavenumber at which peak growth occurs. The requirement for the corresponding scale to be well belowthe electron Larmor scale – and thus for the plasma to be quasi-unmagnetised with respect to the transverse modes – sets the restriction Δ_e β_e ≫ 1 on the instability's operation.In general, transverse modes whose wavevectors are co-parallel to the velocity-space direction along which the temperature is smallest are the fastest growing; in the case of a CE electron distribution function of the form (<ref>) with Δ_e > 0, these modes' wavevectors are parallel to the magnetic field. However, a broad spectrum ofoblique transverse modes is also destabilised when Δ_e > 0. To characterise the transverse instability's growth analytically, we first assume Δ_e β_e ≫1, and then take directly the unmagnetised limit of the full CES dispersion relation(see appendix <ref>) under the orderingsk_ρ_e ∼ k_ρ_e ∼(Δ_e β_e)^1/2≫ 1 , ω̃_e = ω/k_ v_the∼Δ_e .We obtain two non-propagating modes (real frequency ϖ = 0) that have growth ratesγ_1 = k v_the/√() (Δ_e k_^2-k_^2/k^2 - k^2 ρ_e^2/β_e) , γ_2 = k v_the/√() (Δ_e k_^2/k^2 - k^2 ρ_e^2/β_e) .For Δ_e > 0, the growth rate of the second mode is always positiveand larger than that of the first mode; the first mode only has a positive growth rate provided k_ < k_. Nowtaking the subsidiary limit k_ρ_e ≫ k_ρ_e ≫ 1, we find thatboth roots have the same growth rate:γ≈k_ v_the/√()(Δ_e - k_^2 ρ_e^2/β),which is identical to (<ref>b). We note by comparison with (<ref>a) thatthe unmagnetised limit fails to recover the non-zero real frequencies of the k_ρ_e ≫ 1 whistler modes; this is because theratio of these modes' real frequency ϖ to their growth rate γ is ϖ/γ∼ 1/k_ρ_e ≪ 1.The maximum growth rate γ_max of the second mode (<ref>b) for an oblique wavevector with angle θ is γ_max = 2/3 √(3 )cos^3θ Δ_e (Δ_e β_e)^1/2Ω_e,attained at the (total) wavenumber (k ρ_e)_peak = cosθ(Δ_e β_e/3)^1/2.The parallel and perpendicular wavenumbers of this maximum growth are then(k_ρ_e)_peak = cos^2θ(Δ_e β_e/3)^1/2 ,(k_ρ_e)_peak = cosθsinθ(Δ_e β_e/3)^1/2 .In the special case of parallel modes (θ = 0^∘), this recovers the peak growth rate (<ref>) of the CES whistler instability at k_ in the limit Δ_e β_e ≫ 1. In figure <ref>, we demonstrate that the fastest-growing unstable modes in the limit Δ_e β_e ≫ 1 are indeed transverseones. This figure shows the numerically determined growth rate as a function of k_ and k_⊥), for a particular large value of Δ_e β_e. 
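As a cross-check of these expressions, the sketch below evaluates the second root (<ref>b), with the factor of π restored by assumption and k v_the written as k ρ_e Ω_e, and maximises it numerically over the total wavenumber at several wavevector angles, comparing with the peak values quoted above; the values of Δ_e and β_e are illustrative only.

\begin{verbatim}
import numpy as np

def gamma2(k, theta, Delta_e, beta_e):
    # second transverse root in units of Omega_e; k is in units of 1/rho_e,
    # so that k*v_the = k*rho_e*Omega_e
    return k / np.sqrt(np.pi) * (Delta_e * np.cos(theta)**2 - k**2 / beta_e)

Delta_e, beta_e = 2e-3, 1e4                 # Delta_e*beta_e = 20 (illustrative)
k = np.linspace(1e-3, np.sqrt(Delta_e * beta_e), 2000)
for theta_deg in (0.0, 30.0, 60.0):
    th = np.radians(theta_deg)
    g = gamma2(k, th, Delta_e, beta_e)
    k_ana = np.cos(th) * np.sqrt(Delta_e * beta_e / 3.0)
    g_ana = (2.0 / (3.0 * np.sqrt(3.0 * np.pi))
             * np.cos(th)**3 * Delta_e * np.sqrt(Delta_e * beta_e))
    print("theta=%4.0f: numerical (k, gamma) = (%.3f, %.3e); analytic (%.3f, %.3e)"
          % (theta_deg, k[np.argmax(g)], g.max(), k_ana, g_ana))
\end{verbatim}

The θ = 0 row reproduces the parallel-whistler peak growth of section <ref> in the limit Δ_e β_e ≫ 1, as expected.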
A broad range of sub-electron-Larmor scale modes are unstable (figure <ref>a), with theparallel wavenumber of the fastest-growing ones closely agreeing with theanalytical prediction (<ref>). The analytical expression (<ref>b) forthe transverse instability's growth rate also agrees well with the numerical result as a functionof both k_ and k_⊥ (figure <ref>b).§.§.§ Electron mirror instabilityThe oblique microinstability evident in figure <ref>b at sub-ion-Larmor scalesis the CES electron mirror instability: the destablisation of KAWsby excess perpendicular electron pressure (viz., Δ_e > 0) associated with the CE electron-shear term.The instability <cit.> isperhaps confusingly named, given that its physical mechanism is ratherdifferent to that of the (ion-scale) mirror instability: non-resonantinteractions between the anisotropic distribution of electrons and the KAWs causes the restoring force underpinning the latter's characteristic oscillation to be negated if Δ_e > 1/β_e. The electron mirror instability has been extensively explored in β_e ∼ 1plasma <cit.>;in plasmas with β_e ≫ 1, it has been analytically characterised and itsphysical mechanism elucidated in the quasi-perpendicular (k_≪ k_⊥) limit of gyrokinetics <cit.>. Here, we find that once its marginality condition (Δ_e = 1/β_e) is surpassed sufficiently, oblique modes with k_≲ k_⊥ are also destabilised. As with the mirror instability, a simple analytic characterisation of the CES electron mirror instability can beperformed in the case of marginal instability. We definethe marginality parameter Γ_e ≡Δ_e β_e -1, and adopt the orderingk_⊥^2 ρ_e^2 ∼k_ρ_e ∼ω̃_eβ_e ∼Γ_e ≪ 1 ,with the additional assumption that Γ_e ≫μ_e^1/2 in order thatthe effect of ion pressure anisotropy can be neglected. Then, it can beshown (see appendix <ref>) that the growth rate isγ/Ω_e = k_ρ_e/β_e[-3√()/4 k_⊥^2 ρ_e^2+ √(3/2Γ_e k_⊥^2 ρ_e^2 -9/4 k_^2 ρ_e^2 + 9/16(-2) k_⊥^4 ρ_e^4 )].It follows that the maximum growth rate is γ_ max = [-8+√((16+))]^3/2/48(-2)[ √(+4+√((16+))/-8+√((16+)))-√(/-2)] ≈ 0.055 Γ_e^2/β_e ,attained at(k_ ρ_e)_peak = √(-8+√((16+))/36(-2)) Γ_e ≈0.27 Γ_e, (k_⊥ρ_e)_peak = √(-8+√((16+))/6(-2)) Γ_e^1/2 ≈0.65 Γ_e^1/2.Figure <ref> demonstrates that these predictions are accurate by comparing them to numerical results for a particular (small) value of Γ_e. More specifically, figure <ref>a shows that the location in the(k_,k_⊥) plane at which the maximum growth of the electron mirror instability is attainedclosely matches the analytical prediction(<ref>), while figure <ref>b confirms that thewavenumber dependence of the growth rate agrees with (<ref>) for k_⊥ρ_e ≳μ_e^1/4. We note that, in addition to the electron mirror, another instability operating at smaller characteristic values of k_⊥ρ_e is evident in figure <ref>. These are the k_⊥ρ_i ≳ 1 mirror modesdriven unstable by the CE ion-shear term that were discussed in section<ref>; for 1 ≪ k ρ_i ≪μ_e^-1/4, the ion-pressure anisotropy associatedwith the CE ion-shear terms remains a greater free-energy source for KAW instabilitiesthan the CE electron-shear term, even when Δ_e > 1/β_e. For Γ_e ≳ 1, our near-marginal theory anticipates that peak growthoccurs at electron Larmor scales (k_ρ_e ≲ k_ρ_e ∼ 1), with γ_ max∼Ω_e/β_e. These expectations are indeed realised numerically, as shown in figure <ref> (see alsofigure <ref>). 
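These near-marginal predictions can also be recovered directly from (<ref>) by numerical maximisation. The sketch below, with the factors of π restored by assumption and wavenumbers in units of the inverse electron Larmor radius, does this with scipy for a small illustrative value of Γ_e and reports the result in the scaled form quoted above.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def gamma_emirror(kpar, kperp, Gamma_e, beta_e):
    # near-marginal electron-mirror growth rate, gamma/Omega_e (pi assumed)
    disc = (1.5 * Gamma_e * kperp**2 - 2.25 * kpar**2
            + (9.0 / 16.0) * (np.pi - 2.0) * kperp**4)
    return kpar / beta_e * (-0.75 * np.sqrt(np.pi) * kperp**2
                            + np.sqrt(max(disc, 0.0)))

Gamma_e, beta_e = 0.05, 1.0                  # illustrative values
res = minimize(lambda x: -gamma_emirror(x[0], x[1], Gamma_e, beta_e),
               x0=[0.2 * Gamma_e, 0.5 * np.sqrt(Gamma_e)], method="Nelder-Mead")
kpar_pk, kperp_pk = res.x
print("numerical : %.3f * Gamma_e^2/beta_e at kpar = %.2f*Gamma_e, "
      "kperp = %.2f*Gamma_e^(1/2)"
      % (-res.fun * beta_e / Gamma_e**2, kpar_pk / Gamma_e,
         kperp_pk / np.sqrt(Gamma_e)))
print("quoted    : 0.055, 0.27, 0.65")
\end{verbatim}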
The expression (<ref>) for the growth rate as a function of wavenumber that was derived in the case of Γ_e ≪ 1 remains qualitatively – but not quantitatively –accurate (see figure <ref>b). Figure <ref> shows that a similar conclusion holds for the expression (<ref>) for the peak growth rate, and also for the expressions (<ref>a) and (<ref>b) of the parallel and perpendicular wavenumbers atwhich that growth occurs.To confirm our prior claim in section <ref> that the CES parallel whistlerinstability is faster growing than the electron mirror instability, we show theformer's numerically computed growth rate on figure <ref> (left panel); as it approaches the asymptotic value (<ref>) that is valid in thelimit Δ_e β_e ≫ 1, we observe that the electron mirror's growthrate is a factor of ∼3 smaller (cf. figure <ref>a). Theparallel wavenumber at which peak growth of the whistler instability occurs isalso larger than the analogous quantity for the electron mirror by an order-unity factor. While we cannot derive a simple analytic expression for the growth rate of the dominant electron mirror modes when Γ_e ≳ 1, we can calculate this quantity for long-wavelength (viz., k ρ_e ≪ 1) modes. For this calculation,we assume that k ρ_e ∼μ_e^1/4≪ 1, k_∼ k_, and the orderingω̃_e= ω/k_ v_the∼k ρ_e/β_e∼ |Δ_e| k ρ_e . Under these assumptions, we obtain (see appendix <ref>) two modes whose complex frequenciesω are given by ω ≈ ± k_ρ_e Ω_e {[1/β_e + Δ_e(1/2- μ_e^1/2k_^2 ρ_e^2- k_^2 ρ_e^2/k^4 ρ_e^4) ]×[ k^2 ρ_e^2/β_e - Δ_e (k_^2 ρ_e^2 + μ_e^1/2k_^2/k^2 - 1/2 k_^2 ρ_e^2 ) ]}^1/2 .The terms proportional to μ_e^1/2Δ_e are associated with the CE ion-shear term, which plays a non-negligible role for k ρ_e ≲μ_e^1/4. In the subsidiary limit k ρ_e ≪μ_e^1/4, (<ref>) becomes the dispersion relation (<ref>) obtained in section <ref> forunstable mirror modes in the limit Δ_i β_i ≫ 1. In the opposite subsidiary limit k ρ_e ≫μ_e^1/4 (but k ρ_e ≪ 1),(<ref>) simplifies toω ≈ ± k_ρ_e Ω_e √((1/β_e + Δ_e/2)[k^2 ρ_e^2/β_e - Δ_e (k_^2 ρ_e^2 - 1/2 k_^2 ρ_e^2) ]).For k_≪ k_⊥, this recovers the high-β limit of thedispersion relation for unstable KAWs previously derived in the gyrokineticcalculations of <cit.>; our calculations show that this dispersionrelation also applies to oblique (k_≲ k_⊥) electron mirror modes. For Δ_e > 0, we (as expected) have an unstable root if and only ifΔ_e > 1/β_e ,with the unstable mode's growth rate beingγ ≈k_ρ_e Ω_e √((1/β_e + Δ_e/2)[Δ_e (k_^2 ρ_e^2 - 1/2 k_^2 ρ_e^2) - k^2 ρ_e^2/β_e]). We can now provide an analytical demonstration thata broad spectrum of electron mirror modes is unstable if Γ_e ≳ 1.It follows directly from (<ref>) that instability arises for allmodes with k_⊥ > k_ if the following constraint on the total wavenumber k is satisfied:k ρ_i < √(2 μ_e^1/2(Γ_e+1) cos^2θ/(Γ_e+3) cos^2θ-2 Γ_e sin^2θ),where θ = tan^-1(k_⊥/k_) is, as normal, thewavevector angle. The validity of this bound is illustrated in figure <ref>a. (<ref>) is particularly simple to interpret in the subsidiary limit k ρ_e ≫μ_e^1/4, yielding a lower bound on θ alone:θ > tan^-1√(Γ_e+3/2 Γ_e) .For Γ_e ≪ 1 (but Γ_e > 0), this implies that the only unstable electron mirror modes arequasi-perpendicular, as anticipated from our calculations pertaining to the marginal state of theinstability. 
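The geometry implied by these bounds is easy to tabulate. A short sketch, assuming a hydrogen mass ratio, is given below; it evaluates the minimum unstable wavevector angle from (<ref>) and, at one fixed oblique angle, the wavenumber bound (<ref>).

\begin{verbatim}
import numpy as np

mu_e = 1.0 / 1836.0                       # hydrogen mass ratio (assumed)

def theta_min_deg(Gamma_e):
    # minimum unstable angle for k*rho_e >> mu_e^(1/4), from (<ref>)
    return np.degrees(np.arctan(np.sqrt((Gamma_e + 3.0) / (2.0 * Gamma_e))))

def k_bound_rho_i(Gamma_e, theta_deg):
    # upper bound on k*rho_i from (<ref>), for modes with k_perp > k_par
    th = np.radians(theta_deg)
    denom = (Gamma_e + 3.0) * np.cos(th)**2 - 2.0 * Gamma_e * np.sin(th)**2
    if denom <= 0.0:
        return np.inf                     # no upper bound from this expression
    return np.sqrt(2.0 * np.sqrt(mu_e) * (Gamma_e + 1.0)
                   * np.cos(th)**2 / denom)

for G in (0.1, 0.5, 1.0, 3.0, 10.0):
    print("Gamma_e = %5.1f : theta_min = %5.1f deg, "
          "k*rho_i bound at theta = 60 deg : %.3f"
          % (G, theta_min_deg(G), k_bound_rho_i(G, 60.0)))
\end{verbatim}

The printed values make the transition from quasi-perpendicular instability at small Γ_e to broad-angle instability at larger Γ_e explicit.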
On the other hand, for Γ_e ≳ 1, modes with awide range of wavevector angles will be destabilised.§.§ CES microinstability classification: negative pressure anisotropy (ϵ_i < 0)§.§.§ Firehose instabilityThe best-known instability to be triggered by either negative ion or electron pressure anisotropy associated with the CE ion- and electron-shear terms, respectively, is the CES firehoseinstability. The linear theory of the firehose (or garden-hose) instability in high-β plasma, the first studies of which were completed over half a century ago <cit.>, has previously been explored in the contexts of plasmas with bi-Maxwellian distributions <cit.>,CE distributions <cit.>, and even characterisations that areindependent of the ion distribution function <cit.>.Its physical mechanism is well established: negative pressure anisotropiesreduce the elasticity of magnetic-field lines that gives rise to Alfvén waves, and can completelyreverse it when Δ_i is negative enough. The long-wavelength `fluid'firehose instability (whose mechanism is independent of the particular ion distributionfunction) is non-resonant in nature; however, resonant damping mechanisms suchas Barnes damping or cyclotron damping play an important role in regulating the growth of modes on scales comparable to the ion-Larmor scale, and therebyset the scale of peak firehose growth. Beyond linear theory, nonlinear analyticalstudies of the parallel firehose instability in high-β plasma have beencompleted <cit.>, as well as numericalones <cit.>.While there is much in common between firehose modes across all wavevector angles, there are certain differences that,on account of their significance for determining the fastest-growing firehose mode, areimportant to highlight. Based on these differences, firehose modes can becategorised into three different types: quasi-parallel,oblique, and critical-line firehose modes. Quasi-parallelfirehose modes, which are destabilised left-handed and/orright-handed high-β Alfvén waves <cit.>, exist inside anarrow cone of wavevector angles θ≲β_i^-1/4 <cit.>.The peak wavenumber of their growth (k_ρ_i ∼ |Δ_i + 2/β_i|^1/2) is determined by gyroviscosity, an FLReffect <cit.>.For θ≳β_i^-1/4, the characteristic low-frequency (viz., ω≪Ω_i) waves that exist above ion-Larmor-scales in high-βplasma are shear-Alfvén waves and (compressible) slow modes; the formerremains susceptible to firehose instability, but, on account of its FLR coupling tothe slow mode, its instability proceeds quite differently at sufficientlysmall wavenumbers (k ρ_i ≳ |Δ_i + 2/β_i|^1/2), with peak growth occurring at smaller scales (k_ρ_i ∼ |Δ_i + 2/β_i|^1/4≪ 1). Finally,along a `critical line' in the (k_,k_⊥) plane (k_⊥≈√(2/3) k_, θ≈ 39^∘), the FLR couplingbetween the slow mode and shear-Alfvén wave becomes anomalously weakdue to two opposing FLR effects cancelling each other out. This results in much weaker collisionless damping on critical-line firehose modes, and so theycan exist on scales that are close to (though, as we prove here for the first time, not strictly at) the ion-Larmorscale. Thus critical-line firehose modes are generically the fastest-growing ones inhigh-β plasma <cit.>. We support this claim with figure <ref>, which shows themaximum growth rate of the firehose-unstable modes as a function of both k_ andk_⊥ for two different (unstable) values of Δ_i β_i (and with the same value of β_i as was used to calculate the stability maps presented in section <ref>). 
Both examples confirm that, although a broad spectrum of unstable parallel and oblique firehose modes emerge when Δ_i β_i +2 ≲ -1, it is the critical-line firehose modesthat are the fastest growing. The value of Δ_i required to trigger the CES firehose instability is, aswith the case of the firehose instability in a plasma with a bi-Maxwellian ion distribution, dependent on the scale of the unstablefirehose modes. For long-wavelength firehose modes (i.e. those with k ρ_i ≪1), the threshold is Δ_i < (Δ_i)_ c = -2/β_i; it can be shown that this result is independent of theparticular form of the ion distribution function <cit.>. However, our numerical solutions for the wavenumber-dependent growth rate of firehose modes in CE plasmawhen Δ_i > -2/β_i (see figure <ref>a) suggest that oblique ion-Larmor-scale firehose modes can be destabilised at less negativepressure anisotropies. This is consistent with the findings ofprevious studies of the oblique firehose in β∼ 1 plasma <cit.>, although this finding has not until now been comprehensively studied in plasma with β≫ 1. We can, in fact, calculate the threshold semi-analyticallyfor the CES firehose instability as a function of wavenumber (see appendix<ref>); the results, which are shown in figure <ref>b show that obliquefirehose modes with k_ρ_i ≈ 0.45, k_⊥ρ_i ≈ 0.3 become unstablewhen Δ_i ≈ -1.35/β_i. The reduced threshold of ion-Larmor-scale firehose modes, which can be shown to depend only on fourth- and higher-order moments of the iondistribution function, is considered in greater depth in Bott et al. (2023, in prep.).The growth of the three different sub-categories of unstable CES firehose modes (quasi-parallel, oblique, and critical-line firehoses) can be described analytically. However, the relative orderings of ω̃_i, k_ρ_i, k_ρ_i, β_i and |Δ_i| for these sub-categories are different, so it is necessary to treat them separately.§.§.§ Quasi-parallel firehose instabilityThe relevant orderings of parameters in for quasi-parallel firehose modes is ω̃_i= ω/k_ v_thi∼β_i^-1/2∼ |Δ_i|^1/2∼ k_ρ_i , with the additional small wavenumber-angle condition k_ρ_i ≪β_i^-1/4 k_ρ_i ∼β_i^-3/4.Under theordering (<ref>), we find (see appendix <ref>) that there are four modes with complex frequencies given by ω/Ω_i = ± k_ρ_i (1/4 k_ρ_i ±√(1/16 k_^2 ρ_i^2 + 1/β_i+Δ_i/2)) , where the ± signs can be chosen independently. This is the standard parallel firehose dispersion relation <cit.>. To (re-)identify the modes that are destabilised by the negative ion-pressure anisotropy, we set Δ_i = 0: the resulting dispersion relation agrees with <cit.>, recovering the dispersion relation of Alfvén waves in the limit k_ρ_i ≪β_i^-1/2 [see see their eqn. (19)]and the dispersion relation of the slow and fast hydromagnetic waves in the limit k_ρ_i ≫β_i^-1/2 [see see their eqn. (20)]. The growth rate of the unstable parallel firehose modes that follows from (<ref>)is shown in figure <ref> for several different values of Δ_i and β_i; the results closely match the analogous resultdetermined numerically[An inquisitive reader might wonder whythe numerical solution suggests that, in addition to the long-wavelength parallel firehose modes, parallel ion-Larmor scale modes are also unstable in some cases (see figure <ref>, middle panel), albeit with amuch smaller growth rate. 
This instability is the CES resonant parallel firehose instability, so named because of its mediation via gyroresonant interactions beween ions and ion-Larmor-scalemodes <cit.>.In a β_i ∼ 1 plasma, this instability can have a growth rate comparable to (or even larger than) the longer-wavelength non-resonant firehose modes; however, because of the exponentialdependence of the resonant parallel firehose instability's growth rate on |Δ_i|^-1∼β_i,the instability is generically much weaker than the non-resonant firehose in plasma with β_i ≫ 1 (see Bott et al., in prep.). In the language of section <ref>, resonant parallel firehose modes are quasi-cold in CE plasma. We therefore do not consider this instability further in this paper.]. For non-zero Δ_i and fixed k_ρ_i, (<ref>) implies that we have instability provided |Δ_i| >2/β_i + 1/8 k_^2 ρ_i^2 .The fastest-growing mode γ_max/Ω_i = |2/β_i+Δ_i|occurs at the characteristic wavenumber(k_ρ_i)_peak = 2 |2/β_i+Δ_i|^1/2.For k_ρ_i > 2 √(2)|2 β_i^-1+Δ_i|^1/2, theunstable mode is stabilised. This agrees with previous analytical characterisations of the firehose instability <cit.>.§.§.§ Oblique firehose instabilityIn this case, we order ω̃_i∼1/β_i^1/2∼ |Δ_i|^1/2∼ k_^2 ρ_i^2 ∼ k_^2 ρ_i^2 .Aside from the finite propagation angle of oblique modes, the key difference between the oblique and quasiparallel cases is the larger magnitude of the typical wavenumber k ρ_i ∼β_i^-1/4. The unstable oblique firehose modes have the complex frequency (see appendix <ref>)ω/Ω_i= -k_ρ_i [i/8 √() k_^2 ρ_i^2(k_^2 ρ_i^2- 3/2 k_^2 ρ_i^2 )^2±√(1/β_i+Δ_i/2 - 1/64k_^4 ρ_i^4(k_^2 ρ_i^2- 3/2 k_^2 ρ_i^2 )^4)]. Setting |Δ_i| = 0, and considering the subsidiary limit k ρ_i ≪β_i^-1/4, we recover the dispersion relation of the shear Alfvén mode <cit.>. Similarly to the quasi-parallel firehose instability, the instability condition is stillΔ_i < -2/β_i. If this condition is met, the maximum growth rate of the instability is γ_max/Ω_i≈(8 /27)^1/4|2/β_i+Δ_i|^3/4tanθ[1-3/2tan^2θ]^-1, and is attained at (parallel) wavenumber(k_ρ_i)_peak≈(32 /3)^1/4|2/β_i+Δ_i|^1/4tanθ[1-3/2tan^2θ]^-1, where θ = tan^-1(k_⊥/k_) is (again) the wavevector angle with respect to the magnetic field.In contrast to the quasi-parallel case, if the condition (<ref>) is met, the instability persists for all wavenumbers satisfying k ρ_i ≲ 1, albeit with an decreasing growth rate beyond the parallel wavenumber given by (<ref>). We notice that along the critical line k_ = k_√(2/3) (θ≈ 39^∘), the maximumgrowth rate (<ref>) of the oblique firehose diverges.This divergence is mathematically the resultof failing to take into account higher-order terms in the k ρ_i ≪ 1expansion, but, as was discussed earlier in this section, it is indicative of a physical effect (viz., much faster growth offirehose modes with k_ = k_√(2/3)). The degree to which the growth rate of unstable modes determined from (<ref>)follows a numerical solution for a particular choice of θ is demonstrated in figure<ref>.The agreement is reasonable, although an increasingly large discrepancy developsas k ρ_i approaches unity due to FLR effects.§.§.§ Critical-line firehose instability In this third and final case, we set k_ = k_√(2/3). The FLR coupling between the shear Alfvén mode and theBarnes'-damped slow-mode then vanishes to leading order in k ρ_i ≪ 1, andnext order FLR terms must be considered. 
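Before treating the critical-line sub-cases, it is worth verifying the quasi-parallel results numerically. The sketch below evaluates the four roots of (<ref>) for illustrative values of Δ_i and β_i and compares the fastest numerical growth with the analytic peak values quoted above.

\begin{verbatim}
import numpy as np

def growth_parallel_firehose(kpar, Delta_i, beta_i):
    # largest Im(omega)/Omega_i among the four quasi-parallel roots of (<ref>)
    rad = kpar**2 / 16.0 + 1.0 / beta_i + Delta_i / 2.0 + 0j
    roots = [s1 * kpar * (0.25 * kpar + s2 * np.sqrt(rad))
             for s1 in (+1, -1) for s2 in (+1, -1)]
    return max(r.imag for r in roots)

beta_i = 100.0
Delta_i = -3.0 / beta_i                       # below the -2/beta_i threshold
kpar = np.linspace(1e-3, 1.0, 4000)
g = np.array([growth_parallel_firehose(k, Delta_i, beta_i) for k in kpar])
A = abs(2.0 / beta_i + Delta_i)
print("numerical: gamma_max/Omega_i = %.4f at kpar*rho_i = %.3f"
      % (g.max(), kpar[np.argmax(g)]))
print("analytic : gamma_max/Omega_i = %.4f at kpar*rho_i = %.3f"
      % (A, 2.0 * np.sqrt(A)))
\end{verbatim}

The growth also shuts off for parallel wavenumbers beyond 2√2 |2/β_i + Δ_i|^1/2 ≈ 0.28 in this example, as expected.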
Depending on the value of β_i, wefind two sub-cases.First, for β_i ∼Δ_i^-1≫ 10^6 – a numerical bound that we will justify a posteriori following ourcalculations – the FLR term responsible forsetting the wavenumber of the fastest-growing mode is the second-order correctionto the FLR coupling between the shear Alfvén and slow modes. The appropriateordering to adopt then depends on the relative magnitude of Δ_i and β_i^-1. For Δ_i β_i + 2 ≲ -1,we use the orderingω̃_i∼1/β_i^1/2∼ |Δ_i|^1/2∼ k_^6 ρ_i^6 .In this case, we find (see appendix <ref>) that the frequency of the two shear Alfvén modesis given byω/Ω_i = -k_ρ_i [ 6889 i k_^6 ρ_i^6/27648 √()±√((1/β_i+Δ_i/2) - 6889^2/27648^2k_^12ρ_i^12)]. The wavelength at which the growth rate is maximisedscales with an extraordinarily low power of |2β_i^-1+Δ_i|:(k_ρ_i)_peak≈2^19/12 3^1/2^1/12/83^1/3 35^1/12|2/β_i+Δ_i|^1/12≈ 0.97 |2/β_i+Δ_i|^1/12, with associated maximum growth rate γ_max/Ω_i≈2^13/12 3^1/2^1/12/83^1/3 35^1/12|2/β_i+Δ_i|^7/12≈ 0.58 |2/β_i+Δ_i|^7/12. As discussed in section <ref>, the instability threshold for critical-line firehose modes is not(<ref>), but is a less stringent value. We can demonstrate this analytically by showing that, for Δ_i ≃ -2/β_i, critical-line firehose modes are still unstable. Adopting the ordering ω̃_i∼1/β_i^3/5∼ k_^6 ρ_i^6 ,it follows (see appendix <ref>) that the growth rate of the critical-line firehose modes is γ/Ω_i = -k_ρ_i [ 6889 k_^6 ρ_i^6/27648 √()±√(5/4 β_i k_^2 ρ_i^2 + 6889^2/27648^2k_^12ρ_i^12)].The maximum growth rate of such modes is then given byγ_max/Ω_i≈2^3 5^7/10 3^3/2^1/5/83^4/5 7^7/10β_i^-7/10≈ 1.2 β_i^-7/10obtained at parallel wavenumber (k_ρ_i)_peak≈2 5^1/10 3^1/2^1/10/83^2/5 7^1/10β_i^-1/10≈ 0.64 β_i^-1/10. When β_i ∼Δ_i^-1≪ 10^6 the fastest-growing critical-line firehose modes have a sufficiently large wavenumber that the effect of FLR coupling betweenshear Alfvén and slow modes is sub-dominant to the effect of cyclotron damping. Assuming that Δ_i β_i + 2 ≲ -1 and adopting the ordering ω̃_i∼1/β_i^1/2∼ |Δ_i|^1/2, k_ρ_i ∼1/√(log1/|β_i^-1+Δ_i/2|) ,we show in appendix <ref> that the frequency of the shear Alfvén modes becomesω/Ω_i = - i√()/2 k_ρ_iexp(-1/k_^2 ρ_i^2)± k_ρ_i √((1/β_i+Δ_i/2) - π/4 k_^4 ρ_i^4exp(-1/k_^2 ρ_i^2)).In this case, the maximum growth rate γ_max/Ω_i≈ (k_ρ_i)_peak|1/β_i+Δ_i/2|^1/2 is attained at (k_ρ_i)_peak≈√(2)/√(log1/|β_i^-1+Δ_i/2|)[1-4 log(log1/√(|β_i^-1+Δ_i/2|))/log1/|β_i^-1+Δ_i/2|] . Figure <ref> corroborates that the analytical approximation (<ref>) provides a reasonable estimate of the parallel wavenumber at which peak growthoccurs. Similarly to the β_i ≫ 10^6 regime, when β_i ≪ 10^6, critical-line firehose modes still grow when Δ_i ≈ -2/β_i. Their growth rate as a function of wavenumber is given by γ/Ω_i = - √()/2 k_ρ_iexp(-1/k_^2 ρ_i^2)± k_ρ_i √(5/4 β_i k_^2 ρ_i^2 + π/4 k_^4 ρ_i^4exp(-1/k_^2 ρ_i^2)). The maximum of (<ref>), γ_max/Ω_i≈√(5)/2 (k_ρ_i)_peak^2 β_i^-1/2, is achieved at (k_ρ_i)_peak≈√(2)/√(log(β_i/20)){1 - 3log[log(β_i/20)/2]/log(β_i/20)}.By comparing the expressions (<ref>) and (<ref>)for the complex frequency of shear Alfvén modes – specifically, the ratio of the final terms – the dependence on β_i (equivalently, Δ_i) of the relative importanceof FLR slow-mode coupling and cyclotron damping can be determined. This ratio is∼0.16 k_^8 ρ_i^8 exp(-1/k_^2 ρ_i^2), with equality being achieved when k_ρ_i ≈ 0.3. Using (<ref>) to estimating the value of |2β_i^-1+Δ_i| at which this value of k_ρ_i is achieved, we find that |2β_i^-1+Δ_i| ≈ 8 × 10^-7. 
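In the FLR-coupling-dominated sub-case, the numerical coefficients in (<ref>) and (<ref>) can be checked by maximising the growing root of (<ref>) directly. The short sketch below does so, with the factor of π restored by assumption, A denoting |1/β_i + Δ_i/2|, and the parameter values purely illustrative.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

C = 6889.0 / (27648.0 * np.sqrt(np.pi))       # FLR coefficient (pi assumed)

def gamma_critline(kpar, A):
    # growing root of (<ref>) along the critical line, gamma/Omega_i
    return kpar * (np.sqrt(A + C**2 * kpar**12) - C * kpar**6)

beta_i, Delta_i = 1e8, -3.0e-8                # illustrative, deep in the FLR regime
A = abs(1.0 / beta_i + Delta_i / 2.0)
res = minimize_scalar(lambda k: -gamma_critline(k, A),
                      bounds=(1e-3, 3.0), method="bounded")
D = 2.0 * A                                   # |2/beta_i + Delta_i|
print("numerical: gamma_max = %.2f * D^(7/12) Omega_i at kpar*rho_i = %.2f * D^(1/12)"
      % (-res.fun / D**(7.0 / 12.0), res.x / D**(1.0 / 12.0)))
print("quoted   : 0.58, 0.97")
\end{verbatim}

The cyclotron-damping-dominated sub-case can be checked in the same way using the exponential damping term of (<ref>) in place of the FLR term.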
Assuming |Δ_i β_i + 2| ∼ 1, we conclude from this estimate that, for β_i ≲ 10^6, cyclotron damping will determine the wavenumber cutoff, with this transition value of β_i proportional to the value of |Δ_i| β_i. This estimate can be validated numerically by comparing (<ref>) and (<ref>) with the numerically determined growth rate (see figure <ref>). We indeed find that, for β_i ∼ |Δ_i|^-1 ≪ 10^6, the effect of cyclotron damping sets the wavenumber of peak growth, while FLR slow-mode coupling does so for β_i ∼ |Δ_i|^-1 ≫ 10^6. In both cases, the superior of the two analytic approximations closely matches the numerical growth rate. These results suggest that, for very large β_i, the wavenumber of the maximum growth of the firehose instability satisfies k ρ_i ≪ 1, rather than k ρ_i ∼ 1. This result might seem to contradict previous authors who claim to have found numerical evidence that the fastest growth rates of the firehose instability occur at k ρ_i ∼ 1 <cit.>; however, given the logarithmic dependence of the characteristic wavenumber (<ref>), we conclude that it would take simulations at very high β_i to be able to distinguish between k ρ_i ∼ 1 and k ρ_i ∼ β_i^-1/12 ≪ 1. In addition, the results presented in figure <ref>b indicate that firehose modes with k ρ_i ∼ 1 have a less stringent instability threshold on Δ_i than (<ref>), providing an opportunity for such modes to grow significantly before longer-wavelength modes can do so. In short, it seems reasonable to assume for all practical purposes that the dominant firehose modes occur at k ρ_i ∼ 1, provided β_i is not extremely large. §.§.§ Sub-ion-Larmor-scale firehose instability Figure <ref>b also suggests that, once |Δ_i| β_i ≫ 1, firehose modes on sub-ion-Larmor scales develop – albeit with a smaller growth rate than the critical-line ones. Similarly to sub-ion-Larmor-scale mirror modes (see the end of section <ref>), we can characterise these modes analytically by adopting the ordering k_ρ_i ∼ k_⊥ρ_i ∼ (|Δ_i| β_i)^1/2 ≫ 1 , γ/Ω_i ∼ |Δ_i| . If we also assume that |Δ_i| β_i ≪ μ_e^-1/2, it is shown in appendix <ref> that the growth rate of these modes is given by γ/Ω_i ≈ k_/k √((-Δ_i k_^2-k_^2/k^2 - k^2 ρ_i^2/β_i) (k^2 ρ_i^2/β_i - Δ_i k_^2/k^2)) = cosθ√([-Δ_i (sin^2θ-cos^2θ) - k^2 ρ_i^2/β_i] (k^2 ρ_i^2/β_i - Δ_i cos^2θ)). If Δ_i < 0, we have an instability for all modes with θ > 45^∘ whose total wavenumber satisfies k ρ_i < √(|Δ_i| β_i (sin^2θ-cos^2θ)). Analogously to the sub-ion-Larmor-scale mirror modes [cf. (<ref>)], the growth is maximised when k ρ_i ≪ (|Δ_i|β_i)^1/2 and θ ≈ 55^∘, with γ_ max = 1/3 √(3) |Δ_i| Ω_i ≈ 0.19 |Δ_i| Ω_i . In contrast to the case of the mirror instability, this growth rate is asymptotically small in |Δ_i| ≪ 1 compared to the peak growth rate of the critical-line firehose modes [cf. (<ref>) and (<ref>)], and thus the instability of sub-ion-Larmor-scale firehose modes is always subdominant. For completeness, we note that, once |Δ_i| β_i ∼ μ_e^-1/2, the electron-pressure anisotropy associated with the CE electron-shear term begins to play a comparable role to the ion-pressure anisotropy for modes with k ρ_i ∼ (|Δ_i|β_i)^1/2.
In this case, the expression for the growth rate becomesγ/Ω_i ≈ k_/k{[-Δ_i k_^2-k_^2/k^2 - k^2 ρ_i^2(1/β_i +μ_e^1/2Δ_i/2) ]×[k^2 ρ_i^2/β_i - Δ_i (μ_e^1/2 k_⊥^2 ρ_i^2 -1/2μ_e^1/2 k_^2 ρ_i^2 + k_^2/k^2)]}^1/2.The bound (<ref>) on the total wavenumber required for the instability of modes with k_⊥ > k_ is then k ρ_i < √(|Δ_i| β_i (sin^2θ-cos^2θ)/1+μ_e^1/2Δ_i β_i/2).Because the denominator tends to zero as Δ_i → -2 μ_e^-1/2β_i^-1, the bound becomes increasingly weak, and so the region of (k_,k_⊥)-space in which there is instability extendssignificantly towards electron Larmor scales. This extension precedesthe onset of the oblique electron firehose instability (see section<ref>). §.§.§ Parallel electron firehose instabilityThe CES parallel electron firehose instability arises when the negative electron-pressure anisotropy (Δ_e < 0) associated with the CE electron-shear term becomes a sufficiently large free-energy source to overcome the relatively weak collisionless damping mechanisms that act on long-wavelength (k_ρ_e ≪ 1) quasiparallel whistlerwaves by changing their handedness from right- to left-handed. More specifically, whistler waves with quasi-parallel wavevectors do nothave a component of electric field parallel to B_0, and so arenot subject to electron Landau damping.Electron cyclotron damping does occur, but is very inefficient for k_ρ_e ≪1. The resonant interaction primarily responsible for damping is that between the whistler waves and Maxwellian ions in the CE plasma streaming alongfield lines with v_≪ v_thi. When the handedness of the whistler waves changes, this interaction instead leads to the waves' growth.Because the resonant interaction driving the instability involves the plasma's ions, the CES parallel electron firehose instability has a rather small growth rate compared to other CES electron-scale microinstabilities,with growth disappearing entirely in the special case of cold ions. Theparallel wavenumber of peak growth, which is a small but finite fraction of the electron Larmor scale, viz., (k_ρ_e)_ peak≈ 0.4 for Δ_e ≲ -2/β_e, is set by electron cyclotron damping, which prevents shorter-wavelength modesfrom becoming unstable. The CES parallelelectron firehose instability was first identified by <cit.> and has been studied subsequently using theory and simulations in plasmawith β_e ∼ 1-20 by a number of authors <cit.>. To characterise the parallel electron firehose instability analytically, we can simply use theexpressions (<ref>a) and (<ref>b) given in section <ref> for the real frequency ϖand growth rate γ, respectively, of the parallel whistler waves that satisfy the ordering ω̃_e = ω/k_ v_the∼Δ_e ∼1/β_e, and have k_ρ_e ∼ 1, but this time with Δ_e β_e < 0.Plots of the dispersion curves ϖ(k_) and γ(k_) of CES parallel electron firehose modesare then shownin figure<ref> for a selection of different (negative) values of Δ_e β_e.In a hydrogen plasma, we find an instability for Δ_e < (Δ_e)_ c≈-1.7/β_e. For Δ_e ≲ -2/β_e,modes with k_ρ_e ≲ 0.4 become unstable. Figure <ref> also shows that parallel electron firehose modes generically have a real frequency that ismuch greater than their growth rate (ϖ∼Ω_e/β_e ≫γ); however, this frequency changes sign at awavenumber which, when Δ_e ≲ -2/β_e, is comparable to the wavenumber (k_ρ_e)_ peak at which peak growth occurs. 
These results can be elucidated by considering the expressions (<ref>) in the subsidiary limitk_ρ_i ∼1/√(log(2 μ_e^-1/2|1+2/Δ_e β_e|))≪ 1.Then (<ref>) simplifies toϖ=±[(1+Δ_e β_e/2) k_^2 ρ_e^2 - μ_e^1/2 Δ_e β_e ] Ω_e/β_e, γ=√()/k_ ρ_e [Δ_e exp(-1/k_^2 ρ_e^2)-(Δ_e/2 + 1/β_e) μ_e^1/2 k_^2 ρ_e^2 ] Ω_e . These approximations are plotted alongside (<ref>) in figure<ref>; the agreement is qualitative rather than quantitative for Δ_e ∼-2/β_e, but becomes increasingly good as Δ_e is decreased further. Using these simplified expressions, we can derive approximate analyticalexpressions for the instability's threshold (Δ_e)_ c, as well as its peak growth rate and thewavenumber at which that growth occurs. First considering the sign of (<ref>), it is easy to show that there exists arange of wavenumbers k_ at which γ > 0 if and only if Δ_e <-2/β_e, so (Δ_e)_ c≈ -2/β_e. This is somewhat more stringent than the numerically observedthreshold, a discrepancy attributable to FLR effects, not taken into account by theapproximation (<ref>b). When Δ_e <-2/β_e, it can be proven that the growth rate (<ref>b) ismaximised at(k_ρ_e)_ peak≈1/√(log(μ_e^-1/2|1/2+1/Δ_e β_e|)){1-log[√(2)log(μ_e^-1/2|1/2+1/Δ_e β_e|)]/log(μ_e^-1/2|1/2+1/Δ_e β_e|)},attaining the valueγ_ max = √()μ_e^1/2 (k_ρ_e)_ peak|Δ_e/2+1/β_e|Ω_e .Comparing (<ref>) with the characteristic magnitude of ϖ evaluated using (<ref>a) at k_ρ_e = (k_ρ_e)_ peak (and assuming that (k_ρ_e)_ peak≳μ_e^1/4),we conclude that γ≲μ_e^1/4ϖ, thereby explaining ourprevious observation that the growth rate of parallel electron firehose modes is genericallymuch smaller than the real frequency of those modes. We can also show that the one exception to thisoccurs when (k_ρ_e)_ peak≈μ_e^1/4 [2 Δ_e β_e/(1+2 Δ_eβ_e)]^1/2, an approximate expression for the wavenumber below which ϖ changes sign. As we will see, the characteristic growth rate of the CES parallel electron firehoseis typically much smaller than its oblique relative in high-β plasma (see section <ref>), a conclusion that also applies in β_e ∼ 1 plasmas with bi-Maxwelliandistributions <cit.>.§.§.§ Oblique electron firehose instabilityIn spite of its similar name, the CES oblique electron firehose instability is quite distinct from its parallel cousin: it is a non-propagating mode than arises from the destablisation of oblique KAWs by a sufficiently negative electronpressure anisotropy. The linear theory of the analogous instability in β_e ∼ 1 plasmawith bi-Maxwellian electrons was first presented by <cit.>, with a numberof simulation studies of this instability having been conducted subsequently <cit.>.The high-β variant of the (linear) instability for general anisotropic electron distributionfunctions was studied in the k_≪ k_⊥ limit of gyrokineticsby <cit.>. In contrast to the findings of <cit.>, who showed that the obliqueelectron firehose instability in a bi-Maxwellian plasma at β_e ∼ 1 involvesgyroresonant wave-particle interactions between electrons and the unstablemodes, instability of CES oblique electron firehose modes at β_e ≫ 1 is essentiallynon-resonant, with sufficient large negative electron pressure anisotropiesnegating the restoring force that underpins the oscillation of high-βKAWs. Similarly to the parallel electron firehose instability, the CES obliqueelectron firehose instability is triggered when Δ_e ≲ -2/β_e.The precise value of the threshold depends on the wavevector of themode being destabilised. 
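Before quantifying the oblique branch, the simplified parallel-firehose expressions above can be checked directly. The sketch below evaluates the approximate growth rate (<ref>b), with the factor of π restored by assumption and a hydrogen mass ratio, over parallel wavenumber and compares the numerical peak with the analytic estimates; agreement at the ten-per-cent level is all that should be expected, since the analytic peak neglects the residual cyclotron-damping term.

\begin{verbatim}
import numpy as np

mu_e = 1.0 / 1836.0                        # hydrogen mass ratio (assumed)

def gamma_pef(kpar, Delta_e, beta_e):
    # approximate parallel-electron-firehose growth rate, gamma/Omega_e
    return np.sqrt(np.pi) / kpar * (
        Delta_e * np.exp(-1.0 / kpar**2)
        - (Delta_e / 2.0 + 1.0 / beta_e) * np.sqrt(mu_e) * kpar**2)

beta_e = 100.0
Delta_e = -4.0 / beta_e                    # below the -2/beta_e threshold
kpar = np.linspace(0.05, 1.0, 2000)
g = gamma_pef(kpar, Delta_e, beta_e)
i = np.argmax(g)

L = np.log(abs(0.5 + 1.0 / (Delta_e * beta_e)) / np.sqrt(mu_e))
k_pk = (1.0 - np.log(np.sqrt(2.0) * L) / L) / np.sqrt(L)
g_pk = np.sqrt(np.pi * mu_e) * k_pk * abs(Delta_e / 2.0 + 1.0 / beta_e)

print("numerical: gamma_max/Omega_e = %.2e at kpar*rho_e = %.3f" % (g[i], kpar[i]))
print("analytic : gamma_max/Omega_e = %.2e at kpar*rho_e = %.3f" % (g_pk, k_pk))
\end{verbatim}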
Analogously to the parallel electron firehose, long-wavelength oblique electronfirehose modes are unstable when Δ_e < (Δ_e)_ c =-2/β_e. However, figure <ref>a shows thatthere is positive growth of k ρ_e ∼ 1 oblique electron firehose modes for less negative values of Δ_e, illustrating thatthe threshold is less stringent for such modes. This phenomenon is reminiscent of the ion firehose instability (see figure <ref>): ion-Larmor-scale oblique firehose modesalso have a less stringent threshold than longer-wavelength modes. In additionto the k ρ_e ∼ 1 modes, a region of unstable KAWs with characteristic wavenumbersμ_e^1/2≪ k ρ_e ≪μ_e^1/4, k_∼ k_, is evident in figure <ref>a.These modes, which were discussed at the end of section <ref>, aredestabilised by negative ion pressure anisotropy; the extent of this region closely matches the analytic prediction (<ref>). Using a similar semi-analytic approach to that employed for the case of the ion firehoseinstability (see appendix <ref>), we can determine theapproximate threshold for the oblique electron firehose instability as a function of k_ρ_eand k_⊥ρ_e. The results are shown in figure<ref>b; modes with k_ρ_e ∼ 0.5, k_⊥ρ_e ∼ 0.4 have the least stringent threshold (Δ_e ≈ -1.4/β_e). Well into the unstable regime, i.e., when Δ_eβ_e + 2 ≲ -1, electron firehosemodes across a broad range of wavevectors are destabilised (see figure<ref>a).The fastest-growing electron firehose modes are oblique and occur at electron Larmorscales (k_⊥ρ_e ∼ 1 > k_ρ_e), with characteristicgrowth rate γ∼ |Δ_e| Ω_e ∼Ω_e/β_e.This growth rate is much larger than the peak growth rate of the parallel electronfirehose instability (<ref>).Similarly to the electron mirror instability, a simple analytic expression for the growth rate of the fastest-growingelectron firehose modes when Δ_eβ_e + 2 ≲ -1 is challenging toestablish. We can, however, characterise the growth of two particular classes ofelectron firehose modes analytically. The first of these are long-wavelength (viz., k ρ_e ≪ 1) electron firehose modes. For these, we adopt the same ordering (<ref>) as wasconsidered when characterising long-wavelength electron mirror modes: k_ρ_e ∼ k_⊥ρ_e ∼μ_e^1/4≪ 1 , ω̃_e = ω/k_ v_the∼k ρ_e/β_e∼ |Δ_e| k ρ_e.We then obtain a closed-form expression [cf. (<ref>), and also (<ref>)] for the complexfrequencies of the electron firehose modes: ω ≈ ± k_ρ_e Ω_e {[1/β_e + Δ_e(1/2- μ_e^1/2k_^2 ρ_e^2- k_^2 ρ_e^2/k^4 ρ_e^4) ]×[ k^2 ρ_e^2/β_e - Δ_e (k_^2 ρ_e^2 + μ_e^1/2k_^2/k^2 - 1/2 k_^2 ρ_e^2 ) ]}^1/2 .If Δ_e < -2/β_e, the right-hand side of (<ref>) is purely imaginary for k_⊥ > k_, and so we have positive growth for all long-wavelengthelectron firehose modes with θ > 45^∘[In fact, this condition is stronger than necessary toguarantee instability – but the exact condition is somewhat complicated, so we omit discussion of it.]. This approximation should be compared with the numerically determined growth rate infigure <ref>b.If it is further assumed thatμ_e^1/4≪ k ρ_e ≪ 1, k_∼ k_, it is shown in section <ref> that(<ref>) simplifies to an analogue of (<ref>), viz., ω ≈ ± k_ρ_e Ω_e √((1/β_e + Δ_e/2)(k^2 ρ_e^2 1/β_e + Δ_e/2[k_^2 ρ_e^2 -2 k_^2 ρ_e^2 ] )).This result is again in agreement with the gyrokinetic calculations of <cit.>.Extrapolating (<ref>) to k_ρ_e ∼ k_⊥ρ_e ∼1, we recover that γ∼Ω_e/β_e when |Δ_e β_e +2| ≳ 1. A second sub-category of electron firehose modes that can be describedanalytically are quasi-perpendicular ones. 
For any fixed k_ρ_e ≪ 1, the most rapidly growing modes are strongly anisotropic: they occur when the perpendicular wavelength is comparable to the electron Larmor radius,k_ρ_e ∼ 1. These modes can therefore be elucidatedanalytically by considering their dispersion relation under the orderingω̃_e∼ |Δ_e| ∼1/β_ein the wavenumber domain μ_e^1/2≪ k_ρ_e ≪ k_ρ_e ∼1. We solve the dispersion relation (see appendix <ref>) to find ω/Ω_e =k_ρ_e/ℱ(k_⊥ρ_e){-i√()/2[k_^2 ρ_e^2/β_e+Δ_e ℋ(k_⊥ρ_e) ] ±√(𝔇(k_ρ_e, β_e, Δ_e))}, where the discriminant is 𝔇(k_ρ_e, β_e, Δ_e)≡ [k_^2 ρ_e^2/β_e+Δ_e ℋ(k_⊥ρ_e) ]×{1/β_e(1-/4 k_^2 ρ_e^2 ) - Δ_e [/4ℋ(k_⊥ρ_e) + ℱ(k_⊥ρ_e) ] }, and the two auxiliary functions are [cf. (<ref>)]ℱ(α) =exp(-α^2/2)[I_0(α^2/2) - I_1(α^2/2)],ℋ(α)≡1 - exp(-α^2/2) I_0(α^2/2).As a sanity check, we observe that in the subsidiary limit k_ρ_e ≪ 1, (<ref>) becomesω ≈ ± k_ k_ρ_e^2 Ω_e √((1/β_e + Δ_e/2)(1/β_e -Δ_e )),returning us to the dispersion relation (<ref>) of unstable kinetic Alfvénwaves taken in the limit k_≪ k_.In the case when Δ_e < -2 β_e^-1, one of the modes described by (<ref>) can bedestabilised by sufficiently negative pressure anisotropy, and become purelygrowing. The wavenumbers susceptible to this instability are thosesatisfyingk_^2 ρ_e^2 [1 - exp(-k_^2 ρ_e^2/2) I_0(k_^2ρ_e^2/2)]^-1 < |Δ_e| β_e .Provided Δ_e < -2 β_e^-1 and |Δ_e| β_e ∼ 1, this gives a range of unstable perpendicular wavenumbers k_ρ_e ≲ 1. That these wavenumbers are indeed unstable follows immediately from the observationthat if (<ref>) holds, then the discriminant (<ref>)satisfies𝔇(k_ρ_e, β_e, Δ_e) =-[Δ_e ℋ(k_⊥ρ_e)-k_^2 ρ_e^2/β_e] [Δ_e ℋ(k_⊥ρ_e)-k_^2 ρ_e^2/β_e+ 1/β_e+|Δ_e| ℱ(k_⊥ρ_e)]<[Δ_e ℋ(k_⊥ρ_e)-k_^2 ρ_e^2/β_e]^2 ,from which it follows that the imaginary part of (<ref>) for the `+' root ispositive. When |Δ_e β_e + 2| ∼ 1, the characteristic growth rate of the instability is γ_max∼ k_ρ_e |Δ_e|Ω_e ,which is consistent with the numerical findings shown in figure<ref>a. Indeed, (<ref>) agrees reasonably with the numerically determined growth rate for small values of k_ρ_i(see figure <ref>b). One particularly interesting subsidiary limit of (<ref>) is |Δ_e| β_e ≫ 1, in which it can beshown that, under the ordering k_ρ_e ∼ (|Δ_e| β_e)^1/2≫ 1, the growth rate is γ≈ k_ k_^3 ρ_e^4 (|Δ_e| - k_^2 ρ_e^2/β_e)Ω_e . This implies that the perpendicular wavelength of peak growth transitions smoothly to values below the electron Larmor radius as |Δ_e| β_e is increasedbeyond order-unity values. As we shall discuss in the next section, these unstablesub-electron-Larmor scale modes are best regarded as a distinct instability fromthe electron firehose, and so we introduce it properly in a new section. §.§.§ Electron-scale-transition (EST) instabilityWhen |Δ_e| β_e is increased significantly past unity, the fastest-growing microinstability changes characterfrom that of a destabilised KAW, and insteadbecomes a destabilised non-propagating mode.The authors of this paper are not aware of this instabilityhaving been identified previously; we call it the electron-scale-transition (EST)instability, on account of it providing a smooth transition between unstableKAWs with k_ρ_e ≪ 1, and microinstabilities on sub-electron scales (k_ρ_e ≳1). 
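Before turning to the EST modes in detail, the quasi-perpendicular electron-firehose condition (<ref>) is worth a quick numerical illustration, not least because the same exponentially scaled Bessel functions recur below. The sketch uses scipy's ive and reports, for several illustrative values of |Δ_e|β_e, the range of perpendicular wavenumbers satisfying the condition.

\begin{verbatim}
import numpy as np
from scipy.special import ive              # ive(n, x) = exp(-x) * I_n(x)

def H(a):
    # H(alpha) = 1 - exp(-alpha^2/2) * I_0(alpha^2/2)
    return 1.0 - ive(0, a**2 / 2.0)

def unstable_kperp_range(Delta_beta):
    kperp = np.linspace(1e-3, 5.0, 5000)
    mask = kperp**2 / H(kperp) < Delta_beta    # instability condition (<ref>)
    return (kperp[mask][0], kperp[mask][-1]) if mask.any() else None

for Db in (2.5, 4.0, 10.0):
    rng = unstable_kperp_range(Db)
    if rng is None:
        print("|Delta_e|*beta_e = %5.1f : stable" % Db)
    else:
        print("|Delta_e|*beta_e = %5.1f : unstable for %.2f < kperp*rho_e < %.2f"
              % ((Db,) + rng))
\end{verbatim}

Since k_⊥²ρ_e²/ℋ(k_⊥ρ_e) → 2 as k_⊥ρ_e → 0, the condition cannot be met anywhere unless |Δ_e|β_e > 2, consistent with the threshold noted above.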
Unstable EST modes are quasi-perpendicular (k_ρ_e < 1 ≲ k_ρ_e ≲β_e^1/7),with the parallel wavenumber of the fastest-growing modes determined bya balance between the instability's drive and the electron-cyclotron dampingthat arises at sufficiently large k_ρ_e.In contrast to the oblique electron firehose instability, Landau-resonant electronswith v_≈ω/k_ also play a role in the EST instability's physical mechanism. To demonstrate that the EST modes are not unstable KAWs, we consider the expression (<ref>) in a Maxwellian plasma (viz., Δ_e =0). It is easy to show that in this case, 𝔇(k_ρ_e, β_e, Δ_e) ≤ 0if and only ifk_ρ_e ≥2/√().Thus, for sufficiently large values of k_ρ_e, KAWs cease to be able topropagate, and we obtain two purely damped non-propagating modes. Thus, anymicroinstabilities for Δ_e < 0 associated with these modes can no longerbe considered to be unstable KAWs. Substituting (<ref>) into thethreshold condition (<ref>), we estimate that EST modes first becomeunstable when Δ_e < (Δ_e)_ c≈ -3/β_e. As Δ_e is decreased below (Δ_e)_ c,the EST modes quickly acquire a faster growth rate than all the other CESmicroinstabilities that can operate for such values of Δ_e. We illustratethis numerically in figure <ref>a by showing the maximum growth rateof all CES microinstabilities as a function of (k_,k_⊥) for a particular value of Δ_e < 0. The EST modes with k_ρ_e, k_⊥ρ_e > 1 are the fastest growing,with γ≫Ω_e/β_e. In the limit |Δ_e| β_e ≫ 1 (but |Δ_e| β_e ≪β_e^2/7), the maximum growth rate of the ESTinstability can be estimated analytically. Adopting the orderings k_ρ_e ∼1/√(log|Δ_e| β_e) ,k_ρ_e ∼ (|Δ_e| β_e)^1/2 , ω/k_ v_the∼ |Δ_e|^5/2β_e^3/2 ,it can be shown (see appendix <ref>) that the EST mode has the growth rateγ/Ω_e =k_ k_^3 ρ_e^4 (|Δ_e| - k_^2 ρ_e^2/β_e){1+ k_^2 ρ_e^2/k_^2 ρ_e^2[4 exp(-1/k_^2 ρ_e^2) +√()μ_e^1/2 k_^3 ρ_e^3 ]}^-1,where the term proportional to μ_e^1/2 is associated with Landau damping on the ion species.Taking the subsidiary limit k_ρ_e ≪ 1/√(log|Δ_e|β_e), we recover (<ref>). The EST mode's growth rate is, therefore, anticipated to be positive providedk_ρ_e < (|Δ_e| β_e)^1/2.It can then be shown that (<ref>) has the approximate maximum valueγ_max≈6 √(3)/25 √(5)(k_ρ_e)_peak[1-3 ^3/2/5μ_e^1/2 (k_ρ_e)_peak |Δ_e| β_e ]|Δ_e| (|Δ_e| β_e)^3/2Ω_e , at the wavenumbers(k_ ρ_e)_peak = (3 |Δ_e| β_e/5)^1/2 ,(k_ ρ_e)_peak = 1/√(log(24 |Δ_e| β_e/5)) [1-loglog(24 |Δ_e| β_e/5)/log24 |Δ_e| β/5] .The growth rate (<ref>) is plotted in figure<ref>b along with the numerically determined growth rate; reasonable agreement is found. 
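The perpendicular-wavenumber dependence of (<ref>) is simple enough to check in isolation. The sketch below evaluates the damping-free form γ ≈ k_∥k_⊥³ρ_e⁴(|Δ_e| − k_⊥²ρ_e²/β_e)Ω_e at fixed parallel wavenumber, neglecting the Landau and ion-damping corrections in the denominator of (<ref>), and confirms the quoted peak perpendicular wavenumber and the coefficient 6√3/25√5 ≈ 0.19; the parameter values are illustrative only.

\begin{verbatim}
import numpy as np

def gamma_est(kperp, kpar, Delta_e, beta_e):
    # damping-free limit of the EST growth rate, gamma/Omega_e, at fixed kpar
    return kpar * kperp**3 * (abs(Delta_e) - kperp**2 / beta_e)

Delta_e, beta_e, kpar = -1e-3, 1e5, 0.3        # |Delta_e|*beta_e = 100 (illustrative)
kperp = np.linspace(1e-2, np.sqrt(abs(Delta_e) * beta_e), 4000)
g = gamma_est(kperp, kpar, Delta_e, beta_e)
i = np.argmax(g)

coeff = 6.0 * np.sqrt(3.0) / (25.0 * np.sqrt(5.0))
print("numerical: kperp_peak = %.2f, gamma_max/Omega_e = %.3e" % (kperp[i], g[i]))
print("analytic : kperp_peak = %.2f, gamma_max/Omega_e = %.3e (coefficient %.3f)"
      % (np.sqrt(3.0 * abs(Delta_e) * beta_e / 5.0),
         coeff * kpar * abs(Delta_e) * (abs(Delta_e) * beta_e)**1.5, coeff))
\end{verbatim}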
We note that, for perpendicular wavenumbers k_⊥ρ_e ≳β_e^1/7, the characteristic quasi-perpendicular plasma modes in a Maxwellian plasma are not EST modes, but are instead whisper waves (see section <ref>).Therefore, when |Δ_e| β_e ≳β_e^2/7 [see (<ref>)], the expressions (<ref>) and (<ref>a) for the EST mode's maximum growth rate and theperpendicular wavenumber at which that growth is attained are no longer valid.Instead, when |Δ_e| β_e ≳β_e^2/7, the fastest-growing EST modes (which coexist with faster-growing unstable whisper waves) are those close to the scale k_⊥ρ_e ∼Δ_e^-1/5; extrapolating from (<ref>), we find that γ_ max∼ |Δ_e|^2/5Ω_e/√(log|Δ_e| β_e).§.§.§ Oblique transverse instabilityThe transverse instability (whose physical mechanism was discussed in section <ref>) can be excited for sufficiently large negativeelectron pressure ansotropies as well as positive ones; however, when Δ_e <0, the fastest-growing modes are highly oblique with respect to the background magnetic field as opposed toparallel to it. In contrast to the Δ_e > 0 case, the oblique transverse instability doesnot become the fastest-growing CES microinstability for all Δ_e ≪-β_e^-1, only becoming so once its maximum growth rate exceeds the electron Larmor frequency (whichoccurs when Δ_e ≲ -β_e^-1/3). While Δ_e >-β_e^-1/3, the fastest-growing oblique transverse modes, which have k_⊥ρ_e ∼ (|Δ_e| β_e)^1/2, are confined to the parallel wavenumbers satisfying k_ρ_e ≳ 1. Their growth is outcompeted bythe EST and whisper instabilites (see sections <ref> and<ref>, respectively), which have k_ρ_e < 1; this isillustrated numerically in figure <ref>a for a particularlarge, negative value of Δ_e β_e.As for their analytical characterisation, transverse modes have identical growth rates to those obtained in the Δ_e > 0 case, given by (<ref>a,b). For Δ_e < 0, only the first mode can have positive growth, and such growth is onlyrealised if k_ > k_. Nowtaking the quasi-perpendicular unmagnetised limit k_ρ_e ≫ k_ρ_e ≫ 1, wefind that this mode has the growth rateγ≈k_ v_the/√()(-Δ_e - k_^2 ρ_e^2/β_e) .This expression is mathematically identical to the parallel transverse instability (<ref>) (section <ref>), except withsubstitution k_→ k_; the maximum growth rate of the oblique transverse instability is, therefore,γ_max = 2/3 √(3 ) (|Δ_e| β_e)^1/2 |Δ_e| Ω_eat the (perpendicular) wavenumber (k_ρ_e)_peak = (Δ_e β_e/3)^1/2.(<ref>) is compared with the numerically determined growth ratein figure <ref>b; we find that the approximation isexcellent provided k_ρ_e ≳ 1. We note that, based on our analysis, the oblique transverse mode is anticipated always to have a smaller growth rate than the EST instability (<ref>) when 1 ≪ |Δ_e| β_e ≲β_e^2/7:γ_ EST/γ_ trans∼|Δ_e| β_e/√(log|Δ_e| β_e)≫ 1 . §.§.§ Whisper instabilityWhen Δ_e ≲ -β_e^-5/7 (but Δ_e ≫ -β_e^-1/3), the dominantCES microinstability is the CES whisper instability. Theinstability is so named, because it consists in the destablisation of the whisperwave, a plasma wave whose existence has not previously been identified: it is therefore of some interest. The likely reason forits previous neglect relates to the somewhat esoteric regime inwhich such a wave exists – a magnetised plasma with β_e ≫ 1 that might naively be expected to support essentiallyunmagnetised perturbations at k ρ_e ≫ 1. The energetically dominant magnetic component of the wave is perpendicular to both k and B_0 (viz., δ B_y),and the wave itself has no electron-number-density perturbation unless β_e is extremelylarge. 
Its operation (and also the operation of its instability in a CE plasma) involves both resonant and non-resonant interactionsbetween electrons and the wave. More specifically, it is the non-resonant interaction of electrons at the edge of their Larmor orbits with the parallel electric field associated with the whisper wave that gives rise to the phase-shifted current perturbation necessary for wave propagation, while the primary damping mechanisms (Landau and Barnes' damping, respectively) of whisper waves are mediated by resonant wave-particle interactions. The physical mechanism of this wave and its instability (which is most clearlyexplored within the quasi-perpendicular limit of gyrokinetics) will be discussed further in afuture paper.We characterise the whisper instability's growth analytically in the limits μ_e^1/2≪ k_ρ_e ≪1, k_ρ_e ≫ 1 and Δ_e β_e ≫ 1 under the orderings ω̃_e = ω/k_ v_the∼1/β_e^2/7∼1/k_^2 ρ_e^2∼1/Δ_e β_e , k_ρ_e ∼1/√(log|Δ_e| β_e)≪ 1.It can be shown (see appendix <ref>) that such modeshave complex frequencies ω/Ω_e= -i[√()/2 k_ρ_eexp(-1/k_^2 ρ_e^2) + k_ρ_e/8 √() k_^2 ρ_e^2] ± k_ρ_e √(√()/4 k_ρ_e (k_^2 ρ_e^2/β_e+Δ_e ) - [√()/2 k_^2 ρ_e^2exp(-1/k_^2 ρ_e^2) + 1/8 √() k_^2 ρ_e^2]^2).It is a simple matter to ascertain that the right-hand-side of (<ref>)is either purely real or purely imaginary, and thus modes are approximately either non-propagating withgrowth rate γ or purely oscillating with frequency ϖ. Thedispersion curves ϖ(k_⊥) and γ(k_⊥) are plotted in figure<ref>.To interpret (<ref>), we take subsidiary limits.We first consider 1 ≪ k_ρ_e ∼ (|Δ_e| β_e)^1/2≪β_e^1/7: in this case, the expression for the `+' root simplifies to the dispersion relation (<ref>) of the EST instability.However, when k_⊥ρ_e ≳β_e^1/7/2^4/7^1/7≈ 0.57 β_e^1/7, this simplification is no longer justifiable, andso when|Δ_e| β_e ≳5^6/7/2^10/7 3^4/7^3/7β_e^2/7≈ 0.79 β_e^2/7 ,the perpendicular wavenumber (<ref>a) of the EST instability's peak growth derived from (<ref>) is so large that (<ref>) is nolonger, in fact, a valid description of the EST mode's growth rate. Now considering the subsidiary limit k_⊥ρ_e ∼ (|Δ_e| β_e)^1/2≫β_e^1/7 and k_ρ_e ≪ 1/√(log|Δ_e|β_e) of (<ref>), we find two propagating modes:ω/Ω_e≈±^1/4/2 k_ρ_e √(k_ρ_e (k_^2 ρ_e^2/β_e+Δ_e )).If we set Δ_e = 0 in order to identify the underlying Maxwellian mode, this reduces toω/Ω_e≈±^1/4/2 k_ρ_e (k_ρ_e)^3/2/β_e^1/2,This dispersion relation, which does not coincide with any previously identified plasmawave, is that of the whisper wave. The presence of this wave in the case of Δ_e < 0 results ina purely unstable mode provided β_e^-1/7≪ k_ρ_e < (|Δ_e| β_e)^1/2 and retaining finite k_ρ_e.In this subsidiary limit, the growth rate of the instability is γ/Ω_e= - √()/2 k_ρ_eexp(-1/k_^2 ρ_e^2) ± k_ρ_e √(√()/4 k_ρ_e (|Δ_e| - k_^2 ρ_e^2/β_e) + /2 k_^4 ρ_e^4exp(-2/k_^2 ρ_e^2)). This has the maximum value γ_max≈^1/4/√(2)(k_ρ_e)_peak(|Δ_e|β_e)^1/4 |Δ_e|^1/2Ω_e , at the wavenumbers (k_ ρ_e)_peak = (|Δ_e| β_e/3)^1/2 ,(k_ ρ_e)_peak = 2/√(3 log|Δ_e| β_e) [1-4 log3 (log|Δ_e| β_e/4)/3 log|Δ_e| β_e] . Thus, the maximum growth rate of whisper instability has different scalings with |Δ_e| and β_e than either the EST instability (<ref>) or the oblique transverse instability (<ref>). When |Δ_e| β_e ≳β_e^2/7, (<ref>) implies that the growth rate γ continues to increase beyond the maximumvalue of k_⊥ρ_e at which the EST modes can exist, and thus thewhisper instability, if it is operating, is always dominant over the ESTinstability. 
Whether it is also dominant over the oblique transverse instability depends on the choice of β_e and Δ_e. We can quantify this explicitly, by considering the ratio of the oblique transverse instability's growth rate (<ref>) to that of the whisper instability: γ_ trans/γ_ whisper∼√(log(|Δ_e|β_e))(|Δ_e|β_e)^1/4|Δ_e|^1/2. We see that for |Δ_e|^3β_e ≪ 1, γ_ trans≪γ_ whisper. Thus for |Δ_e|^-7/5≪β_e ≪|Δ_e|^-3, the whisper instability dominates. This condition certainly holds forthe particular value of Δ_e considered in figure<ref>; to support our claim, in figure <ref>a weplot the analytical approximation (<ref>) along with thenumerically determined growth rate for the fixed values of k_⊥ρ_e and k_ρ_e, respectively, at which the whisper instability is predicted toachieve its maximum growth. The growth rate of the whisper instability, which is correctly captured by ouranalytic approximation, does indeed exceed that of the transverse instability byan appreciable factor. For β_e ≳ |Δ_e|^-3, (<ref>)implies that, in fact, γ/k_ v_the∼ 1. This violates the condition of validity of the method that we have generally used to evaluate CES microinstability growth rates numerically (see section <ref>, and also appendix <ref>).The divergence of the true growth rates (calculated by solving the full hot-plasma dispersion relation numerically)from those arising from the solution of the low-frequency (ω≪ k_ v_the) dispersion relation (<ref>) for increasing β_e is illustrated in figure <ref>b.For γ≳Ω_e, we find that the distinction between k_ρ_e < 1modes and k_ρ_e > 1 modes vanishes; futhermore, all modes (including the modes with k_ = 0) come to resemble the transverse instability when β_e ≫ |Δ_e|^-3; this feature, which indicatesthe emergence of yet another distinct CES instability, is discussed in the next section.§.§.§ Ordinary-mode instabilityThe final instability we consider in this paper is the CES ordinary-mode (electromagnetic)instability: the destabilisation of the ordinary mode at sub-electron-Larmor scales by negative electron pressure anisotropy. The bi-Maxwellian equivalent of theinstability was first identified by <cit.>; for a more recent linear study of the instability, see <cit.>. For the characteristically small electron pressure anisotropies that are associated with the CE electron-shear term, this instability can only arise at very large values of β_e. For purely perpendicular modes (k_ = 0) in a magnetised plasma, resonant wave-particle interactions cannot arise, and so the ordinary-mode's instability mechanism is non-resonant.The CES ordinary-mode instability is most simply characterised by consideringmodes that are exactly perpendicular to the guide magnetic field (viz., k_ =0). In this case, it can be shown (see appendix <ref>) that, if the ordinary mode is destabilised, its growth rate is given by the equation ∑_n = 1^∞2 γ^2/γ^2+n^2 Ω_e^2exp(-k_^2 ρ_e^2/2) I_n(k_^2 ρ_e^2/2) = -Δ_e-k_^2 d_e^2- exp(-k_^2 ρ_e^2/2) I_0(k_^2 ρ_e^2/2).This dispersion relation is very similar to that derived by <cit.> for the ordinary-mode instability in the case of a bi-Maxwelliandistribution. If the electron pressure anisotropy is insufficient to destabilisethe ordinary mode, the mode is undamped, and its real frequency satisfies ∑_n = 1^∞2 ϖ^2/n^2 Ω_e^2-ϖ^2exp(-k_^2 ρ_e^2/2) I_n(k_^2 ρ_e^2/2) = Δ_e+k_^2 d_e^2+ exp(-k_^2 ρ_e^2/2) I_0(k_^2 ρ_e^2/2).The dispersion curves ϖ(k_⊥) and γ(k_⊥) for aselection of different values of β_e and at fixed Δ_e are shown infigure <ref>. 
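The exactly perpendicular dispersion relation above can be solved numerically for the growth rate. A minimal sketch follows, assuming k_⊥²d_e² = k_⊥²ρ_e²/β_e for the electron-inertial term, truncating the Bessel sum at n = 200, and using illustrative parameters chosen to exceed the instability threshold derived in the next subsection; wavenumbers at which the right-hand side is negative are reported as stable.

```python
import numpy as np
from scipy.special import ive
from scipy.optimize import brentq

def lhs(gamma_over_Omega, b, nmax=200):
    # Sum_{n>=1} 2 g^2/(g^2 + n^2) exp(-b) I_n(b), with b = k_perp^2 rho_e^2 / 2
    n = np.arange(1, nmax + 1)
    return np.sum(2.0 * gamma_over_Omega**2 / (gamma_over_Omega**2 + n**2) * ive(n, b))

def rhs(k_perp_rho_e, Delta_e, beta_e):
    b = 0.5 * k_perp_rho_e**2
    # k_perp^2 d_e^2 = k_perp^2 rho_e^2 / beta_e; ive(0, b) = exp(-b) I_0(b)
    return -Delta_e - k_perp_rho_e**2 / beta_e - ive(0, b)

def growth_rate(k_perp_rho_e, Delta_e, beta_e):
    b = 0.5 * k_perp_rho_e**2
    target = rhs(k_perp_rho_e, Delta_e, beta_e)
    if target <= 0.0 or target >= lhs(1e3, b):   # no unstable root at this wavenumber
        return 0.0
    return brentq(lambda g: lhs(g, b) - target, 1e-8, 1e3)   # gamma / Omega_e

Delta_e, beta_e = -0.05, 1e5            # illustrative; |Delta_e|^3 beta_e = 12.5
for k in (10.0, 20.0, 30.0, 50.0):
    print(k, growth_rate(k, Delta_e, beta_e))
```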
We can use the ordinary-mode dispersion relation (<ref>) to derive the threshold for thisinstability at exactly perpendicular wavevectors. We note that the left-hand side of (<ref>)is strictly positive; thus for solutions to exist, it is required that thereexist a range of perpendicular wavenumbers over which the right-hand side of (<ref>)is also positive. For k_ρ_e ≲ 1, the right-hand side is always negative because |Δ_e| ≪ 1. We therefore consider the limit k_ρ_e ≫1 (assuming γ∼Ω_e), for which1/√() k_ρ_e∑_n = 1^∞2 γ^2/γ^2+n^2 Ω_e^2≈ |Δ_e|-k_^2 ρ_e^2/β_e- 1/√() k_ρ_e.The right-hand side of (<ref>) is maximal when k_ρ_e = (β_e/2√())^1/3,and, when maximal, also greater than zero if and only if|Δ_e|^3 β_e > 27/4 .Therefore the threshold (<ref>) is a necessarycondition for a purely perpendicular instability to exist. It is also asufficient condition, because the left-hand side of (<ref>)becomes arbitrarily small for small γ. Comparing the threshold (<ref>) to figure <ref>b, we conclude that the emergence of an instability with a purely perpendicularwavenumber at around β_e ∼ |Δ_e|^-3 is consistent withnumerical expectations. One can also show analytically that for γ≫Ω_e, the ordinary-modeinstability becomes identical to the oblique transverse instability (section <ref>). Motivated by thefact that γ≪ k_ v_the for the oblique transverse instability, or, equivalently, γ/Ω_e ≪ k_ρ_e, we first consider (<ref>)in the limit k_ρ_e ≫γ/Ω_e ∼ 1; we will subsequentlytake the subsidiary limit γ/Ω_e ≫ 1. The relevant dispersionrelation is (<ref>), which can be rewritten as1/√() k_ρ_e[γ/Ω_e(γ/Ω_e) - 1] ≈ -Δ_e-k_^2 ρ_e^2/β_e- 1/√() k_ρ_eusing the summation identity∑_n = 1^∞2 γ^2/γ^2+n^2 Ω_e^2 = γ/Ω_e(γ/Ω_e) - 1 .Now assuming γ≫Ω_e and using x≈ 1 for any number x ≫ 1,we deduceγ/Ω_e = -k_ρ_e/√()(Δ_e +k_^2 ρ_e^2/β_e) ,which is equivalent to (<ref>). Since |Δ_e| ≪ 1, our result is consistent with our initial assumption γ/Ω_e ≪ k_ρ_e.Thus, we conclude that, when β_e≫ |Δ_e|^-3, the CES ordinary-modeinstability is the dominant CES microinstability, but that in this limit, the instability is essentiallyidentical to the unmagnetised oblique transverse instability already described in section <ref>. § DISCUSSION AND CONCLUSIONSIn this paper, we have shown that the Chapman-Enskog description of classical, collisional plasma is valid fora wide range of plasma conditions.Microinstabilities are stabilised in such plasmas by two effects: collisionaldamping of instabilities, or β-dependentthresholds arising from a non-zero macroscopic magnetic field.By identifying the stable region for the leading-order corrections in the Chapman-Enskogexpansion, we have de facto identified the stable region for corrections to arbitrary order: ifone of the above effects is enough to maintain stability, any perturbations arising fromsmaller corrections will be unable to overcome the same effect. However, we have also demonstrated that for β≫ 1 there exists a significant region of the (d_e/L, λ/L) parameter space in which fast, small-scale instabilities are both possible and, in fact, generic. Indeed, in the strongly magnetised plasmas (that is, ρ_s ≪λ_s for both electrons and ions) on which we have focused our investigation, it transpires that collisional damping isnever able to prevent the most important kinetic instabilities, and thus strongly magnetised, high-β plasmascannot be modelled by standard Chapman-Enskog theory if λ/L ≳ 1/β. 
Thisfinding has significant implications for our understanding of various plasma environments, including those found in astrophysicalcontexts and also those created in laser-plasma experiments on high-energy laser facilities. When kinetic instabilities do arise in a Chapman-Enskog plasma, we have characterised all of them systematically,deriving simple expressions for their thresholds and growth rates in terms of basic parameters such as β, λ/L and the mass ratio μ_e = m_e/m_i using a novelanalytical approach. Three of the instabilities – the CET whistler instability (section <ref>), the CET slow-wave instability (section <ref>), and the CET long-wavelength kinetic-Alfvén wave (KAW) instability (section <ref>) – aredriven by heat fluxes in a Chapman-Enskog plasma, while the remaining ten – the CES mirrorinstability (section <ref>), the CES whistler instability (section<ref>), the CES transverse instability (sections <ref> and<ref>), the CES electron mirror instability (section<ref>), the CES firehose instability (sections<ref>, <ref>, <ref>, and <ref>), the CES parallel and oblique electron firehose instabilities (sections<ref> and <ref>, respectively), theCES electron-scale-transition (EST) instability (section <ref>), the CES whisper instability (section<ref>), and the CES ordinary-mode instability (section <ref>)– are driven by ion- and/or electron-velocity shears. While many of theseinstabilities, or versions thereof, had been considered previously, four of them (the CET slow-wave, CET long-wavelength KAW, CES EST and CES whisperinstabilities) are new; the whisper instability in particular seems to be of some interestboth conceptually and practically, because it is associated with a newlydiscovered plasma wave (the whisper wave), and the instability is much faster than itscompetitors over quite a wide range of values of λ/L and β. An important question to address is that of the dominantmicroinstability overall: in a given plasma (with fixed d_e/L, λ/L, and β), amongst the many instabilities that we have found,which is the dominant one? As we explained in section <ref>, the answer to this question depends on assumptions about the relative magnitude of temperature- and velocity-gradient scale lengths L_T and L_V.Assuming the scalings (<ref>) in section <ref> for a Chapman-Enskog plasma whose largest-scale fluid motions are sonic (in other words, Ma≲ 1), we find that, assuming also Ma λ/L_V to be largeenough to trigger all of the aforementioned instabilities, the three mostcompetitive ones are on electron scales: the CET whistler, CESwhisper, and transverse instabilities. These have growth rates [see (<ref>), (<ref>) and (<ref>), respectively]γ_whistler,T ∼η_eΩ_e ∼μ_e^1/4 Ma λ_i/L_V Ω_e , γ_whisper ∼|ϵ_e|^3/4 β_e^1/4/[log|ϵ_e| β_e]^1/2Ω_e∼(λ_i/L_V)^3/4 μ_e^3/8 Ma^3/4 β_e^1/4/[log(μ_e^1/2 β_e Ma λ_i/L_V )]^1/2 Ω_e , γ_trans ∼ϵ_e^3/2 β_e^1/2 Ω_e ∼μ_e^3/4 (λ_i/L_V)^3/2 β_e^1/2 Ω_e. Although the threshold for the CET whistler instability is lessrestrictive than for the whisper instability, at the whisper instabilitythreshold |ϵ_e| β_e ∼β_e^2/7∼ |ϵ_e|^-2/5 it follows thatγ_whistler,T/γ_whisper∼η_e [logϵ_e β_e]^1/2/ϵ_e^2/5∼μ_e^1/20(λ_i/L_V)^3/5[log(μ_e^1/2β_e Ma λ_i/L_V )]^1/2≪ 1 .Thus, the fact that CE plasmas typically support fluid motions on smaller scalesthan temperature gradients (see section <ref>) implies that CES microinstabilities are morepotent at sufficiently high plasma β_e. 
Yet, for β_e ≲μ_e^-1/2Ma^-1 L_V/λ_i, the CET whistler instability is the most rapidlygrowing microinstability. Finally, for β_e ≲μ_e^-1/4Ma^-1 L_V/λ_i, none of these electron-scale instabilities is triggered at all, withonly the ion-scale firehose and mirror instabilities operating.In short, the dominant microinstability is acomplicated function of the parameter regime. For reference, in table <ref> of section <ref> we show the (approximate) growth rates forall of the instabilities considered in this paper if the scalings (<ref>) are adopted, and figure <ref> shows a schematic stability map for the same case [A note of caution is warranted: if a Chapman-Enskog plasma is unstable tomicroinstabilities, then the heat fluxes and rate-of-strain tensors will be modified, potentially alteringboth L_T and L_V. There is no a priori reason to think that such a plasma willobey Braginskii-type scalings of the form (<ref>) – and so using this ordering to estimatemicroinstability growth rates is incorrect in kinetically unstable Chapman-Enskog plasmas.]. We believe that our study – which is the firstsystematic investigation of the kinetic stability of a classical, collisional, magnetised plasma –provides a significant step forward towards a comprehensive understanding of this state of matter.It is perhaps inevitable, however, given the conceptual extent of the problem,that there remain a number of questions concerning thestability of the Chapman-Enskog distribution function that we have not addressed here.In terms of linear theory, a numerical study usinga better collision operator to find the exact stability boundaries could be usefullycarried out – although we do not anticipate that this would lead to analteration of the basic scalings of those boundaries derived in this paper.Another issue not addressed by this work is that of linear coupling between CET andCES microinstabilities; it is not immediately obvious to what extent microinstabilities with similargrowth rates might aid each other's growth.The analysis could also be extended to two-species plasmas not in thermal equilibrium, as well as high-Z plasmas (with important applications in laser-plasmaphysics). Perhaps the most interesting future development of this work would be the determination of transport coefficients for plasmas falling into the unstable regimes. 
This requires quasi-linear or nonlinear treatment.Nonetheless, the results presented here can be seen as both a guide and a warning to those wishing to address this fundamental question.They are a guide in the sense that a correct characterisation of transport coefficientsrequires knowledge of the fastest-growing linear modes, which our study provides.But they are also as a warning in that an isolated treatment of one type of microinstability withoutreference to the full range of possible others could lead to a mischaracterisation of transport properties.The best hope for a correct calculation of transport in a weakly collisional, high-β plasma is,therefore, the following programme: for a plasma with particular conditions,identify the fastest microinstability, calculate the saturated magnitude of the fluctuationsproduced by it, determine the anomalous transport coefficients with those fluctuations present,re-calculate of the stability of this plasma, and so on, until a self-consistent picture emerges.It is likely that such a picture will involve a distribution function whose underlying nature dependson macroscopic motions, and hence transport coefficients that are themselves properties of flowshear, temperature gradients, and large-scale magnetic fields.Carrying out such calculations is a non-trivial task, but not impossible.To carry out this research, AFAB was supported by DOE awards DE-SC0019046 and DE-SC0019047through the NSF/DOE Partnership in Basic Plasma Science and Engineering, and also by UKRI (grant number MR/W006723/1). The work of AAS was supported in part by grants from STFC (ST/N000919/1 and ST/W000903/1) and EPSRC (EP/M022331/1 and EP/R034737/1), as well as by the Simons Foundation via a Simons Investigator award. This research was in part funded by Plan S funders (UKRI, STFC and EPSRC); for the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. AFAB would like to express his deepest gratitude to Matthew Kunz and Eliot Quataert for many helpful discussions about the paper generally, for highlighting some important considerations pertaining to the linear theory of the firehose instability, and for their ongoing support, without which the paper would never have been completed. All the authors would also like to thank Anatoly Spitkovsky for bringing several key early papers on the topic of this paper to the authors' attention. § GLOSSARY OF NOTATION USED IN THE PAPERAs an aid to reading, we provide a glossary of the notation that we usein our paper in tables <ref> and <ref> of this appendix. § DERIVATION OF THE CHAPMAN-ENSKOG DISTRIBUTION FUNCTION §.§ The Chapman-Enskog expansion in strongly magnetised plasmaThere exist a number of lucid explanations of how theCE distribution functions (<ref>) arise in a collisional, strongly magnetised two-species electron-ion plasma (ρ_s ≪λ_s for s = i, e) – the monographof <cit.>, but also (for example) <cit.>,Chapter 4. For that reason, we do not provide a full derivation of(<ref>). However, in this appendix, we describe a calculationthat allows for a direct derivation of the CE distribution function for astrongly magnetised collisional plasma, without first having to perform the CEexpansion for arbitrary values of ρ_s/λ_s. The first part of the calculation is the same as in <cit.>,pp. 76-78. For the reader's convenience, we present a summarised version. 
This requires quasi-linear or nonlinear treatment. Nonetheless, the results presented here can be seen as both a guide and a warning to those wishing to address this fundamental question. They are a guide in the sense that a correct characterisation of transport coefficients requires knowledge of the fastest-growing linear modes, which our study provides. But they are also a warning in that an isolated treatment of one type of microinstability without reference to the full range of possible others could lead to a mischaracterisation of transport properties. The best hope for a correct calculation of transport in a weakly collisional, high-β plasma is, therefore, the following programme: for a plasma with particular conditions, identify the fastest microinstability, calculate the saturated magnitude of the fluctuations produced by it, determine the anomalous transport coefficients with those fluctuations present, re-calculate the stability of this plasma, and so on, until a self-consistent picture emerges. It is likely that such a picture will involve a distribution function whose underlying nature depends on macroscopic motions, and hence transport coefficients that are themselves properties of flow shear, temperature gradients, and large-scale magnetic fields. Carrying out such calculations is a non-trivial task, but not impossible.

To carry out this research, AFAB was supported by DOE awards DE-SC0019046 and DE-SC0019047 through the NSF/DOE Partnership in Basic Plasma Science and Engineering, and also by UKRI (grant number MR/W006723/1). The work of AAS was supported in part by grants from STFC (ST/N000919/1 and ST/W000903/1) and EPSRC (EP/M022331/1 and EP/R034737/1), as well as by the Simons Foundation via a Simons Investigator award. This research was in part funded by Plan S funders (UKRI, STFC and EPSRC); for the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. AFAB would like to express his deepest gratitude to Matthew Kunz and Eliot Quataert for many helpful discussions about the paper generally, for highlighting some important considerations pertaining to the linear theory of the firehose instability, and for their ongoing support, without which the paper would never have been completed. All the authors would also like to thank Anatoly Spitkovsky for bringing several key early papers on the topic of this paper to the authors' attention.

§ GLOSSARY OF NOTATION USED IN THE PAPER

As an aid to reading, we provide a glossary of the notation that we use in our paper in tables <ref> and <ref> of this appendix.

§ DERIVATION OF THE CHAPMAN-ENSKOG DISTRIBUTION FUNCTION

§.§ The Chapman-Enskog expansion in strongly magnetised plasma

There exist a number of lucid explanations of how the CE distribution functions (<ref>) arise in a collisional, strongly magnetised two-species electron-ion plasma (ρ_s ≪λ_s for s = i, e) – the monograph of <cit.>, but also (for example) <cit.>, Chapter 4. For that reason, we do not provide a full derivation of (<ref>). However, in this appendix, we describe a calculation that allows for a direct derivation of the CE distribution function for a strongly magnetised collisional plasma, without first having to perform the CE expansion for arbitrary values of ρ_s/λ_s. The first part of the calculation is the same as in <cit.>, pp. 76-78. For the reader's convenience, we present a summarised version.
We consider the Maxwell-Vlasov-Landau equation (<ref>) ofspecies s in a frame co-moving with the fluid rest frame ofthat species. Defining the peculiar velocity variable v_s' = v-V_sin the fluid rest frame, (<ref>) becomesD f_s/D t + v_s'f_s + [Z_s e/m_s(E'+ v_s' ×B/c) -DV_s/D t]∂ f_s/∂v_s' - v_s' V_s ∂ f_s/∂v_s' =∑_s'ℭ(f_s,f_s') ,where E' ≡E + V_s ×B/cis the electric field measured in the moving frame, and D/D t≡/ t + V_sis the convective derivative. Initially ordering λ_s ∼ρ_s, andassuming the plasma is collisional (λ_s/L ≪ 1), we rearrange (<ref>)so that the largest terms are grouped together (on the left-hand side):∑_s'ℭ(f_s,f_s') -Z_s e/m_s c(v_s' ×B) ∂ f_s/∂v_s'= D f_s/D t + v_s'f_s + (Z_s e/m_sE' -DV_s/D t)∂ f_s/∂v_s' - v_s' ( V_s )∂ f_s/∂v_s'.We then expand the distribution functions f_s in small parameter λ_s/L ≪ 1:f_s = f_s^(0) + f_s^(1) + … ,and solve (<ref>) order by order in λ_s/L for f_s^(0) and f_s^(1). The subsequent treatment of the collision operator for the electron distributionfunction is a little different from the ion distribution function, so we treateach case individually. §.§.§ Electrons For the electrons, we can rewrite the total collision operator in a convenient form if we assume that T_i ∼ T_e, and V_i ∼v_thi:∑_s'ℭ(f_e,f_s') = ℭ_ee(f_e) + ℭ_ei^(0)(f_e) +ℭ_ei^(1)(f_e) ,where the electron-electron collision operator ℭ_ee(f_e) and electron-ion collision operators ℭ_ei^(0)(f_e) and ℭ_ei^(1)(f_e) are ℭ_ee(f_e) ≡ℭ(f_e,f_e) , ℭ_ei^(0)(f_e) ≡ν_ei(v) v^3 /v [1/v (I- v̂ v̂ ) f_e/v ] , ℭ_ee^(1)(f_e) ≡ν_ei(v) m_e v_e' u_ei/T_e n_e/^3/2 v_the^3exp(-ṽ_e^2) . Here ν_ei(v) is the velocity-dependent collision frequencyν_ei(v) ≡3 √()/4 τ_e(v_the/v)^3,and the total electron-ion collision operator ℭ(f_e,f_i) is givenby ℭ(f_e,f_i) = ℭ_ei^(0)(f_e) +ℭ_ei^(1)(f_e). This reformulation of the electron-ion collisionoperator is possible, because the assumptions T_i ∼ T_e, and V_i ∼v_thi mean that, from the perspective of the electrons, the ion distribution is sharply peaked around the ion fluid velocity: in other words, f_i ≈ n_iδ(v-V_i). Furthermore, the reformulation isconvenient because the total electron collision operator (<ref>)becomes independent of the ion distribution function. Thus, the asymptotic expansion (<ref>)for the electron distribution function is decoupled from the ions. Substituting (<ref>), the ordered kinetic equation (<ref>) for the electron distribution becomesℭ_ee(f_e) + ℭ_ei^(0)(f_e) + e/m_e c (v_e' ×B )∂f_e/∂v_e' = D f_e/D t + v_e' f_e - (e/m_e E' +D V_e/D t)∂f_e/∂v_e' - v_e' ( V_e ) ∂f_e/∂v_e'- ℭ_ei^(1)(f_e) ,where we note that under assumptions T_i ∼ T_e, and V_i ∼v_thi , ℭ_ei^(1)(f_e) ∼μ_e^1/2ℭ_ei^(0)(f_e) is much smaller thanℭ_ei^(0)(f_e).Then applying expansion (<ref>) with s = egivesℭ_ee(f_e^(0)) + ℭ_ei^(0)(f_e^(0)) + e/m_e c(v_e' ×B) ∂ f_e^(0)/∂v_e' = 0 .It can be shown <cit.> that the only solution of (<ref>)is (as expected) a Maxwellian distribution:f_e^(0) = n_e/^3/2 v_the^3exp(-|v_e'|^2/v_the^2).After some algebraic manipulation, it can also be shown that the leading-order perturbed electron distribution function f_e^(1)(v) satisfiesℭ_ee(f_e^(1)) +ℭ_ei^(0)(f_e^(1)) + e/m_e c( v_e' ×B)f_e^(1)/v_e'= {(|v_e'|^2/v_the^2 -5/2)v_e' ∇log T_e + v_e' [R_e/p_e + m_e u_eiν_ei(v)/T_e]+ m_e/2 T_e(v_e'v_e' - |v_e'|^2/3I) :W_e } f_e^(0) ,where R_e and so on are defined in the main text, in equations(<ref>). §.§.§ Electrons in strongly magnetised limit We now solve for f_e^(1) in a strongly magnetised plasma, i.e., ρ_e ≪λ_e. 
In this subsidiary limit, both the collision integrals on theleft-hand-side of (<ref>) and the terms on its right-hand side are much smaller than the termproportional to the magnetic field; in other words, v_e' ×B f_e^(1)/v_e'≈ 0 .We then define coordinate system {v_e',v_e',ϕ'} by v_e' ≡ẑv_e', v_e' = v_e' - v_e'ẑ, v_e' = |v_e' | and ϕ' = ϕ, where ẑ = B/|B| and ϕ is the gyrophase angle. The velocity gradient operator in this system isf_e^(1)/v_e'= ẑ f_e^(1)/ v_e' + v_e'/v_e' f_e^(1)/ v_e' + 1/v_e'^2v_e' ×ẑ f_e^(1)/ϕ'.This, when combined with (<ref>), implies that f_e^(1) is approximately gyrotropic: f_e^(1)(v') ≈⟨ f_e^(1)⟩_ϕ'(v_',v_'),where we have defined the gyro-average ⟨ f_e^(1)⟩_ϕ' of the electron distribution function by⟨ f_e^(1)⟩_ϕ'≡1/2 ∫_0^2 dϕ' f_e^(1). Now gyro-averaging (<ref>), we obtainℭ_ee(⟨ f_e^(1)⟩_ϕ') +ℭ_ei^(0)(⟨ f_e^(1)⟩_ϕ') = {[(|v_e'|^2/v_the^2 -5/2) ∇_log T_e + R_e/p_e + m_e u_eiν_ei(v)/T_e] v_e' + (ẑẑ - 1/3I) :W_e (v_e'^2/v_the^2 - v_e'^2/2 v_the^2) } f_e^(0),where we have used the gyrophase isotropy of the collision operators to commute the order of gyro-averaging on the left-hand side. (<ref>) is a linear equation for ⟨ f_e^(1)⟩_ϕ', so by tensor invariance, it must have a solution of the form⟨ f_e^(1)⟩_ϕ'=τ_e {[A_e^T(|v_e'|/v_the) ∇_log T_e +A_e^R(|v_e'|/v_the) R_e/p_e+ (A_e^u(|v_e'|/v_the) -1)m_e u_ei/T_e τ_e] v_e' + C_e(|v_e'|/v_the) (ẑẑ - 1/3I) :W_e (v_e'^2/v_the^2 - v_e'^2/2 v_the^2) } f_e^(0),where τ_e is defined by equation (<ref>a) in the main text, and the isotropic functions A_e^T(|v_e'|/v_the), A_e^R(|v_e'|/v_the)and C(|v_e'|/v_the) are determined byinverting the collision operators (see appendix <ref> for an example of how this calculation is done for a simple choice of collision operator). The total electron CEdistribution function becomesf_e(v_e',v_e') ={1 + τ_e [A_e^T(|v_e'|/v_the) ∇_log T_e +A_e^R(|v_e'|/v_the) R_e/p_e+(A_e^u(|v_e'|/v_the) -1 ) m_e u_ei/T_e τ_e] v_e' + C_e(|v_e'|/v_the) (ẑẑ - 1/3I) :W_e (v_e'^2/v_the^2 - v_e'^2/2 v_the^2) } f_e^(0).We emphasize that this quantity is expressed in the rest frame of the electron fluid[Reintroducing the parameters η_e^T, η_e^R, η_e^u and ϵ_e into (<ref>) gives the expression (<ref>) that is quoted in section <ref>.]. Finally, we recover (<ref>a) by transforming (<ref>) into the frameco-moving with the ion fluid. Since u_ei∼λ_e v_the/L ≪v_the, this transformation applied to the non-Maxwellian component f_e^(1) of the electron distribution function only produces corrections of magnitude ∼(λ_e/L) f_e^(1), and thus any correction terms are negligible. The only importantcontribution is from the shifted Maxwellian:exp(-|v_e'|^2/v_the^2) ≈exp(-ṽ_e^2)[1 + 2 ṽ_eu_ei/v_the] + … ,where ṽ_e = (v-V_i)/v_the. Combining (<ref>) with(<ref>), we deducef_e(ṽ_e,ṽ_e) ={1 + [A_e^T(ṽ_e) λ_e ∇_log T_e +A_e^R(ṽ_e) λ_e R_e/p_e + A_e^u(ṽ_e) λ_e m_e u_ei/T_e τ_e] ṽ_e+ τ_e C_e(ṽ_e) (ẑẑ - 1/3I) :W_e (ṽ_e^2 - ṽ_e^2/2) } f_e^(0).Introducing the parameters η_e^T, η_e^R, η_e^u and ϵ_e defined by equations (<ref>a), (<ref>b), (<ref>c) and (<ref>e) gives the final result(<ref>a).§.§.§ Ions The derivation of the equivalent result (<ref>b) for the ion distribution is mostly similar, but withone key difference: the total ion collisionoperator is dominated by the ion-ion collision operator ℭ_ii(f_i) ≡ℭ(f_i,f_i):∑_s'ℭ(f_i,f_s') = ℭ_ii(f_i) + ℭ(f_i,f_e) ≈ℭ_ii(f_e) .This is because ion-electron collisions are small in the mass ratio compared to ion-electroncollisions. 
After some algebra, it can be shown that the equivalent of (<ref>) for the perturbed ion distribution f_i^(1) isℭ_ii(f_i^(1)) -Z_i e/m_i c(v_i' ×B)f_i^(1)/v_i'= [(|v_i'|^2/v_thi^2 -5/2)v_i' ∇log T_i + m_i/2 T_i(v_i'v_i' - |v_i'|^2/3I) :W_i ] f_i^(0),where the lowest-order distribution is Maxwellian:f_i^(0)(v) = n_i/^3/2 v_thi^3exp(-|v_i'|^2/v_thi^2).We emphasise that the main differences between (<ref>) and (<ref>)are the presence of only one collision operator on the left-hand side of (<ref>) and the absence of any termproportional to the ion-electron friction force R_ie on theright-hand-side of (<ref>). Once (<ref>)has been written down, the method for obtaining the ion CE distribution function(<ref>b) in a strongly magnetised plasma is near-identical tothat of the electron distribution function. Gyro-averaging givesℭ_ii(f_i^(1)) = [(|v_i'|^2/v_thi^2 -5/2) v_i' ∇_log T_i + (ẑẑ - 1/3I) :W_i (v_i'^2/v_thi^2 - v_i'^2/2 v_thi^2) ] f_i^(0), from which it follows thatf_i(v_i',v_i') =[1 + τ_i A_i(|v_i'|/v_thi) v_i' ∇_log T_i + C_i(|v_i'|/v_thi) (ẑẑ - 1/3I) :W_i (v_i'^2/v_thi^2 - v_i'^2/2 v_thi^2) ] f_i^(0).On substituting for parameters η_i and ϵ_i defined by(<ref>d) and (<ref>f), respectively, we obtain(<ref>b). §.§ Deriving isotropic functions of velocity for the CE solutionIn this appendix, we illustrate how to calculate the isotropic functions A_e^T(ṽ_e),A_e^R(ṽ_e), A_e^u(ṽ_e), A_i(ṽ_i), C_e(ṽ_e) and C_i(ṽ_i) arising in the electron and ion CE distribution functions for the particularcases of two simplified collision operators: the Krook collision operator and the Lorentzcollision operator. §.§.§ Krook collision operatorThe Krook collision operator <cit.> for species s is given byℭ_K(f_s) ≡ -1/τ_s(f_s - f_s^(0)) ,where τ_s is the collision time of species s (assumed velocity-independent), and f_s^(0) = n_s/^3/2 v_ths^3exp(-|v_e'|^2/v_ths^2)is a Maxwellian distribution with density n_s, mean velocity V_e and temperature T_s determined from f_s via (<ref>). For thischoice of collision operator, i.e., assuming∑_s'ℭ(f_s,f_s')=ℭ_K(f_s)for all particle species, calculating the CE distribution function isparticularly simple. Substituting equation (<ref>) for theelectron CE distribution function into the electron Krook collision operator, we findℭ_K(f_e) = - {[A_e^T(|v_e'|/v_the) ∇_log T_e +A_e^R(|v_e'|/v_the) R_e/p_e+ (A_e^u(|v_e'|/v_the) -1)m_e u_ei/T_e τ_e] v_e' + C_e(|v_e'|/v_the) (ẑẑ - 1/3I) :W_e (v_e'^2/v_the^2 - v_e'^2/2 v_the^2) } f_e^(0).By comparison to (<ref>), which, on substituting the Krook operator, becomes ℭ_K(f_e^(1))= {[(|v_e'|^2/v_the^2 -5/2) ∇_log T_e + R_e/p_e + m_e u_ei/T_e τ_e] v_e' + (ẑẑ - 1/3I) :W_e (v_e'^2/v_the^2 - v_e'^2/2 v_the^2) } f_e^(0),we can immediately deduce thatA_e^T(ṽ_e) = -(ṽ_e^2-5/2) , A_e^R(ṽ_e) = -1 , A_e^u(ṽ_e) = 0 , C_e(ṽ_e) = -1 .The CE electron-ion-drift term vanishes for a Krook operator because the operator neglects inter-species collisions; by the same token, neither T_i and T_e nor V_i and V_e will equilibrate. For the ion CE distribution, it follows from (<ref>) substituted into (<ref>) thatℭ_K(f_i) = - [ A_i(|v_i'|/v_thi) v_i' ∇_log T_i + C_i(|v_i'|/v_thi) (ẑẑ - 1/3I) :W_i (v_i'^2/v_thi^2 - v_i'^2/2 v_thi^2) ] f_i^(0) ,which gives, on comparison with (<ref>), thatA_i(ṽ_i) = -(ṽ_i^2-5/2) , C_i(ṽ_i) = -1 . §.§.§ Lorentz collision operatorThe Lorentz collision operator for species s is defined byℭ_L(f_s) ≡ν_s(v) v^3 /v[1/v(I- v̂v̂)f_s/v] ,where ν_s(v) is a velocity-dependent scattering rate. 
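Using the Krook coefficients just derived, the electron CE distribution can be evaluated explicitly. The following sketch (illustrative values of η_e and ε_e; friction and electron-ion-drift terms dropped) shows the structure of the two non-Maxwellian corrections: the temperature-gradient term is odd in ṽ_∥ and carries a parallel heat flux, while the shear term is even and carries pressure anisotropy, and neither perturbs the density.

```python
import numpy as np

def f_CE_electron(v_par, v_perp, eta_e=1e-2, eps_e=1e-2):
    # Non-dimensionalised CE electron distribution with Krook coefficients
    # A_e^T = -(v^2 - 5/2), C_e = -1 (friction and electron-ion-drift terms dropped)
    v2 = v_par**2 + v_perp**2
    A_T = -(v2 - 2.5)
    C = -1.0
    return np.exp(-v2) * (1.0 + eta_e * A_T * v_par
                          + eps_e * C * (v_par**2 - 0.5 * v_perp**2))

v = np.linspace(-4, 4, 9)
print(f_CE_electron(v, 0.0))             # skewed along the magnetic field (heat flux)
print(f_CE_electron(0.0, np.abs(v)))     # anisotropic perpendicular distortion (shear)
```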
We emphasise that the Lorentz collision operator is still simplified and physically complete compared to the full Landau collision operator, as it merely isotropises thedistribution function over long times. However, such an operator does arise asthe largest component of the electron-ion collision operator [see (<ref>b) in appendix<ref>], and is, in fact, the exact electron collision operator in the limit of highly-chargedions: the so called `Lorentz approximation' <cit.>. To calculate the electron CE distribution function, we substitute (<ref>)into the collision operator (<ref>) (with s = e). Using the identities/v [1/v(I- v̂ v̂ ) /v ( a v) ] = - 2 a v/v^3, /v [1/v(I- v̂ v̂ ) /v ( v A v )] = -6 v A v/v^3 for any constant vector a and any symmetric, traceless, constantmatrix A, it follows thatℭ_L(f_e) = - ν̂_e(ṽ_e) {[2 A_e^T(|v_e'|/v_the) ∇_log T_e +2 A_e^R(|v_e'|/v_the) R_e/p_e+ 2 (A_e^u(|v_e'|/v_the) -1)m_e u_ei/T_e τ_e] v_e'+ 6 C_e(|v_e'|/v_the) (ẑẑ - 1/3I) :W_e (v_e'^2/v_the^2 - v_e'^2/2 v_the^2) } f_e^(0),where ν̂_s≡ν_s(ṽ_s) τ_s is the non-dimensionalisedcollision rate for species s. As with the Krook operator, we compare (<ref>) to(<ref>), substituting a Lorentz collision operator for the latter, viz., ℭ_L(f_e^(1))= {[(|v_e'|^2/v_the^2 -5/2) ∇_log T_e + R_e/p_e + m_e u_eiν_e(ṽ_e)/T_e] v_e' + (ẑẑ - 1/3I) :W_e (v_e'^2/v_the^2 - v_e'^2/2 v_the^2) } f_e^(0).We deduce from the comparison thatA_e^T(ṽ_e) = -1/2 ν̂_e(ṽ_e) (ṽ_e^2-5/2) , A_e^R(ṽ_e) = -1/2 ν̂_e(ṽ_e) , A_e^u(ṽ_e) = 1/2 , C_e(ṽ_e) = -1/6 ν̂_e(ṽ_e) .The isotropic functions A_i(ṽ_i) and C_i(ṽ_i), which are given by A_i(ṽ_i) = -1/2 ν_i(ṽ_i) τ_i (ṽ_i^2-5/2) , C_i(ṽ_i) = -1/6 ν_i(ṽ_i) τ_i . can be deducedin an analogous manner. § DERIVATION OF HOT, MAGNETISED PLASMA DISPERSION RELATION FOR ARBITRARY DISTRIBUTION FUNCTIONSIn this appendix we re-derive the hot-plasmadispersion relation, given by (<ref>) in section <ref> <cit.>. Our derivation also introduces a (simplified)collision operator in order to show that substitution (<ref>) stated in section <ref>provides a simple technique for including the effect of collisions on linearelectromagnetic perturbations. Consider a kinetic, magnetised plasma in equilibrium composed of one electron species and multiple ionsspecies, with (assumed constant) background magnetic field B_0. As in section <ref>, we denote the (gyrotropic) equilibrium distribution function of species s as f_s0 = f_s0(v_,v_). and then consider a collisionless, linear perturbation δ f_s to this equilibrium state, with wavevector k and complex frequency ω:δ f_s = δ f_s exp{i(𝐤𝐫 - ω t)}.The electromagnetic perturbations associated with the perturbed distribution functionshave the forms given in (<ref>), viz.,δE =δE exp{i(𝐤 𝐫 - ωt)},δB =δB exp{i(𝐤 𝐫 - ωt)} ,and satisfy Faraday's law and the Maxwell-Amp̀ere's law: δB/t =- c×δE , ×δB =4 /c δj + 1/c δE/t,where the current perturbation isδj = δjexp{i(𝐤𝐫 - ω t)}= ∑_s Z_s e ∫d^3 v v δ f_s .To close these equations, we relate δ f_s to the electromagnetic field perturbations by linearising the Maxwell-Vlasov-Landau equation(<ref>). 
The linearisation f_s= f_s0 + δ f_s then gives that the perturbed distribution function of species s satisfies δf_s/ t + v∇δ f_s + Z_s e/m_s c(v×B_0 ) δf_s/v = -Z_s e/m_s(δE + v×δB/c)f_s0/v -ν_s δ f_s ,where we have replaced the full linearised collision operator with a simplified Krook collision operator with constant collision frequency ν_s = τ_s^-1 for species s.For any particular equilibrium distribution function, (<ref>a),(<ref>b),(<ref>) and (<ref>)are a closed set of governing equations.We now write these equations in terms of k and ω using (<ref>),(<ref>a), and (<ref>b): -i ωδB = -i c k ×δE, i k ×δB=4 /c δj - i ω/c δE, δj =∑_s Z_s e ∫d^3 v v δf_s , (-i ω̂_s + i k v + Ω̃_s /ϕ) δf_s =-Z_s e/m_s (δE + v ×δB/c) f_s0/v,where we have defined the (signed) Larmorfrequency of species s asΩ̃_s ≡Z_s e B_0/m_s c = Z_s/|Z_s|Ω_s ,and introduced the modified complex frequency ω̂_s ≡ω + iν_s. Note that Z_e = -1, so that Ω̃_e < 0. We then eliminate δBin (<ref>b) and (<ref>d) using (<ref>a) to givek^2 c^2/ω^2 [δE - k̂ (k̂ ·δE)]=4 i/ωδj - δE, δj =∑_s Z_s e ∫d^3 v v δf_s , (-i ω̂_s + i k v + Ω̃_s /ϕ) δf_s =-Z_s e/m_s [δE + k/ω v ×(k̂ ×δE)] f_s0/v . Next, we derive an expression for δ f_s in terms of δE. For arbitrary wavelengths compared to the Larmor radius ρ_s of species s, expressingδ f_s in terms of the equilibrium distribution function and δE requires inversion of the gyrophase-angle derivative in (<ref>). This can be done for any f_s0 in an orthonormal coordinate system with basis vectors {x̂,ŷ,ẑ} defined by equations (<ref>). By Fourier transforming δ f_s in ϕ, it can then be shown thatδ f_s = -Z_s e i/m_s ω( f_s0/ v_-v_/v_ f_s0/ v_) ẑδE + exp(-i k_ρ̃_s ṽ_ssinϕ) ∑_n = -∞^∞δ f_s,nexp(i m ϕ) ,where the series coefficients are given by δ f_s,n = -Z_s e i/m_s1/ω̂_s - k_ v_ - n Ω̃_s[ f_s0/ v_+k_/ω(v_ f_s0/ v_-v_ f_s0/ v_)] u_n^*δE,and the vector u_n in the basis {x̂,ŷ,ẑ} isu_n = v_/v_J_n(k_ρ̃_s ṽ_s) ẑ + n J_n(k_ρ̃_s ṽ_s)/k_ρ̃_s ṽ_sx̂ - i J_n'( k_ρ̃_s ṽ_s) ŷ,J_n( k_ρ̃_s ṽ_s) denoting the n-th order Bessel function of the first kind. We can then take advantage of the independence of f_s0 of the gyroangle to show that the current perturbation isδj= - ∑_s 2Z_s^2 e^2 i/m_s ω∫_-∞^∞d v_∫_0^∞d v_(v_ f_s0/ v_-v_ f_s0/ v_) v_ẑ(ẑδE) + ∑_s 2Z_s e ∫_C_Ld v_∫_0^∞d v_ v_^2 ∑_n = -∞^∞δ f_s,nu_n ,where C_L denotes the usual Landau contour. This can be written as Ohm's law:δj= σδE ,where σ is the conductivity tensor. In the absence ofcollisions (ν_s = 0), this is given by (<ref>). If the collision frequency ν_s ≠ 0 is non-zero, then ω̂_s/|k_| v_ths = ω̃_s + i/|k_| τ_s v_ths=ω̃_s + i/|k_| λ_s ,from which the substitution (<ref>) proposed in section <ref> follows. Substituting Ohm's law (<ref>) into Ampère's law (<ref>a) gives thesingular nonlinear eigenvalue equation[c^2 k^2/ω^2(k̂k̂-I)+ 𝔈] δE =0,where𝔈≡I + 4 i/ωσis the plasma dielectric tensor (<ref>).Taking the determinant of (<ref>) gives thedesired result (<ref>).§ ELECTROSTATIC INSTABILITIES OF CE PLASMA In this appendix, we calculate the electrostatic hot-plasma dispersion relationfor arbitrary distribution functions (appendix <ref>). We then show (appendix <ref>) that for frequencies ω such that ω̃_s = ω/k_ v_ths≪ 1, the dominant contribution to the longitudinal conductivityk̂σk̂ is from the Maxwellian component, and strictly positive;the small O(η_s,ϵ_s) non-Maxwellian distortion associated with the CE distribution function results in only an O(η_s,ϵ_s) distortion to k̂σk̂. 
We then illustrate the possibility of electrostatic instabilities associatedwith the CE distribution function by calculating the growth rate of the parallel CEbump-on-tail instability (appendix <ref>). Finally,in appendix <ref>, we show that the only electrostatic instabilities that canoccur have a growth rate which is exponentially small in dimensionlessparameters O(η_s,ϵ_s), for arbitrary frequencies. Thus, itfollows that electrostatic instabilities generally have a small growth rate incomparison to electromagnetic instabilities for a CE plasma. §.§ The electrostatic hot-plasma dispersion relationBeginning from the singular eigenvalue equation (<ref>), viz., [c^2 k^2/ω^2(k̂k̂-I)+ 𝔈] δE =0,we consider the electrostatic modes, for which δE = (k̂δE) k̂. For them, the hot-plasma dispersion relation becomes 𝔈_33 = k^2 + 4 i/ωk̂σk̂= 0.Employing the expression (<ref>) for the conductivity tensor, wecalculate the longitudinal conductivity:k̂σk̂= - i/4 ω∑_s ω_ps^2 [ 2/√()k_^2/k^2∫_-∞^∞dṽ_s ṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s) + ω̃_s2/√()∫_C_Ldṽ_s∫_0^∞dṽ_sṽ_s^2 Ξ_s(ṽ_s,ṽ_s) ∑_n = -∞^∞k̂R_snk̂/ζ_sn -ṽ_s] ,where k̂R_snk̂=J_n(k_ρ̃_s ṽ_s)^2/k^2 ρ̃_s^2 ṽ_s^2(n^2 + 2 n k_ρ̃_s ṽ_s + k_^2 ρ̃_s^2 ṽ_s^2 ) =k_^2 J_n(k_ρ̃_s ṽ_s)^2/k^2 ṽ_s^2(n/k_ρ̃_s + ṽ_s)^2 .By way of the identity∑_n = -∞^∞ J_n(k_ρ̃_s ṽ_s)^2 (n/k_ρ̃_s + ṽ_s)^2/ζ_sn -ṽ_s = -ṽ_s + ω̃_s∑_n = -∞^∞ J_n(k_ρ̃_s ṽ_s)^2 n/k_ρ̃_s + ṽ_s/ζ_sn -ṽ_s,which follows directly from the Bessel function identity∑_n=-∞^∞ J_n(k_ρ̃_s ṽ_s)^2 = 1 ,it follows thatω̃_s 2/√()∫_C_Ldṽ_s∫_0^∞dṽ_sṽ_s^2 Ξ_s(ṽ_s,ṽ_s) ∑_n = -∞^∞k̂R_snk̂/ζ_sn -ṽ_s= - ω̃_s2/√()k_^2/k^2∫_C_Ldṽ_s ṽ_s∫_0^∞dṽ_s[f̃_s0/ṽ_s + Λ_s(ṽ_s,ṽ_s)/ω̃_s]+ ω̃_s2/√()k_^2/k^2∫_C_Ldṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s)∑_n = -∞^∞ J_n(k_ρ̃_s ṽ_s)^2 n/k_ρ̃_s + ṽ_s/ζ_sn -ṽ_s+ ω̃_s^2 2/√()k_^2/k^2∫_C_Ldṽ_s∫_0^∞dṽ_sf̃_s0/ṽ_s∑_n = -∞^∞ J_n(k_ρ̃_s ṽ_s)^2 n/k_ρ̃_s + ṽ_s/ζ_sn -ṽ_s= -2/√()k_^2/k^2∫_-∞^∞dṽ_s ṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s)+ 2 ω̃_s^2/√()∫_C_Ldṽ_s∫_0^∞dṽ_s∑_n = -∞^∞(ṽ_sf̃_s0/ṽ_s + n/k_ρ̃_sf̃_s0/ṽ_s)J_n(k_ρ̃_s ṽ_s)^2/ζ_sn -ṽ_s,where ω̃_s≡ω/k v_ths. We conclude thatk̂σk̂ = - i/4 ω∑_s ω_ps^2 [ ω̃_s^2 2/√()∫_C_Ldṽ_s∫_0^∞dṽ_s∑_n = -∞^∞Π_n(ṽ_s,ṽ_s)J_n(k_ρ̃_s ṽ_s)^2/ζ_sn -ṽ_s] ,whereΠ_n(ṽ_s,ṽ_s) ≡ṽ_sf̃_s0/ṽ_s + n/k_ρ̃_sf̃_s0/ṽ_s.The electrostatic component of the dielectric tensor is then𝔈_33 = k^2 + ∑_s k_Ds^2 [ 1/√()∫_C_Ldṽ_s∫_0^∞dṽ_s∑_n = -∞^∞Π_n(ṽ_s,ṽ_s)J_n(k_ρ̃_s ṽ_s)^2/ζ_sn -ṽ_s] ,and the electrostatic hot-plasma dispersion relation (<ref>) becomesk^2 + ∑_s k_Ds^2 [ 1/√()∫_C_Ldṽ_s∫_0^∞dṽ_s∑_n = -∞^∞Π_n(ṽ_s,ṽ_s)J_n(k_ρ̃_s ṽ_s)^2/ζ_sn -ṽ_s] = 0 ,where the Debye wavenumber k_Ds of species s is defined byk_Ds≡√(2)ω_ps/v_ths.§.§ The electrostatic dielectric response at low frequenciesIn this appendix, we perform a Taylor expansion of the electrostatic component 𝔈_33of the dielectric tensor (<ref>) in ω̃_s≪ 1. 
Before carrying out the expansion, we first substitute the identityΠ_n(ṽ_s,ṽ_s) = ω̃_sΞ_s(ṽ_s,ṽ_s)+ (ṽ_s-ζ_sn) f̃_s0/ṽ_sinto (<ref>), which then becomes𝔈_33= k^2 - ∑_s k_Ds^2 1/√()∫_-∞^∞dṽ_s∫_0^∞dṽ_sf̃_s0/ṽ_s+ ∑_s k_Ds^2 [ ω̃_s/√()∫_C_Ldṽ_s∫_0^∞dṽ_sΞ_s(ṽ_s,ṽ_s) ∑_n = -∞^∞J_n(k_ρ̃_s ṽ_s)^2/ζ_sn -ṽ_s] .Now carrying out the Taylor expansion in ω̃_s≪ 1, we see that, to the leading order in this expansion,𝔈_33≈ k^2 + ∑_s k_Ds^2 1/√()∫_-∞^∞dṽ_sf̃_s0(ṽ_s,0) .For the CE distributionf̃_s0(ṽ_s,0) = exp(-ṽ_s^2) {1+η_s A_s(ṽ_s) ṽ_s + ϵ_s C_s(ṽ_s) ṽ_s^2} ,we have1/√()∫_-∞^∞dṽ_sf̃_s0(ṽ_s,0) = 1 + ϵ_s/2 √()∫_0^∞dṽ_sṽ_s^2 C_s(ṽ_s) exp(-ṽ_s^2) ,where the term in the CE distribution function proportional to η_s has vanished on account of having odd parity with respect to ṽ_s. We conclude that the non-Maxwellian contribution to (<ref>) is O(η_s,ϵ_s) in comparison to the Maxwellian contribution, and sothe electrostatic component of the dielectric tensor for low-frequency fluctuations is just𝔈_33≈ k^2 + ∑_s k_Ds^2 , or, writing (<ref>) explictly in terms of ω̃_sand the plasma frequency ω_ps of species s, 𝔈_33≈ k^2 + ∑_s ω_ps^2/ω^22 k_^2/k^2ω̃_s^2 . It follows that 𝔈_33^(0) and 𝔈_33^(1) defined by (<ref>) are given by 𝔈_33^(0) = 0 ,𝔈_33^(1) = ω_pe^2/ω^2 ∑_s Z_s T_e/T_s 2 k_^2/k^2 . where we have neglected the displacement current term (k ≪ k_De), and the temperature of species s is denoted by T_s.§.§ Existence of electrostatic instabilities for a CE plasma That electrostatic instabilities can exist is most simply shown in the limit ofpurely parallel, high-frequency fluctuations: k_ = 0, k_ = k, ω̃_s = ω̃_s≫1, andϖ≡ ω≫ ω≡γ.For purely parallel modes, the only non-zero term in the sum of Bessel functionsin the electrostatic hot-plasma dispersion relation (<ref>)is the n = 0 term; thus, (<ref>) simplifies to𝔈_33= k^2 + ∑_s k_Ds^2 ( 1/√()∫_C_Ldṽ_s∫_0^∞dṽ_sṽ_sf̃_s0/ṽ_s1/ω̃_s-ṽ_s) = 0 .Next, we expand (<ref>) around the real frequencyϖ, using (<ref>); this gives𝔈_33(ω, k) ≈𝔈_33(ϖ, k) + iγ∂𝔈_33/∂ω(ϖ, k) .Taking the imaginary part of (<ref>) allows for anexpression for γ to be derived in terms of ϖ:γ≈ - [∂𝔈_33/∂ω(ϖ, k)]^-1 𝔈_33(ϖ, k).To calculate γ, we use 𝔈_33(ϖ, k) = k^2 + ∑_s k_Ds^2 ( 1/√() P∫d ṽ_s ∫_0^∞ d ṽ_s ṽ_s f̃_s0/ṽ_s 1/ω̃_s-ṽ_s ) ,𝔈_33(ϖ, k)= -√() k^2 ∫_0^∞ d ṽ_s ṽ_s f̃_s0/ṽ_s(ω̃_s,ṽ_s), where, to the leading order, ω̃_s≈ϖ/k v_ths. Now expanding (<ref>a) in ω̃_s≫ 1, we find that𝔈_33(ϖ, k)≈k^2 - ∑_s k_Ds^2/ω̃_s^2≈ k^2 (1-ω_pe^2/ϖ^2) ,where we have integrated (<ref>a) by parts, usedidentity∫_-∞^∞dṽ_s∫_0^∞dṽ_s ṽ_sf̃_s0(ṽ_s,ṽ_s) = √(),and neglected the small ion contribution to the dielectric tensor. We conclude that – as expected – the real frequency of such modes is simply the plasma frequency: ϖ≈±ω_pe. This in turn implies thatω̃_e = k_De/√(2) k≫ 1 .In other words, electrostatic modes in this limit are simply plasma oscillationswith wavelengths much greater than the Debye length. We immediately deduce that if ϖ≈ω_pe (without loss of generality, we can consider the mode with ϖ > 0), then∂𝔈_33/∂ω(ϖ, k) ≈2 k^2/ω_pe,which in turn implies that γ is positive if and only if, for some k,𝔈_33(ω_pe, k) > 0.For the electron CE distribution function (<ref>), we havef̃_e0/ṽ_e= - exp(-ṽ_e^2){2 ṽ_e + η_e [(2 ṽ_e^2 - 1) A_e(ṽ_e) - ṽ_e^2/ṽ_e A_e'(ṽ_e) ]+ ϵ_e [ 2 ṽ_e C_e(ṽ_e) (ṽ_e^2-ṽ_e^2/2-1)- ṽ_e/ṽ_e(ṽ_e^2-ṽ_e^2/2)C_e'(ṽ_e) ] } .As shown in appendix <ref>, for a Krook collision operator it follows that (assuming η_e^R = η_e^u = 0)A_e(ṽ_e) = -(ṽ_e^2 - 5/2) , C_e(ṽ_e) = -1 . 
We then see that𝔈_33(ω_pe, k)=√() k^2 [ k_De/√(2) k - η_e (k_De^2/4 k^2 -3/4) (k_De^2/k^2 - 1) - ϵ_e k_De/√(2) k(k_De^2/2 k^2-3/2)] exp(-k_De^2/2 k^2).This expression changes sign from negative to positive when k ≲η_e^1/3k_De, or k ≲ϵ_e^1/2 k_De; thus, plasma waves with sufficientlylong wavelengths are driven unstable by the non-Maxwellian component of theCE distribution function. Physically, this is the bump-on-tail instability; this arises because the distribution function is no longer monotonically decreasing at (parallel) particle velocities v_≳η_e^-1/3 v_the, orv_≳η_e^-1/3 v_the, and so plasma waves can extract energy from particles via the Landau resonance. Substituting (<ref>) into (<ref>), the growth rate of instabilities satisfying k ≪ k_De becomesγ≈ω_pe√()/2 √(2)k_De/k( 1 - η_e k_De^3/2 √(2) k^3-ϵ_e k_De^2/2 k^2) exp(-k_De^2/2 k^2).Maximising this expression with respect to k, it can then be shown that the peak growth rate for CE electron-temperature-gradient-drivenmicroinstabilities (ϵ_e = 0) isγ_max≈3 √()/4η_e^1/3exp(-η_e^-2/3-1) ω_peat the wavenumber k_peak≈η_e^1/3/√(2)[1-η_e^2/3/2] k_De,whereas for CE electron-shear-driven microinstabilities (η_e = 0), γ_max≈√()/2ϵ_e^1/2exp(-ϵ_e^-1-1) ω_peat the wavenumber k_peak≈ϵ_e^1/2/√(2)[1-ϵ_e/2] k_De.§.§ Impossibility of electrostatic instabilities with `fast' growth rates The existence of electrostatic instabilities was demonstrated in appendix (<ref>); however, the growth rates of the exemplified instabilities were shownto be exponentially small in theparameters η_e or ϵ_e. In this appendix,we provide a proof that there cannot exist electrostatic instabilities whose growth rate scales algebraically withη_s or ϵ_s.To substantiate this claim properly, it is necessary to consider perturbations with frequencies ω satisfying ω≪ k_ v_ths and ω≳ k_ v_ths separately. §.§.§ Low-frequency electrostatic modes: ω≪ k_ v_ths The impossibility of low-frequency electrostatic instabilities followsimmediately from equation (<ref>), whichshows that the leading-order term in the ω̃_s≪ 1 expansionof the electrostatic component of the dielectric tensor is non-zero. Itfollows that the electrostatic component of the dielectric tensor is strictly positiveat low frequencies. Since the electrostatic component of the dielectric tensor must vanish in order for the electrostatic dispersion relation (<ref>)to be satisfied, we conclude that there do not exist electrostatic modes with ω≪ k_v_ths, let alone instabilities. §.§.§ Other electrostatic modes: ω≳ k_ v_ths For all other electrostatic perturbations, we suppose that there exist microinstabilities with growth rates which scale algebraicallywith η_s, ϵ_s, and then prove that that such an supposition is incompatible with the hot-plasmaelectrostatic dispersion relation. Consider some unstable perturbation satisfying the electrostatic dispersion relation (<ref>), with complex frequency ω = ϖ +iγ, and γ > 0. 
We then defineϖ̃_s ≡ϖ/k_ v_ths , γ̃_s ≡γ/k_ v_ths ,so that ω̃_s = ϖ̃_s + iγ̃_s.For unstable perturbations satisfying (<ref>), it followsfrom the real and imaginary parts of the dispersion relation that0 = k^2 - ∑_s k_Ds^2 { 1/√() ∫_-∞^∞ d ṽ_s ∫_0^∞ d ṽ_s ∑_n = -∞^∞ [ Π_n(ṽ_s,ṽ_s) ×(ṽ_s - ϖ̃_s+ n/k_ ρ̃_s)J_n(k_ ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ ρ̃_s)^2 + γ̃_s^2 ] } , 0 = γ∑_s k_Ds^2 μ_s^-1/2 { 1/√() ∫_-∞^∞ d ṽ_s ∫_0^∞ d ṽ_s ∑_n = -∞^∞ [ Π_n(ṽ_s,ṽ_s) ×J_n(k_ ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ ρ̃_s)^2 + γ̃_s^2 ] } , where μ_s ≡ m_e/m_s, and we have utilised the fact that the Landau contour simplifies to the real line for unstable perturbations. Using (<ref>b), we caneliminate part of (<ref>a) to give0 = k^2 - ∑_s k_Ds^2 {1/√()∫_-∞^∞dṽ_s∫_0^∞dṽ_s∑_n = -∞^∞[ Π_n(ṽ_s,ṽ_s) ×(ṽ_s + n/k_ρ̃_s)J_n(k_ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ρ̃_s)^2 + γ̃_s^2] }.Next, we substitute for Π_n(ṽ_s,ṽ_s) usingΠ_n(ṽ_s,ṽ_s) =Λ_s(ṽ_s,ṽ_s)+ (ṽ_s+n/k_ρ̃_s) f̃_s0/ṽ_s,to give0 = k^2 - ∑_s k_Ds^2 {1/√()∫_-∞^∞dṽ_s∫_0^∞dṽ_s∑_n = -∞^∞[f̃_s0/ṽ_s(ṽ_s + n/k_ρ̃_s)^2 J_n(k_ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ρ̃_s)^2 + γ̃_s^2+ Λ_s(ṽ_s,ṽ_s) (ṽ_s + n/k_ρ̃_s) J_n(k_ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ρ̃_s)^2 + γ̃_s^2] }.This expression is very helpful for contradicting the premise of the existenceof unstable electrostatic modes. We illustrate this claim with a simple example – a pure Maxwellian distribution function – before considering theCE distribution. For a Maxwellian distribution for which Λ_s(ṽ_s,ṽ_s) =0, and f̃_s0/ṽ_s = - 2 ṽ_sexp(-ṽ_s^2),(<ref>) becomes0 = k^2 + ∑_s k_Ds^2 [ 2/√()∫_-∞^∞dṽ_s∫_0^∞dṽ_sṽ_sexp(-ṽ_s^2)×∑_n = -∞^∞(ṽ_s + n/k_ρ̃_s)^2 J_n(k_ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ρ̃_s)^2 + γ̃_s^2] .The integrand on the right-hand-side of (<ref>) is strictly positive– a contradiction. Therefore, we recover the standard result that there cannotexist unstable perturbations if the underlying distribution is Maxwellian. We now consider the CE distribution (<ref>). In order for an instability to arise, it isclear that the integrand on the right-hand-side of (<ref>)has to be positive – and further, the contribution of the integrand from thatinterval has to dominate all other (negative) contributions to the total integral. Toprove that these conditions cannot be satisfied for the CE distributionfunction, we consider the two terms in the integrand on the right-hand-side of (<ref>)separately. For the first term, f̃_s0/ṽ_s(ṽ_s + n/k_ρ̃_s)^2 J_n(k_ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ρ̃_s)^2 + γ̃_s^2 > 0if and only iff̃_s0/ṽ_s< 0 .For the CE distribution function (<ref>), f̃_s0/ṽ_s= - ṽ_sexp(-ṽ_s^2){2+ η_s [2 ṽ_s A_s(ṽ_s) - ṽ_s/ṽ_s A_s'(ṽ_s) ]+ ϵ_s [ 2 C_s(ṽ_s) (ṽ_s^2-ṽ_s^2/2+1/2)- 1/ṽ_s(ṽ_s^2-ṽ_s^2/2)C_s'(ṽ_s) ] }.Thus, for ṽ_s≲ 1 and ṽ_s≲ 1, we see thatf̃_s0/ṽ_s<0, because η_s, ϵ_s ≪ 1. The only values of ṽ_s where this inequalitycould be reversed are large: ṽ_s≫ 1. Assuming that A_s(ṽ_s) ∼ṽ_s^ι_η and C_s(ṽ_s) ∼ṽ_s^ι_ϵ for ṽ_s≫ 1, where ι_η and ι_ϵare constants, it follows that for ṽ_s≳η_s^-1/(ι_η+1) , ϵ_s^-1/(ι_ϵ+2),the non-Maxwellian terms are comparable to the Maxwellian ones. However, forsuch ṽ_s,f̃_s0/ṽ_s∼η_s^-1/(ι_η+1)exp(-η_s^-2/(ι_η+1)), ϵ_s^-1/(ι_ϵ+1)exp(-ϵ_s^-2/(ι_ϵ+1)) ,while (ṽ_s + n/k_ρ̃_s)^2 J_n(k_ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ρ̃_s)^2 + γ̃_s^2≲ϖ̃_s^2/γ̃_s^2if it is assumed that |ϖ| ≫ |γ|. 
Since we assumed that γ̃_s is only algebraically small in ϵ_s and/or η_s, weconclude that the contribution to the integrand on the right-hand-side of (<ref>) fromṽ_s satisfying (<ref>) is asymptotically small compared to othercontributions, and thus cannot change the sign of the total integral. For the second term, we consider the nth term of the sum independently. Recalling from (<ref>) thatΛ_s(ṽ_s,ṽ_s)= - ṽ_sexp(-ṽ_s^2) [η_s A_s(ṽ_s) - 3 ϵ_s C_s(ṽ_s)ṽ_s],it follows that for ṽ_s∼ 1, Λ_s(ṽ_s,ṽ_s)/f̃_s0/ṽ_s∼η_s/ṽ_s + n/k_ρ̃_s, ϵ_s/ṽ_s + n/k_ρ̃_s.Thus, for ṽ_s∼ 1, the non-Maxwellian term is only comparable tothe Maxwellian one for |ṽ_s + n/k_ρ̃_s| ≲η_s,ϵ_s. However, this non-Maxwellian contribution is in fact always smaller that othernon-Maxwellian contributions, which by (<ref>) are inturn smaller than the equivalent Maxwellian contributions. Depending on the magnitude of |n/k_ρ̃_s|, this claim is justified in two different ways. * |n/k_ρ̃_s| ≲ 1: in this case, let the interval of non-dimensionalised parallel velocities ṽ_s satisfying |ṽ_s + n/k_ρ̃_s| ≲η_s,ϵ_s be denoted by ℐ. Then, there exists another finite interval of ṽ_s∼ 1 such that |ṽ_s + n/k_ρ̃_s|∼ 1. It therefore follows that ∫_ℐ dṽ_sΛ_s(ṽ_s,ṽ_s) (ṽ_s + n/k_ρ̃_s) J_n(k_ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ρ̃_s)^2 + γ̃_s^2∼ η_s^2 ∫_-∞^∞dṽ_sΛ_s(ṽ_s,ṽ_s) (ṽ_s + n/k_ρ̃_s) J_n(k_ρ̃_s ṽ_s)^2/(ṽ_s - ϖ̃_s+ n/k_ρ̃_s)^2 + γ̃_s^2,where we have assumed that |ϖ̃_s| ≫|γ̃_s| (and also |ϖ̃_s| ≳ 1). The claimimmediately follows.* |n/k_ρ̃_s| ≫ 1: in this case, it followsimmediately that |ṽ_s + n/k_ρ̃_s| ≲η_s,ϵ_s if and only if ṽ_s≫ 1. Via a similar argument tothat presented for large ṽ_s for the first term in the integrand on the right-hand-side of(<ref>), contributions to the total integral will beexponentially small in η_s, ϵ_s, and thus are unable to reverse thesign of the total integral. Thus, we have confirmed that there cannot exist electrostatic instabilitieswith growth rates which are algebraic in small parameters η_s, ϵ_s.§ WEAK GROWTH OF HIGH-FREQUENCY PERTURBATIONSIn this appendix, we present an argument that all perturbations in a CE plasma with complex frequency ω = ϖ+iγ satisfying the `high-frequency'conditions |ω| ≳ k_ v_ths and |ϖ| ≫ |γ| for all particle specieshave a growth rate that is at most exponentially small in η_s, and ϵ_s. This argument does not prove that all perturbations satisfying |ω| ≳ k_ v_ths in a CE plasma are stable, in that it does not apply to perturbations whose damping or growth rate is not small compared to their frequency. §.§ Deriving conditions for stability We begin with the result that for any linear electromagnetic perturbation with real frequency ϖ > 0, growth rate γ, wavevector k, and electric-fieldperturbationδE =δEexp{i(𝐤𝐫 - ϖ t)+γ t} ,the dissipation rate 𝔔 of theperturbation is related to the anti-Hermitian part of the plasma dielectrictensor evaluated at the perturbation's real frequency <cit.>:𝔔 = iϖδE^*𝔈^A(k,ϖ) δE,where the anti-Hermitian part 𝔈^A is defined by𝔈^A =1/2(𝔈-𝔈^†),with 𝔈^† representing the conjugate transpose of 𝔈. If the mode is damped, then the dissipation rate is positive: 𝔔 >0. Since 𝔈^A is anti-Hermitian, it is diagonalisable in some orthonormal basis {ê_a,ê_b,ê_c},with imaginary eigenvalues(-iς_a,-iς_b,-iς_c),where ς_a, ς_b, and ς_c are real numbers. 
Thedissipation rate 𝔔 can be written in terms of these eigenvectors as𝔔 = ϖ( ς_a |ê_a δE|^2+ ς_b |ê_b δE|^2 + ς_c |ê_c δE|^2) .Thus, for unstable perturbations to exist, it must be the case that at least one of thenumbers ς_a, ς_b, and ς_c has to be negative (without loss of generality, we will assume ς_a < 0); if this is the case, then the dissipation rate (and hence the growth rate)is a linear function of ς_a. We will show that if |ω| ≳ k_ v_ths,ς_a, ς_b, and ς_c can only be negative ifthey are exponentially small in η_s and ϵ_s. To prove this, consider the characteristic polynomialϱ(ς) ≡[𝔈^A(k,ϖ) - ςI]of 𝔈^A evaluated at the real frequency ϖ and wavevector k; it is a cubic, and thus can be writtenϱ(ς) = -ς^3 - iϱ_2 ς^2 + ϱ_1 ς + iϱ_0 ,where ϱ_0, ϱ_1, and ϱ_2 depend on𝔈^A. Since 𝔈^A has eigenvalues(-iς_a,-iς_b,-iς_c),it follows thatϱ(ς) = -(ς+iς_a) (ς+iς_b) (ς+iς_c)= -ς^3 - iς^2 (ς_a+ς_b+ς_c) + ς(ς_aς_b+ς_bς_c+ς_cς_a) + iς_aς_bς_c ,and soϱ_0 = ς_aς_b ς_c , ϱ_1 = ς_aς_b+ς_bς_c+ς_cς_a ,ϱ_2 = ς_a+ς_b+ς_c . This implies that ς_a, ς_b, and ς_c arepositive if ϱ_0, ϱ_1, and ϱ_2 are positive. Furthermore, ϱ_0, ϱ_1, and ϱ_2can be used to provide bounds for ς_a, ς_b, and ς_cusing an inequality discovered by <cit.>: ς_-≤ς_a, ς_b, ς_c ≤ς_+,whereς_± = -ϱ_2/3±2/3√(ϱ_2^2 - 3 ϱ_1^2).In particular, the expression (<ref>) for the root bounds implies that if ϱ_1 and ϱ_2 areexponentially small in η_s and ϵ_s, then so are ς_a, ς_b, andς_c.We can also evaluate ϱ(ς) in terms of the components of 𝔈^Ain the coordinate basis{x̂,ŷ,ẑ}:ϱ(ς) = -ς^3 + ς^2 (𝔈_xx^A+𝔈_yy^A+𝔈_zz^A) - ς(𝔈_xx^A𝔈_yy^A + 𝔈_yy^A𝔈_zz^A+ 𝔈_zz^A𝔈_xx^A + (𝔈_xy^A)^2 + (𝔈_yz^A)^2 + (𝔈_xz^A)^2) +𝔈^A ,where we have used the symmetries (<ref>) of the dielectric tensor to give ϱ(ς) in terms of only the (six) independent components of 𝔈^A. (<ref>) gives ϱ_0 = - i𝔈^A , ϱ_1 = -𝔈_xx^A 𝔈_yy^A - 𝔈_yy^A 𝔈_zz^A- 𝔈_zz^A 𝔈_xx^A - (𝔈_xy^A)^2-(𝔈_yz^A)^2 - (𝔈_xz^A)^2 ,ϱ_2 = -i (𝔈_xx^A+𝔈_yy^A+𝔈_zz^A) . The anti-Hermiticity of 𝔈^A impliesthat 𝔈_xx^A = - i𝔈_xx^A, 𝔈_yy^A=- i𝔈_yy^A, 𝔈_zz^A= - i𝔈_zz^A,and 𝔈_xz^A= - i𝔈_xz^A, while 𝔈_xy^A = 𝔈_xy^A and 𝔈_yz^A = 𝔈_yz^A, as is indeed necessary for ϱ_0, ϱ_1, and ϱ_2 to be real numbers. Thus, in order to establish stability it issufficient for our purposes to show thati 𝔈^A < 0, 𝔈_xx^A 𝔈_yy^A + 𝔈_yy^A 𝔈_zz^A+ 𝔈_zz^A 𝔈_xx^A + (𝔈_xy^A)^2+(𝔈_yz^A)^2 + (𝔈_xz^A)^2 < 0,i (𝔈_xx^A+𝔈_yy^A+𝔈_zz^A) < 0 . When these inequalities are not strictly satisfied,then we can instead estimate the magnitude of (<ref>b) and(<ref>c) to determine bounds for ς_a, ς_b, andς_c. §.§ Evaluating conditions for stability Combining equations (<ref>) with (<ref>) gives an expression for the general plasma dielectric tensor (assuming k_ > 0 without loss of generality):𝔈=I + ∑_s ω_ps^2/ω^2[ 2/√()∫_-∞^∞dṽ_s ṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s) ẑẑ+ ω̃_s2/√()∫_C_Ldṽ_s∫_0^∞dṽ_sṽ_s^2 Ξ_s(ṽ_s,ṽ_s) ∑_n = -∞^∞R_sn/ζ_sn -ṽ_s] ,where all salient quantities are defined in section <ref>. Nowevaluating the anti-Hermitian part of (<ref>) for ω = ϖ, ω̃_s =ϖ̃_s, we find 𝔈^A = -i∑_s ω_ps^2/ϖ^2[2√()ϖ̃_s∫_0^∞dṽ_sṽ_s^2 ∑_n = -∞^∞Ξ_s(ζ_sn,ṽ_s) R_sn(ζ_sn,ṽ_s) ] .We now consider stability conditions (<ref>) in turn.First evaluating (<ref>c), it can be shown thati(𝔈_xx^A.+ . 
𝔈_yy^A+𝔈_zz^A) = 2√()∑_s ω_ps^2/ϖ^2ϖ̃_s∑_n = -∞^∞{∫_0^∞dṽ_sṽ_s^2 Ξ_s(ζ_sn,ṽ_s) × [n^2 J_n(k_ρ̃_s ṽ_s)^2/k_^2 ρ̃_s^2 ṽ_s^2 + J_n'(k_ρ̃_s ṽ_s)^2+ ζ_sn^2/ṽ_s^2 J_n(k_ρ̃_s ṽ_s)^2] }.It is clear that the right-hand-side (<ref>) isnegative if Ξ_s(ζ_sn,ṽ_s) < 0 .For a Maxwellian distribution, Ξ_s(ζ_sn,ṽ_s) = f̃_s0/ṽ_s(ζ_sn,ṽ_s) = - 2 ṽ_sexp(-ṽ_s^2) exp(-ζ_sn^2) < 0 ,and thus i(𝔈_xx^A+𝔈_yy^A+𝔈_zz^A) <0, as required. For the CE distribution (<ref>), Ξ_s(ṽ_s,ṽ_s) =- ṽ_sexp(-ṽ_s^2){2+ η_s [2 ṽ_s A_s(ṽ_s) - ṽ_s/ṽ_s A_s'(ṽ_s) ]+ ϵ_s [ 2 C_s(ṽ_s) (ṽ_s^2-ṽ_s^2/2+1/2)- 1/ṽ_s(ṽ_s^2-ṽ_s^2/2)C_s'(ṽ_s) ] }- ṽ_s/ω̃_sexp(-ṽ_s^2) [η_s A_s(ṽ_s) - 3 ϵ_s C_s(ṽ_s)ṽ_s].For |ω̃_s| ≳ 1, it is clear for ṽ_s ≲1 that the largest contribution to Ξ_s(ṽ_s,ṽ_s) comesfrom the Maxwellian term; the non-Maxwellian terms areO(η_s,ϵ_s). Thus, for ζ_sn, ṽ_s≲1, Ξ_s(ζ_sn,ṽ_s) < 0. As discussed inappendix (<ref>), for ζ_sn≫ 1,the sign of Ξ_s(ζ_sn,ṽ_s) < 0 can in principle bereversed. However, the magnitude of Ξ_s(ζ_sn,ṽ_s) is exponentially small for such ζ_sn, and thus so is ϱ_2. The remaining conditions (<ref>a) and (<ref>b) are much more tedious totreat; thus for simplicity, we explicitly consider only the case when a single particle species provides the dominant contribution to the dielectric tensor. Under this assumption, it can be shown that𝔈_xx^A𝔈_yy^A + 𝔈_yy^A𝔈_zz^A+𝔈_zz^A𝔈_xx^A + (𝔈_xy^A)^2+(𝔈_yz^A)^2 + (𝔈_xz^A)^2= 2 ω_ps^4/ϖ^4ϖ̃_s^2 ∑_m = -∞^∞∑_n = -∞^∞{∫_0^∞dṽ_s^(1)∫_0^∞dṽ_s^(2) ṽ_s^(1)ṽ_s^(2) ×[ Ξ_s(ζ_sm,ṽ_s^(1)) Ξ_s(ζ_sn,ṽ_s^(2)) 𝔄 _mn(α_s, ṽ_s^(1),ṽ_s^(2)) ] },where α_s ≡ k_ρ̃_s and𝔄 _mn (α_s,ṽ_s^(1),ṽ_s^(2))≡ 1/α_s^2[m ṽ_s^(2) J_m(α_s ṽ_s^(1)) J_n'(α_s ṽ_s^(2)) - n ṽ_s^(1) J_m'(α_s ṽ_s^(1)) J_n(α_s ṽ_s^(2)) ]^2+ 1/α_s^2[m ζ_snṽ_s^(2) J_m(α_s ṽ_s^(1)) J_n'(α_s ṽ_s^(2)) - n ζ_smṽ_s^(1) J_m'(α_s ṽ_s^(1)) J_n(α_s ṽ_s^(2)) ]^2+[ζ_snṽ_s^(2) J_m(α_s ṽ_s^(1)) J_n'(α_s ṽ_s^(2)) - ζ_smṽ_s^(1) J_m'(α_s ṽ_s^(1)) J_n(α_s ṽ_s^(2)) ]^2 .Being a sum of positive terms, 𝔄 _mn is positive for all nand m, and thus we again conclude that the integrand on the right-hand side of (<ref>)is negative if Ξ_s(ζ_sm,ṽ_s) < 0 and Ξ_s(ζ_sn,ṽ_s) <0. Via similar reasoning to that applied to ϱ_2 in the previous paragraph, it follows that forthe CE distribution function, the only way in which this condition can beviolated is for either ζ_sm≫ 1 or ζ_sn≫ 1 – both ofwhich give rise to exponentially small terms. Thus, either ϱ_1 > 0 or ϱ_1is exponentially small in η_s and ϵ_s. Finally, for (<ref>a), it is necessary to evaluate𝔈^A; this becomes (after much tedious algebra)𝔈^A= -4/3i^3/2ω_ps^6/ϖ^6ϖ̃_s^3 × ∑_m = -∞^∞∑_n = -∞^∞∑_l = -∞^∞{∫_0^∞dṽ_s^(1)∫_0^∞dṽ_s^(2)∫_0^∞dṽ_s^(3) ṽ_s^(1)ṽ_s^(2)ṽ_s^(3)× [ Ξ_s(ζ_sm,ṽ_s^(1)) Ξ_s(ζ_sn,ṽ_s^(2)) Ξ_s(ζ_sl,ṽ_s^(3)) 𝔅 _mnl(α_s, ṽ_s^(1),ṽ_s^(2),ṽ_s^(3)) ] },where𝔅 _mnl (α_s,ṽ_s^(1),ṽ_s^(2),ṽ_s^(3))≡ {m J_m(α_s ṽ_s^(1))[ṽ_s^(1)ζ_snJ_n(α_s ṽ_s^(2)) J_l'(α_s ṽ_s^(3)) - ṽ_s^(3)ζ_slJ_n'(α_s ṽ_s^(2)) J_l(α_s ṽ_s^(3))] + n J_n(α_s ṽ_s^(1))[ṽ_s^(2)ζ_slJ_l(α_s ṽ_s^(2)) J_m'(α_s ṽ_s^(3)) - ṽ_s^(1)ζ_smJ_l'(α_s ṽ_s^(2)) J_m(α_s ṽ_s^(3))]+ l J_l(α_s ṽ_s^(1))[ṽ_s^(3)ζ_smJ_m(α_s ṽ_s^(2)) J_n'(α_s ṽ_s^(3)).. - ṽ_s^(2)ζ_snJ_m'(α_s ṽ_s^(2)) J_n(α_s ṽ_s^(3)) ] }^2 .Similarly to 𝔄 _mn, 𝔅 _mnl is strictlypositive for all m, n and l, meaning that the integrandon the right-hand side of (<ref>)is negative if Ξ_s(ζ_sm,ṽ_s) < 0, Ξ_s(ζ_sn,ṽ_s) <0, and Ξ_s(ζ_sl,ṽ_s) < 0. For the CE distribution,exactly the same argument as before applies to show that either ϱ_0 >0 or it is exponentially small. 
In summary, we have now verified that the only situation in which the stabilityconditions (<ref>) are not satisfied are those for whichϱ_0, ϱ_1 and ϱ_2 are exponentially small in η_s and ϵ_s. In the latter case, considerations ofbounds (<ref>) and (<ref>)implies that ς_a, ς_b, and ς_c are also allexponentially small in η_s and ϵ_s. The claim of the appendixfollows. § PROPERTIES OF LEADING-ORDER EXPANSION 𝔈^(0) OF DIELECTRIC TENSOR (<REF>) IN Ω̃_S≪ 1 FOR A WEAKLY ANISOTROPIC DISTRIBUTION FUNCTION§.§ Symmetries of 𝔈_s^(0) in coordinate basis {x̂,ŷ,ẑ} In this appendix, we show that the leading-order expansion 𝔈_s^(0) [cf. (<ref>a)] of the dielectric tensor 𝔈_s of species s [cf. (<ref>)] in ω̃_s≪ 1 arising in a non-relativistic plasma with only weak anisotropy of its particle distribution function obeys additional symmetries(<ref>), viz., (𝔈_s^(0))_xz = - k_/k_ (𝔈_s^(0))_xx ,(𝔈_s^(0))_yz = k_/k_ (𝔈_s^(0))_xy ,(𝔈_s^(0))_zz = k_^2/k_^2 (𝔈_s^(0))_xx .when k ρ_s ∼ 1.The term `weak anisotropy' means that the magnitude of angular anisotropy – mathematically represented by the function Λ_s defined by (<ref>) – satisfiesΛ_s ≲ω̃_s for all particle species when ṽ_s ∼1. We begin the proof by substituting (<ref>) into (<ref>) to give 𝔈_s≡ ω_ps^2/ω^2[ 2/√()k_/|k_|∫_-∞^∞dṽ_s ṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s) ẑẑ+ ω̃_s2/√()∫_C_Ldṽ_s∫_0^∞dṽ_sṽ_s^2 Ξ_s(ṽ_s,ṽ_s) ∑_n = -∞^∞R_sn/ζ_sn -ṽ_s] .Then, under the assumed ordering ω̃_s∼Λ_s,the function Ξ_s defined by (<ref>) satisfies Ξ_s ∼ 1 for ṽ_s ∼ 1; therefore, 𝔈_s has order-unity elements as ω̃_s→ 0. Let us expand 𝔈_s in a Taylor series around ω̃_s = 0:𝔈_s = ω̃_s𝔈_s^(0)+ δ𝔈_s ,where δ𝔈_s = O(ω̃_s^2), and the matrix elements of 𝔈_s^(0)are given below:(𝔈_s^(0) )_xx ≡ -2 ω_ps^2/√() ω^2 ∑_n=-∞^∞ [ n^2/k_^2 ρ̃_s^2∫_C_L d ṽ_s/ṽ_s+n/|k_| ρ̃_s ×∫_0^∞ d ṽ_s Ξ_s(ṽ_s,ṽ_s) J_n(k_ ρ̃_s ṽ_s)^2 ] , (𝔈_s^(0) )_xy ≡-2 i ω_ps^2/√()ω^2 ∑_n=-∞^∞ [ n/k_ ρ̃_s ∫_C_L d ṽ_s/ṽ_s+n/|k_| ρ̃_s . . ×∫_0^∞ d ṽ_s ṽ_s Ξ_s(ṽ_s,ṽ_s) J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) ] ,(𝔈_s^(0) )_xz ≡-2 ω_ps^2/√()ω^2 ∑_n=-∞^∞ [ n/k_ ρ̃_s∫_C_L ṽ_s d ṽ_s/ṽ_s+n/|k_| ρ̃_s×∫_0^∞ d ṽ_s Ξ_s(ṽ_s,ṽ_s) J_n(k_ ρ̃_s ṽ_s)^2 ] ,(𝔈_s^(0) )_yx ≡- (𝔈_s^(0) )_xy , (𝔈_s^(0) )_yy ≡-2 ω_ps^2/√()ω^2 ∑_n=-∞^∞ [ ∫_C_L d ṽ_s/ṽ_s+n/|k_| ρ̃_s ×∫_0^∞ d ṽ_s ṽ_s^2 Ξ_s(ṽ_s,ṽ_s) J_n'(k_ ρ̃_s ṽ_s)^2 ] ,(𝔈_s^(0) )_yz ≡-2 i ω_ps^2/√()ω^2 ∑_n=-∞^∞[∫_C_L ṽ_s d ṽ_s/ṽ_s+n/|k_| ρ̃_s . . ×∫_0^∞ d ṽ_s ṽ_s Ξ_s(ṽ_s,ṽ_s) J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) ] ,(𝔈_s^(0) )_zx ≡(𝔈_s^(0) )_xz , (𝔈_s^(0) )_zy ≡-(𝔈 _s^(0) )_yz , (𝔈_s^(0) )_zz ≡2 ω_ps^2/√() ω̃_s ω^2 ∫_-∞^∞ d ṽ_s ṽ_s ∫_0^∞ d ṽ_s Λ_s(ṽ_s,ṽ_s)- 2 ω_ps^2/√()ω^2 ∑_n=-∞^∞ ∫_C_L ṽ_s^2 d ṽ_s/ṽ_s+n/|k_| ρ̃_s ∫_0^∞ d ṽ_s Ξ_s(ṽ_s,ṽ_s) J_n(k_ ρ̃_s ṽ_s)^2 .Next, noting thatṽ_s/ṽ_s+n/|k_| ρ̃_s = 1 - n/|k_| ρ̃_sṽ_s/ṽ_s+n/|k_| ρ̃_s,as well as ∑_n=-∞^∞n/k_ρ̃_s∫_C_Ldṽ_s∫_0^∞dṽ_sΞ_s(ṽ_s,ṽ_s) J_n(k_ρ̃_s ṽ_s)^2= 0,we see that the double integral in (<ref>c) can be rearranged to give(𝔈_s^(0))_xz=2 ω_ps^2/√()ω^2∑_n=-∞^∞[ n^2/ |k_| k_ρ̃_s^2∫_C_Ldṽ_s/ṽ_s+n/|k_| ρ̃_s ×∫_0^∞dṽ_sΞ_s(ṽ_s,ṽ_s) J_n(k_ρ̃_s ṽ_s)^2 ] , = -k_/|k_|(𝔈_s^(0))_xx .Similarly, it can be shown that(𝔈_s^(0))_yz=2 iω_ps^2/√()ω^2∑_n=-∞^∞[ n/|k_| ρ̃_s∫_C_Ldṽ_s/ṽ_s+n/|k_| ρ̃_s.. 
×∫_0^∞dṽ_s ṽ_sΞ_s(ṽ_s,ṽ_s) J_n(k_ρ̃_s ṽ_s) J_n'(k_ρ̃_s ṽ_s) ] ,=k_/|k_| (𝔈_s^(0))_xy .Finally, (𝔈_s^(0))_zz can also be written in terms of (𝔈_s^(0))_xx: because ṽ_s^2/ṽ_s+n/|k_| ρ̃_s = ṽ_s - n/|k_| ρ̃_s + n^2/|k_| ^2ρ̃_s^21/ṽ_s+n/|k_| ρ̃_s,it follows that(𝔈_s^(0))_zz=k_^2/k_^2(𝔈_s^(0))_xx +2 ω_ps^2/√()ω̃_sω^2∫_-∞^∞dṽ_sṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s) - 2 ω_ps^2/√()ω^2∑_n=-∞^∞∫_-∞^∞ṽ_sdṽ_s∫_0^∞dṽ_sΞ_s(ṽ_s,ṽ_s) J_n(k_ρ̃_s ṽ_s)^2+ 2 ω_ps^2/√()ω^2∑_n=-∞^∞n/k_ρ̃_s∫_-∞^∞dṽ_s∫_0^∞dṽ_sΞ_s(ṽ_s,ṽ_s) J_n(k_ρ̃_s ṽ_s)^2 =k_^2/k_^2(𝔈_s^(0))_xx - 2 ω_ps^2/√()ω^2∫_-∞^∞dṽ_s∫_0^∞dṽ_s ṽ_s∂f̃_s0/∂ṽ_s+ 2 ω_ps^2/√()ω̃_sω^2∫_-∞^∞dṽ_sṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s) [1-∑_n=-∞^∞ J_n(k_ρ̃_s ṽ_s)^2 ]=k_^2/k_^2(𝔈_s^(0))_xx + 2 ω_ps^2/√()ω^2∫_-∞^∞dṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s) ,where we have used the identity∑_n=-∞^∞ J_n(k_ρ̃_s ṽ_s)^2 = 1 .Thus, we conclude that since the anisotropy is assumed small, (𝔈_s^(0))_zz = k_^2/k_^2(𝔈_s^(0))_xx+ O(ω̃_s),completing the proof. §.§ Evaluating the dielectric tensor in coordinate basis {e_1,e_2,e_3} To demonstrate that the components of the dielectric tensor 𝔈_s^(0) are given by(<ref>), viz., (𝔈_s^(0))_11 =k^2/k_^2 (𝔈_s^(0))_xx,(𝔈_s^(0))_12 = -(𝔈_s^(0))_21 = k/k_(𝔈_s^(0))_xy, (𝔈_s^(0))_22 =(𝔈_s^(0))_yy , we use (<ref>) to express 𝔈_s^(0)in the form𝔈_s^(0)= (𝔈_s^(0))_xxx̂x̂+ (𝔈_s^(0))_xy(x̂ŷ-ŷx̂) + (𝔈_s^(0))_yyŷŷ- k_/|k_| (𝔈_s^(0))_xx(x̂ẑ+ẑx̂) + k_/|k_| (𝔈_s^(0))_xy(ŷẑ-ẑŷ) + k_^2/k_^2 (𝔈_s^(0))_xxẑẑ .Noting that k̂ = k_/k x̂ +k_/k ẑ , ŷ ×k̂ = k_/k x̂ -k_/k ẑ,we can rewrite (<ref>) as𝔈_s^(0)=k^2/k_^2 (𝔈_s^(0))_xx(ŷ×k̂) (ŷ×k̂) + k/|k_| (𝔈_s^(0))_xy[ (ŷ×k̂)ŷ-ŷ(ŷ×k̂)] + (𝔈_s^(0))_yyŷŷ =k^2/k_^2 (𝔈_s^(0))_xxe_1 e_1 + k/|k_| (𝔈_s^(0))_xy(e_1e_2 - e_2 e_1 ) + (𝔈_s^(0))_yye_2 e_2 ,leading to the desired results (<ref>). In addition, we see that 𝔈_s^(0)·k̂ = 0; thus, the results (<ref>) claimingthat certain components of𝔈_s are small in ω̃_sare justified. § DIELECTRIC TENSOR COMPONENTS FOR THE CE DISTRIBUTION FUNCTION (<REF>) In this appendix, we calculate the components of the dielectric tensor arisingfrom the CE distribution function (<ref>), with isotropic functions A_e^T(ṽ_e), A_e^R(ṽ_e), A_e^u(ṽ_e), C_e(ṽ_e), A_i(ṽ_i) andC_i(ṽ_i) chosen as appropriate for a Krook collision operator (see appendix <ref>), viz., A_e^T(ṽ_e) = -(ṽ_e^2 - 5/2) , A_e^R(ṽ_e) = -1 , A_e^u(ṽ_e) = 0 , A_i(ṽ_i) = -(ṽ_i^2 - 5/2) , C_e(ṽ_e) = -1 , C_i(ṽ_i) = -1 . This, via (<ref>), allows for the dielectrictensor 𝔈_sto be calculated order by order in ω̃_s.We carry out these calculations in the case of non-relativistic fluctuations,and so𝔈≈4 i/ωσ = ∑_s 𝔈_s ,where we remind the reader that [cf. (<ref>)]𝔈_s =ω_ps^2/ω^2[2/√()k_/|k_|∫_-∞^∞dṽ_s ṽ_s∫_0^∞dṽ_sΛ_s(ṽ_s,ṽ_s) ẑẑ+ ω̃_s2/√()∫_C_Ldṽ_s∫_0^∞dṽ_sṽ_s^2 Ξ_s(ṽ_s,ṽ_s) ∑_n = -∞^∞R_sn/ζ_sn -ṽ_s] , ζ_sn≡ω̃_s - n/|k_| ρ̃_s , f̃_s0(ṽ_s,ṽ_s) ≡^3/2 v_ths^3/n_s0 f_s0(k_/|k_| v_thsṽ_s,v_thsṽ_s) , Λ_s(ṽ_s,ṽ_s) ≡ṽ_sf̃_s0/ṽ_s-ṽ_sf̃_s0/ṽ_s, Ξ_s(ṽ_s,ṽ_s) ≡f̃_s0/ṽ_s + Λ_s(ṽ_s,ṽ_s)/ω̃_s ,and (R_sn )_xx ≡n^2 J_n(k_ ρ̃_s ṽ_s)^2/k_^2 ρ̃_s^2 ṽ_s^2 ,(R_sn )_xy ≡i n J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s)/k_ ρ̃_s ṽ_s ,(R_sn )_xz ≡n J_n(k_ ρ̃_s ṽ_s)^2/k_ ρ̃_s ṽ_s k_ ṽ_s/|k_| ṽ_s, (R_sn )_yx ≡- (R_sn )_xy (R_sn )_yy ≡J_n'(k_ ρ̃_s ṽ_s)^2 , (R_sn )_yz ≡i n J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) k_ ṽ_s/|k_| ṽ_s ,(R_sn )_zx ≡(R_sn )_xz (R_sn )_zy ≡-(R_sn )_yz (R_sn )_zz ≡ṽ_s^2/ṽ_s^2 J_n(k_ ρ̃_s ṽ_s)^2 . 
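Before transforming coordinate bases, it is convenient to record a small numerical helper for the resonant kernel just introduced. The sketch below (Python/scipy; the function and variable names are ours, and only the unambiguous transverse block of R_sn is encoded) evaluates ζ_sn and the xx, xy and yy components listed above, with b denoting the Bessel argument k_⊥ρ̃_s ṽ_⊥.

```python
import numpy as np
from scipy.special import jv, jvp

def zeta_sn(omega_par, n, kpar_rho):
    """Resonance argument zeta_sn = omega/(k_par v_ths) - n/(|k_par| rho_tilde_s)."""
    return omega_par - n / abs(kpar_rho)

def R_transverse(n, b):
    """Transverse (x, y) block of R_sn; b = k_perp * rho_tilde_s * v_perp."""
    Jn, Jnp = jv(n, b), jvp(n, b)
    Rxx = n**2 * Jn**2 / b**2
    Rxy = 1j * n * Jn * Jnp / b      # and R_yx = -R_xy
    Ryy = Jnp**2
    return np.array([[Rxx, Rxy], [-Rxy, Ryy]])

print(zeta_sn(0.05, 1, 0.8))
print(R_transverse(1, 0.7))
```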
The components of the dielectric tensor 𝔈_s in coordinate basis {e_1,e_2,e_3}are related to the components in coordinate basis {x̂,ŷ,ẑ}by (𝔈_s)_11 = k_^2/k^2 (𝔈_s)_xx - 2 k_ k_/k^2 (𝔈_s)_xz + k_^2/k^2 (𝔈_s)_zz,(𝔈_s)_12 = k_/k(𝔈_s)_xy + k_/k(𝔈_s)_yz, (𝔈_s)_13 = k_ k_/k^2 [(𝔈_s)_xx - (𝔈_s)_zz] + (k_^2/k^2- k_^2/k^2) (𝔈_s)_xz ,(𝔈_s)_21 = -(𝔈_s)_12 ,(𝔈_s)_22 = (𝔈_s)_yy ,(𝔈_s)_23 = -k_/k(𝔈_s)_xy + k_/k(𝔈_s)_yz ,(𝔈_s)_31 = (𝔈_s)_13 ,(𝔈_s)_32 = -(𝔈_s)_23 ,(𝔈_s)_33 = k_^2/k^2 (𝔈_s)_xx + 2 k_ k_/k^2 (𝔈_s)_xz + k_^2/k^2 (𝔈_s)_zz .For clarity, we calculate separately the Maxwelliancontribution M_s of the total CE distribution function and the non-Maxwellian contribution P_s associated with the CEelectron friction, temperature-gradient, and shear terms to 𝔈_s – viz., we decompose 𝔈_s as follows [cf. (<ref>)]:𝔈_s = ω_ps^2/ω^2(M_s + P_s ).§.§ Maxwellian distribution§.§.§ General dielectric tensorConsider a non-dimensionalised Maxwellian distribution function:f̃_s(ṽ_s,ṽ_s) = exp(-ṽ_s^2) .The Maxwellian is isotropic, so (<ref>) givesΛ_s(ṽ_s,ṽ_s) = 0,while (<ref>) becomesΞ_s(ṽ_s,ṽ_s) = - 2 ṽ_sexp(-ṽ_s^2) .Substituting this into (<ref>) gives(M_s)_xx =4/√() ω̃_s ∑_n=-∞^∞ [ n^2/k_^2 ρ̃_s^2∫_C_L exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) ] ,(M_s)_xy = 4 i/√() ω̃_s ∑_n=-∞^∞ [ n/k_ ρ̃_s ∫_C_L exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^2 J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) exp(-ṽ_s^2) ] ,(M_s)_xz = 4/√() ω̃_s ∑_n=-∞^∞ [ n/k_ ρ̃_s∫_C_L ṽ_s exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) ] ,(M_s)_yx = (M_s)_xy , (M_s)_yy = 4/√() ω̃_s ∑_n=-∞^∞ [ ∫_C_L exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^3 J_n'(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2)] ,(M_s)_yz = -4 i/√() ω̃_s ∑_n=-∞^∞ [ ∫_C_L ṽ_s exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^2 J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) exp(-ṽ_s^2)] , (M_s)_zx = (M_s)_xz , (M_s)_zy = -(M_s)_yz ,(M_s)_zz = 4/√() ω̃_s ∑_n=-∞^∞ [ ∫_C_L ṽ_s^2 exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) ] . Using the integral identities1/√()∫_C_L u exp(-u^2) du/u-z= 1 + z Z(z), 1/√()∫_C_L u^2 exp(-u^2) du/u-z= z[1 + z Z(z)] , involving the plasma dispersion function, and the identities∫_0^∞ d t t J_n(αt)^2 exp(-t^2)= 1/2exp(-α^2/2) I_n(α^2/2) , ∫_0^∞ d t t^2 J_n(αt) J_n'(αt) exp(-t^2)= α/4exp(-α^2/2) [I_n'(α^2/2)-I_n(α^2/2)] , ∫_0^∞ d t t^3 J_n'(αt)^2 exp(-t^2)= 1/4exp(-α^2/2){2n^2/α^2 I_n(α^2/2) .. - α^2 [I_n'(α^2/2)-I_n(α^2/2)] } , involving Bessel functions (here α a real number), we obtain expressions for the dielectric components (<ref>)in terms of special functions:(M_s)_xx =2ω̃_s ∑_n=-∞^∞ n^2/k_^2 ρ̃_s^2 Z(ζ_sn) exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s)_xy = i ω̃_s ∑_n=-∞^∞ n Z(ζ_sn) exp(-k_^2 ρ̃_s^2/2) [I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(M_s)_xz = 2 ω̃_s ∑_n=-∞^∞ n/k_ ρ̃_s [1+ζ_sn Z(ζ_sn)] exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s)_yx = (M_s)_xy, (M_s)_yy = ω̃_s ∑_n=-∞^∞ Z(ζ_sn) ×exp(-k_^2 ρ̃_s^2/2) [( 2 n^2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2) I_n(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2 I_n'(k_^2 ρ̃_s^2/2) ] ,(M_s)_yz = -i ω̃_s ∑_n=-∞^∞ k_ ρ̃_s [1+ζ_sn Z(ζ_sn)] ×exp(-k_^2 ρ̃_s^2/2)[I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(M_s)_zx = (M_s)_xz , (M_s)_zy = -(M_s)_yz ,(M_s)_zz = 2 ω̃_s ∑_n=-∞^∞ ζ_sn [1+ζ_sn Z(ζ_sn)] exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) .The components of the dielectric tensor (<ref>) in coordinate basis {e_1,e_2,e_3} then follow from(<ref>), though we do not write these outexplicitly. 
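The sums over n in these expressions converge rapidly and are straightforward to evaluate. As an illustration, the Python sketch below (function names and truncation order are our own choices; the frequency argument is taken to be ω/(k_∥ v_ths), consistent with the definition of ζ_sn) evaluates the Maxwellian components (M_s)_xx, (M_s)_xy and (M_s)_zz using the Faddeeva function for Z and exponentially scaled modified Bessel functions.

```python
import numpy as np
from scipy.special import wofz, ive

def Z(z):
    # plasma dispersion function, Z(z) = i*sqrt(pi)*w(z), with w the Faddeeva function
    return 1j * np.sqrt(np.pi) * wofz(z)

def maxwellian_M_components(wt, kpar_rho, kperp_rho, nmax=100):
    """(M_s)_xx, (M_s)_xy, (M_s)_zz from the expressions above.

    wt        : omega / (k_par v_ths)  (assumption: the parallel-normalised frequency)
    kpar_rho  : k_par  * rho_tilde_s
    kperp_rho : k_perp * rho_tilde_s
    """
    a = 0.5 * kperp_rho**2
    n = np.arange(-nmax, nmax + 1)
    zeta = wt - n / abs(kpar_rho)
    In = ive(n, a)                                 # exp(-a) I_n(a)
    Inp = 0.5 * (ive(n - 1, a) + ive(n + 1, a))    # exp(-a) I_n'(a)
    Zn = Z(zeta)
    Mxx = 2 * wt * np.sum(n**2 / kperp_rho**2 * Zn * In)
    Mxy = 1j * wt * np.sum(n * Zn * (Inp - In))
    Mzz = 2 * wt * np.sum(zeta * (1 + zeta * Zn) * In)
    return Mxx, Mxy, Mzz

print(maxwellian_M_components(0.05 + 0.0j, 0.8, 1.3))
```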
§.§.§ Dielectric tensor in low-frequency limit, {x̂,ŷ,ẑ} coordinate frame Now, to consider the low-frequency limit ω̃_s≪ 1, we Taylor expand(<ref>) in ω̃_s. Noting that ω̃_sonly appears via the argument ζ_sn = ω̃_s - n/|k_|ρ̃_s, we use the differential identity Z'(z) = -2[1 + z Z(z)] to obtain the expansions Z(ζ_sn) = Z(-n/|k_|ρ̃_s) - 2 ω̃_s [1 - n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)]+ O(ω̃_s^2) ,1+ ζ_sn Z(ζ_sn) = 1 -n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s) + ω̃_s [(1- 2 n^2/|k_|^2 ρ̃_s^2 )Z(-n/|k_| ρ̃_s)+ 2 n/|k_| ρ̃_s ] + O(ω̃_s^2) , ζ_sn [1+ζ_sn Z(ζ_sn)] =- n/|k_| ρ̃_s [1 - n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)]+ ω̃_s [1- 2 n^2/|k_|^2 ρ̃_s^2 .. - 2 n/|k_| ρ̃_s (1- n^2/|k_|^2 ρ̃_s^2 )Z(-n/|k_| ρ̃_s) ] + O(ω̃_s^2) . Then, expanding the dielectric tensor asM_s = ω̃_sM_s^(0) + ω̃_s^2M_s^(1) + O(ω̃_s^3) ,we have (M_s^(0))_xx = 2 ∑_n=-∞^∞ n^2/k_^2 ρ̃_s^2 Z(-n/|k_|ρ̃_s) exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s^(0))_xy = i ∑_n=-∞^∞ n Z(-n/|k_| ρ̃_s) ×exp(-k_^2 ρ̃_s^2/2) [I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(M_s^(0))_xz = 2 ∑_n=-∞^∞ n/k_ ρ̃_s [1 -n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s^(0))_yy = ∑_n=-∞^∞ Z(-n/|k_| ρ̃_s) ×exp(-k_^2 ρ̃_s^2/2) [( 2 n^2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2) I_n(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2 I_n'(k_^2 ρ̃_s^2/2) ] ,(M_s^(0))_yz = i ∑_n=-∞^∞ k_ ρ̃_s [1 -n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2)[I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(M_s^(0))_zz = -2 ∑_n=-∞^∞ n/|k_| ρ̃_s [1 - n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,and (M_s^(1))_xx =-4 ∑_n=-∞^∞ n^2/k_^2 ρ̃_s^2 [1 - n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s^(1))_xy = -2i ∑_n=-∞^∞ n [1 - n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) [I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(M_s^(1))_xz = 2 ∑_n=-∞^∞ n/k_ ρ̃_s [(1- 2 n^2/|k_|^2 ρ̃_s^2 )Z(-n/|k_| ρ̃_s) + 2 n/|k_| ρ̃_s ] ×exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s^(1))_yy = -2 ∑_n=-∞^∞ [1 - n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) [( 2 n^2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2) I_n(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2 I_n'(k_^2 ρ̃_s^2/2) ] ,(M_s^(1))_yz = -i ∑_n=-∞^∞ k_ ρ̃_s[(1- 2 n^2/|k_|^2 ρ̃_s^2 )Z(-n/|k_| ρ̃_s) + 2 n/|k_| ρ̃_s ] ×exp(-k_^2 ρ̃_s^2/2)[I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(M_s^(1))_zz = 2 ∑_n=-∞^∞ [1- 2 n^2/|k_|^2 ρ̃_s^2 - 2 n/|k_| ρ̃_s (1- n^2/|k_|^2 ρ̃_s^2 )Z(-n/|k_| ρ̃_s) ] ×exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) . These expressions can be simplified somewhat using two further types of algebraic manipulation.First, for z a real number, we can split the plasma dispersion into real and imaginary partsasZ(z) =1/√()𝒫∫_-∞^∞exp(-u^2)du/u-z+ i√()exp(-z^2)= Z(z) +i√()exp(-z^2).Thus, we see that the real part of Z(z) is an odd function for real z, while the imaginary part is an even function. As a consequence,only one of the real or imaginary parts of the plasma dispersion function willenter into the summations in (<ref>) and (<ref>). 
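Both properties are easy to verify with a numerical implementation of Z. The sketch below (Python/scipy; the sample points are arbitrary) builds Z from the Faddeeva function, checks the derivative identity Z'(z) = −2[1 + zZ(z)] by finite differences, and confirms that, for real argument, the real part of Z is odd while the imaginary part equals √π exp(−z²) and is even.

```python
import numpy as np
from scipy.special import wofz

def Z(z):
    # plasma dispersion function via the Faddeeva function w(z)
    return 1j * np.sqrt(np.pi) * wofz(z)

# derivative identity Z'(z) = -2 [1 + z Z(z)], checked by central differences
z0, h = 0.7 + 0.2j, 1e-6
dZ = (Z(z0 + h) - Z(z0 - h)) / (2 * h)
print(abs(dZ + 2 * (1 + z0 * Z(z0))))                                # ~ 1e-9

# parity on the real axis: Re Z odd, Im Z = sqrt(pi) exp(-z^2) even
x = np.linspace(0.1, 3.0, 7)
print(np.max(np.abs(Z(x).real + Z(-x).real)))                         # ~ 0
print(np.max(np.abs(Z(x).imag - np.sqrt(np.pi) * np.exp(-x**2))))     # ~ 0
```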
Secondly, we utilise the generating function of themodified Bessel function, viz., ∑_n = -∞^∞ I_n(α) t^n = exp[α/2(t+1/t)],to deduce the following identities:∑_n = -∞^∞ I_n(α) = exp(α), ∑_n = -∞^∞ n^2 I_n(α) = αexp(α), ∑_n = -∞^∞ [ I_n'(α) - I_n(α) ] =0 , ∑_n = -∞^∞ n^2 [I_n'(α)- I_n(α)] = exp(α) .Combining these results, we obtain from (<ref>) and (<ref>) the following expressions for the componentsof M_s^(0) and M_s^(1): (M_s^(0))_xx =4 i √() ∑_m=1^∞ m^2/k_^2 ρ̃_s^2 exp(-m^2/k_^2 ρ̃_s^2) exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2)= i F(k_ ρ̃_s,k_ ρ̃_s),(M_s^(0))_xy = -i ∑_m=-∞^∞ m [Z(m/|k_| ρ̃_s)] exp(-k_^2 ρ̃_s^2/2) [I_m'(k_^2 ρ̃_s^2/2)-I_m(k_^2 ρ̃_s^2/2) ]= -i G(k_ ρ̃_s,k_ ρ̃_s),(M_s^(0))_xz = -4 i √() ∑_m=-∞^∞ m^2/k_ |k_| ρ̃_s^2 exp(-m^2/k_^2 ρ̃_s^2) exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) = -i k_/|k_| F(k_ ρ̃_s,k_ ρ̃_s),(M_s^(0))_yy = i √() ∑_m=-∞^∞exp(-m^2/k_^2 ρ̃_s^2) ×exp(-k_^2 ρ̃_s^2/2) [( 2 m^2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2) I_m(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2 I_m'(k_^2 ρ̃_s^2/2) ]= iH(k_ ρ̃_s,k_ ρ̃_s),(M_s^(0))_yz = -i ∑_m=-∞^∞ m k_/|k_| [Z(m/|k_| ρ̃_s)] exp(-k_^2 ρ̃_s^2/2) [I_m'(k_^2 ρ̃_s^2/2)-I_m(k_^2 ρ̃_s^2/2) ]= -i k_/|k_| G(k_ ρ̃_s,k_ ρ̃_s),(M_s^(0))_zz =4 i √() ∑_m=1^∞ m^2/k_^2 ρ̃_s^2 exp(-m^2/k_^2 ρ̃_s^2) exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2)= i k_^2 /k_^2 F(k_ ρ̃_s,k_ ρ̃_s),and (M_s^(1))_xx =-2 {1 + ∑_m=-∞^∞ 2 m^3/|k_| k_^2 ρ̃_s^3 [Z(m/|k_| ρ̃_s)] . . ×exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2) } =-4/3 W(k_ ρ̃_s,k_ ρ̃_s) ,(M_s^(1))_xy = 4 √() ∑_m=1^∞ m^2/|k_| ρ̃_s exp(-m^2/k_^2 ρ̃_s^2) ×exp(-k_^2 ρ̃_s^2/2) [I_m'(k_^2 ρ̃_s^2/2)-I_m(k_^2 ρ̃_s^2/2) ],(M_s^(1))_xz = 2 {k_/|k_| + ∑_m=-∞^∞ (2 m^3/|k_|^2 k_ ρ̃_s^3 -m/k_ ρ̃_s)[Z(m/|k_| ρ̃_s)] . . ×exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2) } ,(M_s^(1))_yy = -2 { 1 + ∑_m=-∞^∞ m/|k_| ρ̃_s [Z(m/|k_| ρ̃_s)] . ×. exp(-k_^2 ρ̃_s^2/2) [( 2 m^2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2) I_m(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2 I_m'(k_^2 ρ̃_s^2/2) ] }=-4/3 Y(k_ ρ̃_s,k_ ρ̃_s) ,(M_s^(1))_yz = -√() ∑_m=-∞^∞ k_ ρ̃_s (1-2 m^2/|k_|^2 ρ̃_s^2 ) exp(-m^2/k_^2 ρ̃_s^2) ×exp(-k_^2 ρ̃_s^2/2)[I_m'(k_^2 ρ̃_s^2/2)-I_m(k_^2 ρ̃_s^2/2) ],(M_s^(1))_zz = 2 {1 - k_^2/k_^2 + ∑_m=-∞^∞ 2 m/|k_| ρ̃_s (1- m^2/|k_|^2 ρ̃_s^2 )[Z(m/|k_| ρ̃_s)] . . ×exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2) } ,where we have reintroduced the special functions F(x,y), G(x,y) andH(x,y) defined by (<ref>), as well as W(x,y) and Y(x,y) definedby (<ref>). As anticipatedfrom the arguments presented in appendix <ref>, M_s^(0)obeys the symmetries(M_s^(0))_xz = - k_/k_(M_s^(0))_xx , (M_s^(0))_yz = k_/k_(M_s^(0))_xy , (M_s^(0))_zz = k_^2/k_^2 (M_s^(0))_xx . §.§.§ Dielectric tensor in low-frequency limit, {e_1,e_2,e_3} coordinate frame Having evaluated the first- and second-order terms in the expansion for components of the dielectric tensor in the coordinate basis {x̂,ŷ,ẑ},we can use (<ref>) to find equivalent expressionsin the coordinate basis {e_1,e_2,e_3}.Explicitly, we have the following transformations forM_s^(0): (M_s^(0))_11 = k_^2/k^2 (M_s^(0))_xx - 2 k_ k_/k^2 (M_s^(0))_xz + k_^2/k^2 (M_s^(0))_zz,(M_s^(0))_12 = k_/k(M_s^(0))_xy + k_/k(M_s^(0))_yz, (M_s^(0))_13 = k_ k_/k^2 [(M_s^(0))_xx - (M_s^(0))_zz] + (k_^2/k^2- k_^2/k^2) (M_s^(0))_xz ,(M_s^(0))_22 = (M_s^(0))_yy ,(M_s^(0))_23 = -k_/k(M_s^(0))_xy + k_/k(M_s^(0))_yz ,(M_s^(0))_33 = k_^2/k^2 (M_s^(0))_xx + 2 k_ k_/k^2 (M_s^(0))_xz + k_^2/k^2 (M_s^(0))_zz ,and similiarly for M_s^(1). 
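The special functions F, G and H are simple to evaluate by truncating their defining sums, and the asymptotic forms tabulated in the next appendix provide a useful cross-check. A sketch is given below (Python/scipy; the truncation order and sample point are our own choices): at x, y ≫ 1, F and H should approach √π x³/(x²+y²)^{3/2} and √π x/(x²+y²)^{1/2}, respectively.

```python
import numpy as np
from scipy.special import wofz, ive

def ReZ(x):
    # real (principal-value) part of the plasma dispersion function, real argument
    return (1j * np.sqrt(np.pi) * wofz(x)).real

def FGH(x, y, mmax=400):
    """Truncated sums for F(x, y), G(x, y), H(x, y) as defined above."""
    m = np.arange(1, mmax + 1)
    a = 0.5 * y**2
    Im = ive(m, a)                                  # exp(-a) I_m(a)
    Imp = 0.5 * (ive(m - 1, a) + ive(m + 1, a))     # exp(-a) I_m'(a)
    I0, I1 = ive(0, a), ive(1, a)                   # m = 0 contributions (I_0' = I_1)
    F = 4 * np.sqrt(np.pi) * np.sum(m**2 / y**2 * np.exp(-(m / x)**2) * Im)
    G = 2 * np.sum(m * ReZ(m / x) * (Imp - Im))     # +m and -m terms are equal; m = 0 vanishes
    H = np.sqrt(np.pi) * (y**2 * (I0 - I1)
        + 2 * np.sum(np.exp(-(m / x)**2) * ((2 * m**2 / y**2 + y**2) * Im - y**2 * Imp)))
    return F, G, H

x, y = 20.0, 15.0
F, G, H = FGH(x, y)
print(F, np.sqrt(np.pi) * x**3 / (x**2 + y**2)**1.5)
print(H, np.sqrt(np.pi) * x / np.sqrt(x**2 + y**2))
```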
On account of the symmetries derived in appendix <ref>, we find for M_s^(0) that (M_s^(0))_11 = k^2/k_^2(M_s^(0))_xx ,(M_s^(0))_12 = k/k_(M_s^(0))_xy, (M_s^(0))_21 = -(M_s^(0))_12 ,(M_s^(0))_22 = (M_s^(0))_yy ,with all other components vanishing. This agrees with (<ref>)stated in the main text. On substitution ofidentities (<ref>), (<ref>) arerecovered.As for M_s^(1), from the results (<ref>) derived in appendix <ref>, we have the following identities:(M_s^(1))_xz + k_/k_ (M_s^(1))_xx = -2 ∑_m=-∞^∞ m/k_ ρ̃_s [Z(m/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2) ,(M_s^(1))_yz -k_/k_ (M_s^(1))_xy = -√() ∑_m=-∞^∞ k_ ρ̃_s exp(-m^2/k_^2 ρ̃_s^2) ×exp(-k_^2 ρ̃_s^2/2) [I_m'(k_^2 ρ̃_s^2/2)-I_m(k_^2 ρ̃_s^2/2) ] , (M_s^(1))_zz + k_/k_ (M_s^(1))_xz = 2 {1 + ∑_m=-∞^∞ m/|k_| ρ̃_s [Z(m/|k_| ρ̃_s)] . . ×exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2) } .Thus, we can decompose the dielectric components (M_s^(1))_xz, (M_s^(1))_yz and (M_s^(1))_zz in terms of the remaining components of M_s^(1) as follows:(M_s^(1))_xz = -k_/k_ (M_s^(1))_xx -L(k_ ρ̃_s,k_ ρ̃_s ) , (M_s^(1))_yz =k_/k_ (M_s^(1))_xy -N(k_ ρ̃_s,k_ ρ̃_s ) , (M_s^(1))_zz = -k_/k_ (M_s^(1))_xz + [2 + k_/k_ L(k_ ρ̃_s,k_ ρ̃_s )]= k_^2/k_^2 (M_s^(1))_xx + 2[1 + k_/k_ L(k_ ρ̃_s,k_ ρ̃_s )],where the special functions L(x,y) and N(x,y) are defined by L(x,y) ≡∑_m=-∞^∞ 2 m/y Z(m/x) exp(-y^2/2) I_m(y^2/2) , N(x,y) ≡√() ∑_m=-∞^∞ y exp(-m^2/x^2) exp(-y^2/2) [I_m'(y^2/2)-I_m(y^2/2) ] .This leads to the following expressions: (M_s^(1))_11 = k^2/k_^2 (M_s^(1))_xx + 2 [k_^2/k^2 + k_/k_ L(k_ ρ̃_s,k_ ρ̃_s )] ,(M_s^(1))_12 = k/k_ (M_s^(1))_xy -k_/k N(k_ ρ̃_s,k_ ρ̃_s ), (M_s^(1))_13 = - 2 k_ k_/k^2 - L(k_ ρ̃_s,k_ ρ̃_s ) ,(M_s^(1))_22 = (M_s^(1))_yy ,(M_s^(1))_23 = -k_/k N(k_ ρ̃_s,k_ ρ̃_s ) ,(M_s^(1))_33 = 2 k_^2/k^2 . We note that M_s^(1) does not possess the same symmetry properties as M_s^(0). §.§.§ Asymptotic forms of M_s^(0) and M_s^(1) In this appendix, we write down asymptotic forms at smalland large x and y for the special functionsF(x,y), G(x,y), H(x,y), L(x,y) and N(x,y) definedby (<ref>) and (<ref>), respectively. Physically, this corresponds via (<ref>) to considering thedielectric response associated with M_s^(0) and M_s^(1) for modes with parallel and perpendicular wavenumbers verysmall (or very large) with respect to the inverse Larmor radius of species s.Detailed derivations are left as an exercise to keen readers (and can be verifiednumerically). Proceeding systematically through various limits, we have the following results:* x ∼ 1, y ≪ 1:F(x,y) = √() exp(-1/x^2) [1+O(y^2) ], G(x,y) = [Z(1/x)] [1+O(y^2) ] , H(x,y) = √() exp(-1/x^2) [1+O(y^2) ], L(x,y) = y [Z(1/x)] [1+O(y^2) ], N(x,y) = √() y [2 exp(-1/x^2) - 1 ] [1+O(y^2) ].* x, y ≫ 1F(x,y) = √() x^3/(x^2+y^2)^3/2 [1+O(1/x^2+y^2) ] , G(x,y) = -2 x^3/(x^2+y^2)^2 [1+O(1/x^2+y^2) ] , H(x,y) = √() x/(x^2+y^2)^1/2 [1+O(1/x^2+y^2) ] , L(x,y) = -2 x y/x^2+y^2 [1+O(1/x^2+y^2) ],N(x,y) = √() x/y (x^2+y^2)^1/2 [1+O(1/x^2+y^2) ].We observe that the asymptotic forms (<ref>) are in fact valid even for y ≲ 1.* x ≪ 1, y ∼ 1:F(x,y) = 4 √()/y^2 exp(-y^2/2) I_1(y^2/2)exp(-1/x^2) {1+O[exp(-3/x^2) ] }, G(x,y) = -x exp(-y^2/2) [I_0(y^2/2)- I_1(y^2/2)] [1+O(x^2) ] , H(x,y) = √() y^2 exp(-y^2/2) [I_0(y^2/2)- I_1(y^2/2)] [1+O(x^2) ], L(x,y) = -2 x/y [1-exp(-y^2/2) I_0(y^2/2)] [1+O(x^2) ], N(x,y) = -√() y exp(-y^2/2) [I_0(y^2/2)- I_1(y^2/2)] [1+O(x^2) ].* x, y ≪ 1:F(x,y) = √() exp(-1/x^2) {1+O[exp(-3/x^2),y^2 ] }, G(x,y) = -x [1 - (3/4y^2 - 1/2 x^2) .. 
+ (3/4 x^4-15/32 x^2 y^2 + 5/16y^4) ] [1+O(x^6, x^4 y^2, x^2 y^4, y^6) ] , H(x,y) = √() y^2 [ 1 - (3/4y^2 - 1/2 x^2) ..+ (3/4 x^4-15/32 x^2 y^2 + 5/16y^4) ] [1+O(x^6, x^4 y^2, x^2 y^4, y^6) ], L(x,y) = -x y [1+O(x^2, y^2) ], N(x,y) = -√() y [1+O(x^2) ] [1+O(x^2, y^2) ]. * x ≪ 1, y ≫ 1:F(x,y) =4/y^3 exp(-1/x^2) {1+O[exp(-3/x^2), 1/y^2 ] } , G(x,y) = -x/√() y^3 [1+O(1/y^2) ] , H(x,y) = 1/y [1+O(1/y^2) ], L(x,y) = -2 x/y [1-1/√() y][1+O(x^2,1/y^3) ], N(x,y) = -1/y^2 [1+O(1/y^2) ] .§.§.§ Unmagnetised Maxwellian dielectric responseIn this paper, we consider microinstabilities over a wide range of scales, from k ρ_i ≪ 1to sub-electron-scale microinstabilities with k ρ_e ≫ 1. Therefore, theordering k ρ_s ∼ 1 assumed in section <ref> for the derivation of the low-frequencydielectric tensor in a magnetised plasma cannot hold for both ions and electrons (as was noted in section <ref> and discussed in section <ref>). While thederivation of the dielectric tensor in a strongly magnetised plasma (k ρ_s ≪ 1)is straightforwardly performed by asymptotic analysis applied directly to the hot, magnetised plasma conductivity tensor (<ref>),the equivalent calculation for k ρ_s ≫ 1 is most easily done by directanalysis of the Vlasov equation with B_0 = 0. In this appendix, wepresent such a calculation. We begin from (<ref>), butwithΩ̃_s = 0 (and ignoring the displacement current):k^2 c^2/ω^2 [δE - k̂ (k̂ ·δE)]=4 i/ωδj , δj =∑_s Z_s e ∫d^3 v v δf_s ,(-i ω+ i k v ) δf_s =-Z_s e/m_s [δE + k/ω v ×(k̂ ×δE)] f_s0/v.As with the magnetised case, we substitute the perturbed distributionfunction (<ref>c) into the current (<ref>b) :δj = -i∑_s Z_s^2 e^2/m_s∫d^3 v v/ω - kv[δE + k/ωv×(k̂×δE)]f_s0/v.Non-dimensionalising the distribution function viaf̃_s0(ṽ_s) ≡^3/2 v_ths^3/n_s0 f_s0(v_thsṽ_s) ,we obtain δj = -i/4 ω∑_s ω_ps^2 ω̃_s/^3/2∫d^3 ṽ_̃s̃ ṽ_̃s̃/ω̃_s - k̂ṽ_s[δE + 1/ω̃_sṽ_s ×(k̂×δE)] f̃_s0/ṽ_s,where ω̃_s = ω/k v_ths. For a Maxwellian distribution,withf̃_s0(ṽ_s) = exp(-ṽ_s^2),the second term in (<ref>) vanishes, leavingδj = σδE,where the conductivity tensor isσ =i/4 ω∑_s ω_ps^2 2 ω̃_s/^3/2∫d^3 ṽ_̃s̃ ṽ_̃s̃ṽ_̃s̃/ω̃_s - k̂ṽ_sexp(-ṽ_s^2).The integral can be evaluated to giveσ = -i/4 ω∑_s ω_ps^2 ω̃_s{Z(ω̃_s)(I-k̂k̂) + 2 [ω̃_s + ω̃_s^2 Z(ω̃_s) ]k̂k̂}.The dielectric tensor in an unmagnetised Maxwellian plasma for general ω̃_s is, therefore,𝔈^ (UM) = ∑_s ω_ps^2/ω^2ω̃_s{Z(ω̃_s) (I-k̂k̂) + 2 [ω̃_s + ω̃_s^2 Z(ω̃_s) ]k̂k̂}.Note that it follows from (<ref>) that 𝔈k̂ =0, so we conclude that for non-zero fluctuations, either k̂δE =0 or 1 + ω̃_s Z(ω̃_s) =0. We do not find the conventional longitudinal plasma waves because we have neglected thedisplacement current in Maxwell's equations. The only modes that satisfy 1 + ω̃_s Z(ω̃_s) =0 arestrongly damped, with ω̃_s ∼ 1. Thus, all modes satisfyingω̃_s ≪ 1 must be purely transverse. For ω̃_s ≪ 1, the unmagnetised dielectric response therefore simplifies to𝔈^ (UM) = i√()(I-k̂k̂) ∑_s ω_ps^2/ω^2ω̃_s[1+O(ω̃_s ) ] . §.§.§ Validity of approximation M_s ≈M_s^(0) for large or small k_ρ_s and k_ρ_sIn carrying out the expansion of the Maxwellian dielectric tensor (<ref>) in ω̃_s, we assumed that k ρ_s ∼ 1; however, in general, we will wish to consider microinstabilities that exist attypical wavenumbers k ρ_s ≪ 1 or k ρ_s ≫ 1. Indeed, since themass ratio μ_e = m_e/m_i is very small, if we wish toconsider the combined response of both species, it is inevitable that for one of them, k ρ_s ≪ 1 or k ρ_s ≫ 1. Thus, it remains to assess when the approximation M_s ≈M_s^(0)is valid in these limits. 
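A quick, self-contained numerical assessment of this kind is sketched below (Python/scipy; the trial wavenumbers, the value ω̃ = 10⁻³ and the truncation orders are arbitrary choices). It compares the full Maxwellian component (M_s)_xx, evaluated at small ω̃ from the exact sum over Z(ζ_sn), with the leading-order approximation ω̃ (M_s^(0))_xx: the two agree at k_⊥ρ_s ≫ 1, whereas at k_∥ρ_s ≪ 1 the low-frequency form is exponentially small and the approximation fails.

```python
import numpy as np
from scipy.special import wofz, ive

Z = lambda z: 1j * np.sqrt(np.pi) * wofz(z)

def Mxx_full(wt, x, y, nmax=200):
    # full Maxwellian (M_s)_xx = 2 wt sum_n (n^2/y^2) Z(wt - n/|x|) exp(-y^2/2) I_n(y^2/2)
    n = np.arange(-nmax, nmax + 1)
    return 2 * wt * np.sum(n**2 / y**2 * Z(wt - n / abs(x)) * ive(n, 0.5 * y**2))

def Mxx_lowfreq(wt, x, y, mmax=200):
    # leading-order form wt * (M_s^(0))_xx = i * wt * F(x, y)
    m = np.arange(1, mmax + 1)
    F = 4 * np.sqrt(np.pi) * np.sum(m**2 / y**2 * np.exp(-(m / x)**2) * ive(m, 0.5 * y**2))
    return 1j * wt * F

wt = 1e-3
for x, y in [(2.0, 10.0),    # k_perp rho_s >> 1: the two values should roughly agree
             (0.2, 1.0)]:    # k_par rho_s << 1: the low-frequency form is exponentially
                             # small, while the full component is O(wt^2) instead
    print((x, y), Mxx_full(wt, x, y), Mxx_lowfreq(wt, x, y))
```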
We show in this appendix that this approximation is appropriate in thelimit k_ρ_s ≫ 1, for arbitrary k_ρ_s; however, for k_ρ_s ≪ 1, theapproximation breaks down for some dielectric components – indeed, in the limit k_ρ_s, k_ρ_s ≪ 1, it breaks down for all but two components.For these instances, an alternative expression for the dielectric tensor is derived below. The validity of the k_ρ_s ≫ 1 limit is most simply demonstrated by comparing the components ofM_s^(0) to the unmagnetised dielectric response(<ref>).Recalling that (M_s^(0))_11 = i ω̃_s k^2/k_^2 F(k_ ρ̃_s,k_ ρ̃_s) ,(M_s^(0))_12 = i ω̃_s k/k_ G(k_ ρ̃_s,k_ ρ̃_s), (M_s^(0))_21 = -(M_s^(0))_12 ,(M_s^(0))_22 = i ω̃_s H(k_ ρ̃_s,k_ ρ̃_s) ,and applying the asymptotic results (<ref>), we find(M_s^(0))_11 ≈i √() ω̃_s k_/k ,(M_s^(0))_12 ≈-2 i ω̃_s k_^2/k^2 1/k ρ_s, (M_s^(0))_22 ≈i √() ω̃_s k_/k . We note these expressions are valid for arbitrary k_ρ_s. The equivalent components of the unmagnetised (normalised) dielectric tensor M_s ≈ω^2 𝔈_s^ (UM)/ω_ps^2 are (M_s)_11 = i √() ω̃_s ,(M_s)_12 =(M_s^(0))_21 = 0,(M_s)_22 = i √()ω̃_s.Noting that ω̃_s = ω̃_s k_/k, we see that thediagonal terms are identical, while the non-zero e_1 e_2 term present in the k ρ_s ≫ 1limit of M_s^(0) becomes asymptotically small in 1/k ρ_s ≪ 1. To demonstrate that the approximation M_s ≈M_s^(0) is not accurate in the limit k_ρ_s ≪ 1, we consider the fullMaxwellian dielectric tensor assuming ω̃_s≲ 1 and k_ρ_s ≪ 1.If this long-wavenumber dielectric tensor subsequently evaluated at low frequencies ω̃_s≪ 1 gives the sameresult as M_s^(0) for any particular component of M_s, thenthe approximation for that component is reasonable; otherwise, the approximationhas to be modified at sufficiently small k_ρ_s ≪ 1.If k_ρ_s ≪ 1 and ω̃_s≲ 1, it follows thatfor n ≠ 0, |ζ_sn| ≡|ω̃_s - n/k_ρ̃_s| ≫ 1 .In this case, we can simplify the plasma dispersion function via a large-argumentexpansion:Z(ζ_sn) ≈ -1/ζ_sn-1/2 ζ_sn^3 + …The long-wavelength dielectric tensor is then (M_s)_xx ≈ -2 ω̃_s ∑_n=-∞^∞ n^2/ ζ_sn k_^2 ρ̃_s^2 exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s)_xy ≈-i ω̃_s ∑_n=-∞^∞ n/ζ_sn exp(-k_^2 ρ̃_s^2/2) [I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(M_s)_xz ≈- ω̃_s ∑_n=-∞^∞ n/ζ_sn^2 k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s)_yx = -(M_s)_xy ,(M_s)_yy ≈- ω̃_s [ ∑_n ∈ℤ^≠ {1/ζ_sn exp(-k_^2 ρ̃_s^2/2)×[( 2 n^2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2) I_n(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2 I_n'(k_^2 ρ̃_s^2/2) ] } - Z(ω̃_s) k_^2 ρ̃_s^2 exp(-k_^2 ρ̃_s^2/2) {I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) } ] ,(M_s)_yz ≈i ω̃_s [ ∑_n ∈ℤ^≠ { 1/2 ζ_sn^2 k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2)[I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ] }+ [1+ ω̃_s Z(ω̃_s) ] k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) {I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) } ] ,(M_s)_zx = (M_s)_xz , (M_s)_zy = -(M_s)_yz ,(M_s)_zz ≈- ω̃_s [ ∑_n ∈ℤ^≠ { 1/ζ_sn exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) }- 2 ω̃_s [1+ ω̃_s Z(ω̃_s) ] exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2)] ,where ℤ^≠ denotes non-zero integers.We note that the error associated with neglecting higher-order terms in ζ_snis O(k_^2 ρ_s^2). Next, using-1/ζ_sn = 1/n/k_ρ̃_s-ω̃_s≈k_ρ̃_s/n[1+ω̃_sk_ρ̃_s/n + O(ω^2/Ω_e^2) ] ,we can isolate the dependence of each dielectric tensor component onω̃_s. It is clear that any sum involving an odd power of n vanishes, meaning thatthe leading-order contributions in k_ρ̃_s from the summation terms arise from the highest power of ω̃_sgives an even power of n. 
The resulting approximate expressions are (M_s)_xx ≈2 k_^2/k_^2 ω̃_s^2 [1-exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2)] ,(M_s)_xy ≈ i ω̃_s k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ],(M_s)_xz ≈-4 k_^2 ρ̃_s^2k_/k_ ω̃_s^2 ∑_n =1^∞ 1/n^2 exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s)_yy ≈ω̃_s exp(-k_^2 ρ̃_s^2/2) { Z(ω̃_s) k_^2 ρ̃_s^2[I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ]+ 2 ω̃_s k_^2 ρ̃_s^2 ∑_n =1^∞ [( 2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2/n^2) I_n(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2/n^2 I_n'(k_^2 ρ̃_s^2/2) ] } ,(M_s)_yz ≈i ω̃_s [1+ ω̃_s Z(ω̃_s) ] ×k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2)[I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ],(M_s)_zz ≈2 ω̃_s^2 [1+ ω̃_s Z(ω̃_s) ] exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2),where we have again used the sum identities (<ref>).Note that we have retained a term in (M_s)_yy which is quadratic in k_ρ̃_s, even though there exists another term which is independent of k_ρ̃_s. This is because the latter term becomes arbitrarily small in the limit k_ρ_s ≪ 1, whereas the former is independent of k_ρ_s; hence, if k_ρ_s ≪ k_ρ_s, the latter term can become dominant. Now consideringthe limit ω̃_s≪ 1, while holding k_ρ_s ≪ 1 at some fixed value , the plasmadispersion function can now be approximated by its small-argument expansionZ(ω̃_s) ≈ i √(),to give(M_s)_xx ≈ 2 k_^2/k_^2 ω̃_s^2 [1-exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2)] ,(M_s)_xy ≈i ω̃_s k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ],(M_s)_xz ≈-4 k_^2 ρ̃_s^2k_/k_ ω̃_s^2 ∑_n =1^∞ 1/n^2 exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(M_s)_yy ≈ω̃_s exp(-k_^2 ρ̃_s^2/2) { i √() k_^2 ρ̃_s^2 [I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ] + 2 ω̃_s^2 k_^2 ρ̃_s^2 ∑_n =1^∞ [( 2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2/n^2) I_n(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2/n^2 I_n'(k_^2 ρ̃_s^2/2) ] } ,(M_s)_yz ≈i ω̃_s [1+ i √() ω̃_s ] ×k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2)[I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ],(M_s)_zz ≈2 ω̃_s^2 [1+ i √() ω̃_s ] exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2).For comparison, we state below the long-wavelength limit of M_s^(0)using asymptotic expressions (<ref>): (M_s^(0))_xx =4 i √() ω̃_s /k_^2 ρ̃_s^2 exp(-1/k_^2 ρ̃_s^2) exp(-k_^2 ρ̃_s^2/2) I_1(k_^2 ρ̃_s^2/2) ,(M_s^(0))_xy = i ω̃_s |k_| ρ̃_s exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ] ,(M_s^(0))_xz = -4 i √() ω̃_s /k_ k_ ρ̃_s^2 exp(-1/k_^2 ρ̃_s^2) exp(-k_^2 ρ̃_s^2/2) I_1(k_^2 ρ̃_s^2/2),(M_s^(0))_yy = i √() ω̃_s k_^2 ρ̃_s^2 exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ] , (M_s^(0))_yz = i ω̃_s k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2)[I_0(k_^2 ρ̃_s^2/2)-I_1(k_^2 ρ̃_s^2/2) ],(M_s^(0))_zz =4 i √() ω̃_s/k_^2 ρ̃_s^2 exp(-1/k_^2 ρ̃_s^2) exp(-k_^2 ρ̃_s^2/2) I_1(k_^2 ρ̃_s^2/2).Assuming k_ρ_s ∼ 1, we observe that while three of the six unique dielectric tensor components areidentical for both ω̃_s→ 0, k_ρ_s ≪ 1fixed, and k_ρ_s → 0, ω̃_s≪ 1 fixed [(M_s)_xy, (M_s)_yy, and (M_s)_yz],the other three [(M_s)_xx, (M_s)_xz, and (M_s)_zz] are not. Instead, the dominant terms are the quadratic terms (M_s^(1))_xx, (M_s^(1))_xz, and (M_s^(1))_zz in the ω̃_s≪ 1expansion. In the limit k_ρ_s ≪ 1, (M_s)_yyalso departs from the approximation (M_s^(0))_yyfor sufficiently small k_ρ_s as compared to k_ρ_s, instead being accurately described by (M_s^(1))_yy. As a consequence, we must assess the conditions underwhich one approximation or the other is valid. 
This is most simply answered byobserving that the expressions for (M_s^(0))_xx, (M_s^(0))_xz, and (M_s^(0))_zzfrom (<ref>a), (<ref>c)and (<ref>f) are exponentially small; thus,for k_ρ_s ≪ 1/log(1/ω̃_s), we must useapproximations (<ref>a),(<ref>c), (<ref>e) for (M_s)_xx, (M_s)_xz, and(M_s)_zz. In addition, if k_^2 ρ_s^2 ≪ω̃_s k_^2ρ_s^2 ≪ 1, then (M_s)_yy≈2ω_ps^2/ω^2ω̃_s^2 k_^2 ρ̃_s^2becomes the appropriate approximation for (M_s)_yy. §.§.§ Calculation of secord-order corrections to dispersion relationIn this appendix, we justify the relations (<ref>) used in appendix <ref> –that is, for k_ρ_s ≪ 1,[(M_s)_13]^2/(M_s^(1))_33 ≲(M_s)_11 ,(M_s)_13(M_s)_23/(M_s^(1))_33 ≲ω̃_e (M_s)_12 ≪(M_s)_12 ,[(M_s)_23]^2/(M_s^(1))_33 ≲ω̃_e (M_s)_22 ≪(M_s)_22.We also prove the identity (<ref>), or (M_e^(1) + M_i^(1))_11 - [(M_e^(1) + M_i^(1))_13]^2/2 (M_e^(1))_33 = -4/3 W_e - 4/3 W_i - 1/4(L_e + L_i)^2used to derive the dispersion relation (<ref>). To complete the first task, we begin with the expressions (<ref>) for thedielectric components, and substitute (<ref>a), (<ref>b), (<ref>d)and (<ref>d) for (M_s^(1))_xx, (M_s^(0))_xy, (M_s^(0))_xy, (M_s^(0))_yy and (M_s^(1))_xy, respectively. This gives (<ref>)directly in terms of special functions G(x,y), H(x,y), L(x,y), N(x,y), W(x,y) and Y(x,y):(M_s)_11 ≈-4 k^2/3 k_^2 ω̃_s^2 W(k_ ρ̃_s,k_ ρ̃_s ) + 2 ω̃_s^2 [k_^2/k^2 + k_/k_ L(k_ ρ̃_s,k_ ρ̃_s )] ,(M_s)_12 ≈ - i k/k_ ω̃_s G(k_ ρ̃_s,k_ ρ̃_s ), (M_s)_13 ≈-ω̃_s^2 [2 k_ k_/k^2 + L(k_ ρ̃_s,k_ ρ̃_s )] ,(M_s)_22 ≈i ω̃_s H(k_ ρ̃_s,k_ ρ̃_s ) -4/3 ω̃_s^2 Y(k_ ρ̃_s,k_ ρ̃_s ) ,(M_s)_23 ≈-k_/k ω̃_s^2 N(k_ ρ̃_s,k_ ρ̃_s ) ,(M_s)_33 ≈2 k_^2/k^2 ω̃_s^2 . We then apply the k_ρ_s ≪ 1 limits of the aforementioned special functionsusing Appendices <ref> and <ref> – in particular, (<ref>b), (<ref>c), (<ref>d), (<ref>e), (<ref>a), and (<ref>c): (M_s)_11 ≈2 ω̃_s^2 exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2) ,(M_s)_12 ≈i ω̃_s k ρ̃_s exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)- I_1(k_^2 ρ̃_s^2/2)], (M_s)_13 ≈-ω̃_s^2 2 k_/k exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2) ,(M_s)_22 ≈i √() ω̃_s k_^2 ρ̃_s^2 exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)- I_1(k_^2 ρ̃_s^2/2)] + ω̃_s^2 k_^2 ρ̃_s^2,(M_s)_23 ≈√() ω̃_s^2 k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)- I_1(k_^2 ρ̃_s^2/2)] ,(M_s)_33 ≈2 k_^2/k^2 ω̃_s^2 .We can now make the relevant comparisons presented in(<ref>), and obtain the desired results:[(M_s)_13]^2/(M_s)_11 (M_s)_33 ≈exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2) ≲1, (M_s)_13(M_s)_23/(M_s)_12 M_s)_33≈i ω̃_s exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2) ≲ω̃_s , [(M_s)_23]^2/(M_s)_22 (M_s)_33≈- i √()/2 ω̃_s exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)- I_1(k_^2 ρ̃_s^2/2)] ≲ω̃_s ,where we used the inequalities exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2) ≤1 , exp(-k_^2 ρ̃_s^2/2) [I_0(k_^2 ρ̃_s^2/2)- I_1(k_^2 ρ̃_s^2/2)] ≤1 ,valid for arbitrary values of k_ρ̃_s. To derive (<ref>), we use(<ref>a),(<ref>c)and (<ref>f) to derive the following expressions:(M_e^(1) + M_i^(1))_11 =k^2/k_^2[(M_e^(1))_xx + (M_i^(1))_xx ] + 2 [2 k_^2/k^2 + k_/k_ ( L_e + L_i )] , (M_e^(1) + M_i^(1))_13 =4 k_ k_/k^2 + L_e + L_i , 2 (M_e^(1))_33 = 4 k_^2/k^2 ,where we have introduced the notation L_e = L(k_ρ̃_e,k_ρ̃_e ), L_i = L(k_ρ_i,k_ρ_i ). 
Then, [(M_e^(1) + M_i^(1))_13]^2/2 (M_e^(1))_33= [2 k_/k + k/2 k_( L_e + L_i) ]^2 , which in turn gives (M_e^(1) + M_i^(1))_11 - [(M_e^(1) + M_i^(1))_13]^2/2 (M_e^(1))_33 =k^2/k_^2[(M_e^(1))_xx + (M_i^(1))_xx - 1/4(L_e + L_i)^2] .The identities (<ref>) give(<ref>), completing the proof.§.§ CE electron-friction termFor an electron distribution of the form f̃_e(ṽ_e,ṽ_e) = - η_e^Rṽ_eexp(-ṽ_e^2)with η_s^R≪ 1 a constant, it follows thatΛ_e(ṽ_e,ṽ_e) = - η_e^Rṽ_eexp(-ṽ_e^2),whileΞ_e(ṽ_e,ṽ_e) = - η_e^R/ω̃_eṽ_eexp(-ṽ_e^2) + O(η_e) .Since∫_-∞^∞dṽ_e ṽ_e∫_0^∞dṽ_eΛ_e(ṽ_e,ṽ_e) = 0when Λ_e(ṽ_e,ṽ_e) is given by (<ref>),the function Ξ_e(ṽ_e,ṽ_e) is just proportional to that arising for a Maxwellian distribution [cf. (<ref>)], and so the dielectric response associated with the CE electron-frictionterm is too:P_e = η_e^R/2M_e.§.§ CE temperature-gradient-driven termsFor the CE temperature-gradient-driven term arising from a Krook operator, which takes the formf̃_s(ṽ_s,ṽ_s) = - η_s ṽ_s(ṽ_s^2 - 5/2) exp(-ṽ_s^2) ,it follows (assuming η_e^R = 0) thatΛ_s(ṽ_s,ṽ_s) = - η_s ṽ_s(ṽ_s^2 - 5/2)exp(-ṽ_s^2),andΞ_s(ṽ_s,ṽ_s) = - η_s/ω̃_sṽ_s(ṽ_s^2 - 5/2) exp(-ṽ_s^2) + O(η_s) .Then, to leading order in η_s, (P_s)_xx =2/√() η_s ∑_n=-∞^∞ [ n^2/k_^2 ρ̃_s^2∫_C_L exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) (ṽ_s^2 - 5/2) ] ,(P_s)_xy = 2 i/√() η_s∑_n=-∞^∞ [ n/k_ ρ̃_s ∫_C_L exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^2 J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) exp(-ṽ_s^2)(ṽ_s^2 - 5/2) ] ,(P_s)_xz = 2/√() η_s ∑_n=-∞^∞ [ n/k_ ρ̃_s∫_C_L ṽ_s exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . .×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) (ṽ_s^2 - 5/2) ] ,(P_s)_yx = -(P_s)_xy ,(P_s)_yy = 2/√() η_s ∑_n=-∞^∞ [ ∫_C_L exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^3 J_n'(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) (ṽ_s^2 - 5/2) ] ,(P_s)_yz = -2 i /√() η_s ∑_n=-∞^∞ [ ∫_C_L ṽ_s exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^2 J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) exp(-ṽ_s^2) (ṽ_s^2 - 5/2) ] ,(P_s)_zx = (P_s)_xz , (P_s)_zy = -(P_s)_yz ,(P_s)_zz = 2 /√() η_s ∑_n=-∞^∞ [ ∫_C_L ṽ_s^2 exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) (ṽ_s^2 - 5/2) ] . 
In addition to the plasma-dispersion-function identities (<ref>) and Bessel-function identities (<ref>), we use1/√()∫_C_L u^3 exp(-u^2) du/u-z= 1/2 + z^2[1 + z Z(z)] , 1/√()∫_C_L u^4 exp(-u^2) du/u-z= z{1/2 + z^2[1 + z Z(z) ,] } and∫_0^∞ d t t^3 J_n(αt)^2 exp(-t^2)= 1/2exp(-α^2/2){I_n(α^2/2) + α^2/2 [I_n'(α^2/2) - I_n(α^2/2)]} , ∫_0^∞ d t^4 t^2 J_n(αt) J_n'(αt) exp(-t^2)= α/4exp(-α^2/2) [ (α^2-2 + 2 n^2/α^2 ) I_n(α^2/2)+ (1-α^2 ) I_n'(α^2/2) ] , ∫_0^∞ d t^5 t^3 J_n'(αt)^2 exp(-t^2)= 1/2exp(-α^2/2)×{ [3 α^2 /2 - α^4/2 + n^2 (1/α^2 -3/2) ] I_n(α^2/2)+ ( α^4/2 + n^2/2 - α^2 ) I_n'(α^2/2) } , to obtain again the expressions for the dielectric components (<ref>)in terms of special mathematical functions (a tedious, but elementary calculation):(P_s)_xx = η_s ∑_n=-∞^∞ n^2/k_^2 ρ̃_s^2 exp(-k_^2 ρ̃_s^2/2) { k_^2 ρ̃_s^2/2 Z(ζ_sn) I_n'(k_^2 ρ̃_s^2/2)+ [ζ_sn + Z(ζ_sn) (ζ_sn^2 - 3/2 -k_^2 ρ̃_s^2/2) ] I_n(k_^2 ρ̃_s^2/2) } , (P_s)_xy = i η_s/2 ∑_n=-∞^∞ n exp(-k_^2 ρ̃_s^2/2) { [ ζ_sn + Z(ζ_sn) (ζ_sn^2 - 3/2 -k_^2 ρ̃_s^2/2) ] I_n'(k_^2 ρ̃_s^2/2) + [Z(ζ_sn) (1/2 +k_^2 ρ̃_s^2/2 + 2 n^2/k_^2 ρ̃_s^2-ζ_sn^2 ) -ζ_sn ] I_n(k_^2 ρ̃_s^2/2) } , (P_s)_xz = η_s ∑_n=-∞^∞ n/k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) { k_^2 ρ̃_s^2/2 [1 + ζ_sn Z(ζ_sn) ] I_n'(k_^2 ρ̃_s^2/2) + [ζ_sn^2 - 1 -k_^2 ρ̃_s^2/2 + ζ_sn Z(ζ_sn) (ζ_sn^2 - 3/2 -k_^2 ρ̃_s^2/2) ] I_n(k_^2 ρ̃_s^2/2)} ,(P_s)_yx = (P_s)_xy ,(P_s)_yy = η_s ∑_n=-∞^∞ exp(-k_^2 ρ̃_s^2/2) { [ (n^2/k_^2 ρ̃_s^2 + k_^2 ρ̃_s^2/2 )ζ_sn+ Z(ζ_sn)(n^2 ζ_sn^2/k_^2 ρ̃_s^2 + k_^2 ρ̃_s^2 ζ_sn^2/2 + k_^2 ρ̃_s^2/4 - k_^4 ρ̃_s^4/2 - 3 n^2/2 - 3 n^2/2 k_^2 ρ̃_s^2 ) ] I_n(k_^2 ρ̃_s^2/2) }+ [Z(ζ_sn) (1/2 + k_^2 ρ̃_s^2 + n^2/k_^2 ρ̃_s^2 - ζ_sn^2 ) - ζ_sn ] k_^2 ρ̃_s^2/2 I_n'(k_^2 ρ̃_s^2/2) ,(P_s)_yz = -i η_s/2 ∑_n=-∞^∞ k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) ×{ [k_^2 ρ̃_s^2 + 2 n^2/k_^2 ρ̃_s^2- ζ_sn^2 + ζ_sn Z(ζ_sn) (k_^2 ρ̃_s^2 + 1/2 + 2 n^2/k_^2 ρ̃_s^2- ζ_sn^2)] I_n(k_^2 ρ̃_s^2/2)+ [ζ_sn^2-1- k_^2 ρ̃_s^2 + ζ_sn Z(ζ_sn) (ζ_sn^2 - 3/2 - k_^2 ρ̃_s^2) ]I_n'(k_^2 ρ̃_s^2/2) } , (P_s)_zx = (P_s)_xz , (P_s)_zy = -(P_s)_yz ,(P_s)_zz = η_s ∑_n=-∞^∞ exp(-k_^2 ρ̃_s^2/2) { k_^2 ρ̃_s^2/2 ζ_sn [1 + ζ_sn Z(ζ_sn) ] I_n'(k_^2 ρ̃_s^2/2) +[ζ_sn^3 - ζ_sn -k_^2 ρ̃_s^2 ζ_sn/2 + ζ_sn^2 Z(ζ_sn) (ζ_sn^2 - 3/2 -k_^2 ρ̃_s^2/2) ] I_n(k_^2 ρ̃_s^2/2)} . §.§.§ Dielectric tensor in low-frequency limitIn the low-frequency limit ω̃_s≪ 1 under the ordering k_ρ_s ∼ k_ρ_s ∼ 1, the expressions (<ref>) can be approximated by theleading-order term of the expansion of P_s, that is P_s ≈P_s^(0) + O(ω̃_s^2),where (P_s^(0))_xx = η_s ∑_n=-∞^∞ n^2/k_^2 ρ̃_s^2 exp(-k_^2 ρ̃_s^2/2) { k_^2 ρ̃_s^2/2 Z(-n/|k_| ρ̃_s) I_n'(k_^2 ρ̃_s^2/2)+ [-n/|k_| ρ̃_s + Z(-n/|k_| ρ̃_s) (n^2/|k_|^2 ρ̃_s^2 - 3/2 -k_^2 ρ̃_s^2/2) ] I_n(k_^2 ρ̃_s^2/2) } , (P_s^(0))_xy = i η_s/2 ∑_n=-∞^∞ n exp(-k_^2 ρ̃_s^2/2) ×{ [Z(-n/|k_| ρ̃_s) (1/2 +k_^2 ρ̃_s^2/2 + 2 n^2/k_^2 ρ̃_s^2-n^2/|k_|^2 ρ̃_s^2 ) + n/|k_| ρ̃_s] I_n(k_^2 ρ̃_s^2/2) + [ -n/|k_| ρ̃_s + Z(-n/|k_| ρ̃_s) (n^2/|k_|^2 ρ̃_s^2 - 3/2 -k_^2 ρ̃_s^2/2) ] I_n'(k_^2 ρ̃_s^2/2) } ,(P_s^(0))_xz = η_s ∑_n=-∞^∞ n/k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) { k_^2 ρ̃_s^2/2 [1 -n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s) ] I_n'(k_^2 ρ̃_s^2/2) +I_n(k_^2 ρ̃_s^2/2) [n^2/|k_|^2 ρ̃_s^2 - 1 -k_^2 ρ̃_s^2/2 -n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s) (n^2/|k_|^2 ρ̃_s^2 - 3/2 -k_^2 ρ̃_s^2/2) ] } ,(P_s^(0))_yx = (P_s^(0))_xy ,(P_s^(0))_yy = η_s ∑_n=-∞^∞ exp(-k_^2 ρ̃_s^2/2) { [ -(n^2/k_^2 ρ̃_s^2 + k_^2 ρ̃_s^2/2 )n/|k_| ρ̃_s + Z(-n/|k_| ρ̃_s)(n^4/k_^2 k_^2 ρ̃_s^4 + n^2 k_^2/k_^2 + k_^2 ρ̃_s^2/4- k_^4 ρ̃_s^4/2 .. 
- 3 n^2/2 - 3 n^2/2 k_^2 ρ̃_s^2 ) ] I_n(k_^2 ρ̃_s^2/2) + k_^2 ρ̃_s^2/2 I_n'(k_^2 ρ̃_s^2/2)×[Z(-n/|k_| ρ̃_s) (1/2 + k_^2 ρ̃_s^2 + n^2/k_^2 ρ̃_s^2 -n^2/k_^2 ρ̃_s^2 ) +n/|k_| ρ̃_s ] } ,(P_s^(0))_yz = -i η_s/2 ∑_n=-∞^∞ k_ ρ̃_s exp(-k_^2 ρ̃_s^2/2) ×{ I_n(k_^2 ρ̃_s^2/2) [k_^2 ρ̃_s^2 + 2 n^2/k_^2 ρ̃_s^2-n^2/k_^2 ρ̃_s^2 -n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s) (k_^2 ρ̃_s^2 + 1/2 + 2 n^2/k_^2 ρ̃_s^2- n^2/k_^2 ρ̃_s^2)]+ I_n'(k_^2 ρ̃_s^2/2) [n^2/k_^2 ρ̃_s^2-1- k_^2 ρ̃_s^2-n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s) (n^2/k_^2 ρ̃_s^2 - 3/2 - k_^2 ρ̃_s^2) ] } , (P_s^(0))_zx = (P_s^(0))_xz , (P_s^(0))_zy = -(P_s^(0))_yz ,(P_s^(0))_zz = η_s ∑_n=-∞^∞ exp(-k_^2 ρ̃_s^2/2) { -n k_^2 ρ̃_s/2 |k_| [1 - n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s) ] I_n'(k_^2 ρ̃_s^2/2) + I_n(k_^2 ρ̃_s^2/2) [n/|k_| ρ̃_s -n^3/|k_|^3 ρ̃_s^3 +n k_^2 ρ̃_s/2 |k_| + n^2/k_^2 ρ̃_s^2 Z( - n/|k_| ρ̃_s) (n^2/k_^2 ρ̃_s^2 - 3/2 -k_^2 ρ̃_s^2/2) ] } .In this limit, we have utilised the approximation ζ_sn≈ - n/|k_|ρ̃_s. Similarly to the Maxwellian case, we can use the Bessel-function-summation identities(<ref>) and the symmetry properties of the plasma dispersionfunction with a real argument to show that (P_s^(0))_xx =2 i √() η_s exp(-k_^2 ρ̃_s^2/2) ∑_n=1^∞ n^2/k_^2 ρ̃_s^2 exp(-n^2/k_^2 ρ̃_s^2) ×[(n^2/k_^2 ρ̃_s^2 - 3/2 -k_^2 ρ̃_s^2/2)I_n(k_^2 ρ̃_s^2/2) + k_^2 ρ̃_s^2/2 I_n'(k_^2 ρ̃_s^2/2)]= i η_s I(k_ ρ̃_s,k_ ρ̃_s),(P_s^(0))_xy = -i η_s {1/2 |k_| ρ̃_s + 1/2 ∑_n=-∞^∞ n [Z(n/|k_| ρ̃_s)] exp(-k_^2 ρ̃_s^2/2) ×{(n^2/k_^2 ρ̃_s^2 - 3/2 - k_^2 ρ̃_s^2 ) I_n'(k_^2 ρ̃_s^2/2) +(1/2 + k_^2 ρ̃_s^2 +2 n^2/k_^2 ρ̃_s^2 -n^2/k_^2 ρ̃_s^2 ) I_n(k_^2 ρ̃_s^2/2) }= -i η_s J(k_ ρ̃_s,k_ ρ̃_s),(P_s^(0))_xz =-2 i √() η_s exp(-k_^2 ρ̃_s^2/2) ∑_n=1^∞ n^2/k_^2 ρ̃_s^2 exp(-n^2/|k_| k_ ρ̃_s^2) ×[(n^2/k_^2 ρ̃_s^2 - 3/2 -k_^2 ρ̃_s^2/2)I_n(k_^2 ρ̃_s^2/2) + k_^2 ρ̃_s^2/2 I_n'(k_^2 ρ̃_s^2/2)]= -i k_/|k_| η_s I(k_ ρ̃_s,k_ ρ̃_s), (P_s^(0))_yy = i √()/2 η_s exp(-k_^2 ρ̃_s^2/2) ∑_n=-∞^∞ exp(-n^2/k_^2 ρ̃_s^2)×{(n^2 + 1/2 k_^2 ρ̃_s^2 + k_^4 ρ̃_s^4 - n^2 k_^2/k_^2) I_n'(k_^2 ρ̃_s^2/2)+(2 n^4/k_^2k_^2 ρ̃_s^4 - 3 n^2/k_^2 ρ̃_s^2 - 3 n^2 + 1/2 k_^2 ρ̃_s^2 - k_^4 ρ̃_s^4 + n^2 k_^2/k_^2) I_n(k_^2 ρ̃_s^2/2) }= i η_s K(k_ ρ̃_s,k_ ρ̃_s),(P_s^(0))_yz = -i η_s {k_/2 k_^2 ρ̃_s + 1/2 ∑_n=-∞^∞ n k_/|k_| [Z(n/|k_| ρ̃_s)] exp(-k_^2 ρ̃_s^2/2) ×{(n^2/k_^2 ρ̃_s^2 - 3/2 - k_^2 ρ̃_s^2 ) I_n'(k_^2 ρ̃_s^2/2) +(1/2 + k_^2 ρ̃_s^2 +2 n^2/k_^2 ρ̃_s^2 -n^2/k_^2 ρ̃_s^2 ) I_n(k_^2 ρ̃_s^2/2) }= -i k_/|k_| η_s J(k_ ρ̃_s,k_ ρ̃_s),(P_s^(0))_zz =2 i √() η_s exp(-k_^2 ρ̃_s^2/2) ∑_n=1^∞ n^2/k_^2 ρ̃_s^2 exp(-n^2/k_^2 ρ̃_s^2) ×[(n^2/k_^2 ρ̃_s^2 - 3/2 -k_^2 ρ̃_s^2/2)I_n(k_^2 ρ̃_s^2/2) + k_^2 ρ̃_s^2/2 I_n'(k_^2 ρ̃_s^2/2)]= i k_^2/k_^2 η_s I(k_ ρ̃_s,k_ ρ̃_s),where the functions I(x,y), J(x,y) and K(x,y) aredefined by I(x,y) ≡2 √()/y^2 exp(-y^2/2) ×∑_m=1^∞ m^2 exp(-m^2/x^2) [y^2/2 I_m'(y^2/2)+ (m^2/x^2-3+y^2/2)I_m(y^2/2)] , J(x,y) ≡1/2x + 1/2 exp(-y^2/2)∑_m = -∞^∞ { m Z(m/x) [(m^2/x^2 - 3/2 - y^2) I_m'(y^2/2) . .. . + (1/2+y^2 +2 m^2/y^2-m^2/x^2)I_m(y^2/2)] } , K(x,y) ≡√()/2 exp(-y^2/2)∑_m = -∞^∞ {exp(-m^2/x^2) [(m^2 + 1/2y^2 +y^4 - m^2 y^2/x^2) I_m'(y^2/2) . . . . + (2 m^4/x^2 y^2 - 3 m^2/y^2-3m^2+1/2y^2 - y^4 + m^2 y^2/x^2)I_m(y^2/2)]} . §.§.§ Asymptotic limits of P_s^(0) In this appendix, we give simplified expressions in the limits of smalland large x and y for the special functionsI(x,y), J(x,y) and K(x,y) definedby (<ref>). 
Physically, this correspond,s via (<ref>), to considering thedielectric response associated with P_s^(0) for modes with parallel and perpendicular wavenumbers that are verysmall or very large with respect to the inverse Larmor radius of species s.Proceeding systematically through various limits, we have the following results:* x ∼ 1, y ≪ 1:I(x,y) = √()/2 (1/x^2-1/2)exp(-1/x^2) [1+O(y^2) ], J(x,y) = [(1/4-1/2 x^2)Z(1/x) + 1/2 x ] [1+O(y^2) ] , K(x,y) = √()/2 (1/x^2-1/2)exp(-1/x^2) [1+O(y^2) ] . * x, y ≫ 1:I(x,y) = -√() x^3/4 (x^2+y^2)^3/2 [1+O(1/x^2+y^2) ] , J(x,y) = -x^3/(x^2+y^2)^2 [1+O(1/x^2+y^2) ] , K(x,y) = -√() x/4 (x^2+y^2)^1/2 [1+O(1/x^2+y^2) ] . * x ≪ 1, y ∼ 1:I(x,y) = 2 √()/x^2 y^2 exp(-y^2/2-1 /x^2) I_1(y^2/2) [1+O(x^2) ] , J(x,y) = -x/2 exp(-y^2/2)×[y^2(I_0(y^2/2)-I_1(y^2/2)) - I_1(y^2/2)] [1+O(x^2) ] , K(x,y) = √()/2 exp(-y^2/2) [(1/2y^2 -y^4 ) I_0(y^2/2)+(1/2y^2 +y^4 ) I_1(y^2/2)] [1+O(x^2) ] .* x, y ≪ 1:I(x,y) = √()/2 x^2 exp(-1/x^2) [1+O(exp(-3/x^2),y^2 ) ], J(x,y) = -x (3/8y^2 - 1/4 x^2 ) [1+O(x^4, x^2 y^2, y^4) ] , K(x,y) = √()/4 y^2 [1+O(x^2, y^2) ]. §.§ CE shear termsFor a CE shear term of the formf̃_s(ṽ_s,ṽ_s) = - ϵ_s (ṽ_s^2 - ṽ_s^2/2) exp(-ṽ_s^2) ,we haveΛ_s(ṽ_s,ṽ_s) = -3 ϵ_s ṽ_s ṽ_s exp(-ṽ_s^2), Ξ_s(ṽ_s,ṽ_s) = -3 ϵ_s/ω̃_s ṽ_s ṽ_s exp(-ṽ_s^2) + O(ϵ_s) .This gives (P_s)_xx =6/√() ϵ_s ∑_n=-∞^∞ [ n^2/k_^2 ρ̃_s^2∫_C_L ṽ_s exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) ] ,(P_s)_xy = 6 i/√() ϵ_s ∑_n=-∞^∞ [ n/k_ ρ̃_s ∫_C_L ṽ_s exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^2 J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) exp(-ṽ_s^2) ] ,(P_s)_xz = 6/√() ϵ_s ∑_n=-∞^∞ [ n/k_ ρ̃_s∫_C_L ṽ_s^2 exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) ] ,(P_s)_yx = (P_s)_xy (P_s)_yy = 6/√() ϵ_s ∑_n=-∞^∞ [ ∫_C_L ṽ_s exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^3 J_n'(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2)] ,(P_s)_yz = -6 i/√() ϵ_s ∑_n=-∞^∞ [ ∫_C_L ṽ_s^2 exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s^2 J_n(k_ ρ̃_s ṽ_s) J_n'(k_ ρ̃_s ṽ_s) exp(-ṽ_s^2)] , (P_s)_zx = (P_s)_xz , (P_s)_zy = -(P_s)_yz ,(P_s)_zz = 6/√() ϵ_s { ∑_n=-∞^∞ [ ∫_C_L ṽ_s^3 exp(-ṽ_s^2) d ṽ_s/ṽ_s-ζ_sn . . ×∫_0^∞ d ṽ_s ṽ_s J_n(k_ ρ̃_s ṽ_s)^2 exp(-ṽ_s^2) ] - ∫_-∞^∞ d ṽ_s ṽ_s^2 ∫_0^∞ d ṽ_s ṽ_s exp(-ṽ_s^2) } . Again using the Bessel-function identities (<ref>), and the identities (<ref>) and (<ref>a)applicable to the plasma dispersion function, the dielectric tensor's elements become (P_s)_xx = 3ϵ_s ∑_n=-∞^∞ n^2/k_^2 ρ̃_s^2[1+ζ_sn Z(ζ_sn)] exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(P_s)_xy = 3 iϵ_s/2 ∑_n=-∞^∞ n[1+ζ_sn Z(ζ_sn)] ×exp(-k_^2 ρ̃_s^2/2) [I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(P_s)_xz = 3 ϵ_s ∑_n=-∞^∞ n/k_ ρ̃_s ζ_sn [1+ζ_sn Z(ζ_sn)] exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(P_s)_yx = (P_s)_xy ,(P_s)_yy = 3/2 ϵ_s∑_n=-∞^∞ [ 1+ ζ_sn Z(ζ_sn) ] ×exp(-k_^2 ρ̃_s^2/2) [( 2 n^2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2) I_n(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2 I_n'(k_^2 ρ̃_s^2/2) ] ,(P_s)_yz = -3 iϵ_s /2 ∑_n=-∞^∞ k_ ρ̃_s ζ_sn [1+ζ_sn Z(ζ_sn)] ×exp(-k_^2 ρ̃_s^2/2)[I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(P_s)_zx = (P_s)_xz , (P_s)_zy = -(P_s)_yz ,(P_s)_zz = 3 ϵ_s ∑_n=-∞^∞ ζ_sn^2 [1+ζ_sn Z(ζ_sn)] exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) . §.§.§ Dielectric tensor in low-frequency limitAs with the CE temperature-gradient term, under the ordering k_ρ_s ∼ k_ρ_s ∼ 1, the expressions (<ref>) can be approximated by theleading-order term of the expansion of P_s in the low-frequency limit ω̃_s≪1. 
Namely, we haveP_s ≈P_s^(0) + O(ω̃_s^2),where (P_s^(0))_xx =3 ϵ_s∑_n=-∞^∞ n^2/k_^2 ρ̃_s^2[1-n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(P_s^(0))_xy = 3 i ϵ_s /2 ∑_n=-∞^∞ n[1-n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) [I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(P_s^(0))_xz = -3 ϵ_s∑_n=-∞^∞ n^2/k_ |k_| ρ̃_s^2 [1-n/|k_| ρ̃_sZ(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) ,(P_s^(0))_yx = (P_s^(0))_xy ,(P_s^(0))_yy = 3/2 ϵ_s∑_n=-∞^∞ [1-n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s) ] ×exp(-k_^2 ρ̃_s^2/2) [( 2 n^2/k_^2 ρ̃_s^2+k_^2 ρ̃_s^2) I_n(k_^2 ρ̃_s^2/2) -k_^2 ρ̃_s^2 I_n'(k_^2 ρ̃_s^2/2) ] ,(P_s^(0))_yz = 3 i ϵ_s/2 ∑_n=-∞^∞ n k_/|k_| [1-n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] ×exp(-k_^2 ρ̃_s^2/2)[I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ],(P_s^(0))_zx = (P_s^(0))_xz , (P_s^(0))_zy = -(P_s^(0))_yz ,(P_s^(0))_zz = 3 ∑_n=-∞^∞ n^2/k_^2 ρ̃_s^2 [1-n/|k_| ρ̃_s Z(-n/|k_| ρ̃_s)] exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) .In this calculation, we have utilised the approximation ζ_sn≈ - n/|k_|ρ̃_s. Similarly to the Maxwellian case, we can use the Bessel-function-summation identities(<ref>) and the symmetry properties of the plasma dispersionfunction with a real argument to show that (P_s^(0))_xx =3 ϵ_s{ 1/2 +exp(-k_^2 ρ̃_s^2/2) ∑_n=-∞^∞ n^3/|k_| k_^2 ρ̃_s^3 [ Z(n/|k_| ρ̃_s) ]I_n(k_^2 ρ̃_s^2/2) }= ϵ_s W(|k_| ρ̃_s,k_ ρ̃_s) ,(P_s^(0))_xy = 3√() ϵ_sexp(-k_^2 ρ̃_s^2/2) ×∑_n=1^∞n^2/|k_| ρ̃_sexp(-n^2/k_^2 ρ̃_s^2) [I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ] = -ϵ_s X(|k_| ρ̃_s,k_ ρ̃_s) ,(P_s^(0))_xz = -3 ϵ_s{ k_/2 |k_| +exp(-k_^2 ρ̃_s^2/2) ×∑_n=-∞^∞ n^3/k_ k_^2 ρ̃_s^3 [ Z(n/|k_| ρ̃_s) ]I_n(k_^2 ρ̃_s^2/2) }= -k_/|k_| ϵ_sW(|k_| ρ̃_s,k_ ρ̃_s) ,(P_s^(0))_yx = (P_s^(0))_xy ,(P_s^(0))_yy = 3/2 ϵ_s { 1 + exp(-k_^2 ρ̃_s^2/2) ∑_n=-∞^∞ 2 n^3/|k_| k_^2 ρ̃_s^3 [ Z(n/|k_| ρ̃_s) ]I_n(k_^2 ρ̃_s^2/2) + k_^2 ρ̃_s^2exp(-k_^2 ρ̃_s^2/2) ∑_n=-∞^∞ n/|k_| ρ̃_s ×[ Z(n/|k_| ρ̃_s) ][ I_n(k_^2 ρ̃_s^2/2) - I_n'(k_^2 ρ̃_s^2/2) ] } ,= ϵ_s Y(|k_| ρ̃_s,k_ ρ̃_s) ,(P_s^(0))_yz = 3√() ϵ_s exp(-k_^2 ρ̃_s^2/2) ×∑_n=1^∞k_ n^2/k_^2 ρ̃_sexp(-n^2/k_^2 ρ̃_s^2) [I_n'(k_^2 ρ̃_s^2/2)-I_n(k_^2 ρ̃_s^2/2) ] = -k_/|k_| ϵ_s X(|k_| ρ̃_s,k_ ρ̃_s) ,(P_s^(0))_zx = (P_s^(0))_xz , (P_s^(0))_zy = -(P_s^(0))_yz ,(P_s^(0))_zz = 3 ϵ_s { k_^2/2 k_^2 +exp(-k_^2 ρ̃_s^2/2) ∑_n=-∞^∞ n^3/|k_|^3 ρ̃_s^3 [ Z(n/|k_| ρ̃_s) ]I_n(k_^2 ρ̃_s^2/2) }= k_^2/k_^2 ϵ_s W(|k_| ρ̃_s,k_ ρ̃_s) ,where the functions W(x,y), Y(x,y) and X(x,y) aredefined byW(x,y) ≡3/2 + 3/xy^2 exp(-y^2/2) ∑_m = -∞^∞ m^3 Z(m/x) I_m(y^2/2) , X(x,y) ≡3 √()/x exp(-y^2/2) ∑_m=1^∞ m^2 [ I_m(y^2/2)- I_m'(y^2/2)] exp(-m^2/x^2) , Y(x,y) ≡W(x,y) - 3/2 y^2 G(x,y)/x.§.§.§ Asymptotic limits of P_s^(0) As we have done for the other special functions defined in this paper, in this appendix weprovide asymptotic expressions in the limits where x and yare very small or large for the special functions W(x,y), X(x,y) and Y(x,y)defined in (<ref>). These limits again correspond to parallel and perpendicularwavenumbers that are very small or very large with respect to the inverse Larmor radius of species s. Considering various asymptotic limits in a systematic fashion, we find* x ∼ 1, y ≪ 1:W(x,y) = [3/2 +3/2 x Z(1/x) ] [1+O(y^2) ], X(x,y) = -3 √()/2 x exp(-1/x^2) [1+O(y^2) ] , Y(x,y) =[3/2 +3/2 x Z(1/x) ] [1+O(y^2) ]. 
* x, y ≫ 1:W(x,y) = 3 x^2 (x^2-y^2)/2 (x^2+y^2)^2 [1+O(1/x^2+y^2) ] , X(x,y) = 3 √() x^2 (y^2-2 x^2)/4 (x^2+y^2)^5/2 [1+O(1/x^2+y^2) ] , Y(x,y) = 3 x^2/2(x^2 + y^2) [1+O(1/x^2+y^2) ] .* x ≪ 1, y ∼ 1:W(x,y) = -3 x^2/2 y^2 [1-exp(-y^2/2) I_0(y^2/2)] [1+O(x^2) ] , X(x,y) = 3 √()/x exp(-y^2/2) [I_0(y^2/2)-I_1(y^2/2)]×exp(-1/x^2) {1+O[exp(-3/x^2) ] } , Y(x,y) = 3/2 y^2 exp(-y^2/2) [I_0(y^2/2)- I_1(y^2/2)] [1+O(x^2) ]. * x, y ≪ 1:W(x,y) = -3/4x^2 [1+O(x^2,y^2 ) ], X(x,y) = 3 √()/x exp(-1/x^2) {1+O[exp(-3/x^2),y^2 ] } , Y(x,y) = [3/2 y^2 -3/4 x^2 -9/8(x^4- 2/3 x^2 y^2 + y^4) ]×[1+O(x^6, x^4 y^2, x^2 y^4, y^6 ) ].* x ≪ 1, y ≫ 1:W(x,y) =-3 x^2/2 y^2 [1+O(x^2,1/y^2) ] , X(x,y) =3/x y^3 exp(-1/x^2) {1+O[exp(-3/x^2),1/y^2 ] } , Y(x,y) = 3/2 √() y [1+O(x^2,1/y^2) ]. § DENSITY PERTURBATIONS FOR LOW-FREQUENCY MODESIn this appendix, we derive an expression for the (Fourier-transformed) perturbation of number densityδ n_s of species s associated with a low-frequency mode, in terms of the expanded terms of the dielectric tensor 𝔈_s = ω̃_s𝔈_s^(0) + ω̃_s^2 𝔈_s^(1) + … of species s and the perturbed electric field, δE; we will show that δ n_s is, in fact, independentof 𝔈_s^(0). We then derive an expression for the perturbed density of all sub-ion-Larmorscale (k ρ_i ≫ 1), low-frequency modes.§.§ Derivation of general expressions We begin with the continuity equation (<ref>a), which describes thetime evolution of the density of species s in terms of itself and the bulkvelocity of the same species. For any small-amplitude perturbation (with perturbed density δ n_s and bulk velocity δV_s) of some (much more slowly evolving) quasi-equilibrium state (with mean density n_s0≫δ n_s and bulk velocity V_s0≫δV_s), viz., n_s = n_s0 + δ n_s , V_s = V_s0 + δV_s,the continuity equation governing that perturbation then becomes∂δ n_s/∂ t + n_s0δV_s = 0 .Assuming the perturbation has the form δn_s = δn_s exp{i(k r - ωt)} ,δV_s= δV_s exp{i(k r - ωt)} ,we deduce from (<ref>) thatδ n_s = n_s0kδV_s/ω.The perturbed velocity δV_s can be written in terms ofthe dielectric tensor of species s using Ohm's law (<ref>) and(<ref>):δV_s = -iω/4Z_s e n_s0𝔈_s δE,whence, by way of (<ref>), δ n_s = -i/4Z_s ek𝔈_s δE.Finally, we note that the symmetries (<ref>) of 𝔈_s^(0)imply that it does not contribute to the right-hand side of(<ref>), which implies in turn thatδ n_s ≈ -iω̃_s^2/4Z_s ek𝔈_s^(1)δE.Thus, for low-frequency modes, δ n_s is a function of the electricfield and 𝔈_s^(1), but notof 𝔈_s^(0).We note that the condition (<ref>) implies that, forlow-frequency modes, quasi-neutrality is maintained: ∑_s Z_s δ n_s = -i/4ek𝔈_s δE= 0 .Thus, in a two-species plasma, the ion number density associated with a perturbation can be calculated if the electron number densityis known, and visa versa.§.§ Special case: sub-ion-Larmor scale modes in a two-species plasmaIn the special case of a two-species plasma whose characteristic parallel wavenumber satisfies k_ρ_i ≫ 1,a particularly simple expression for the perturbed number densities of ions(and electrons) can be derived: the Boltzmann response. 
This arises because the ion dielectric tensor 𝔈_i is unmagnetised, and so takes the simple form (valid for arbitrary ω̃_i = ω/k v_thi) that was derived in appendix <ref>:𝔈_i ≈𝔈_i^ (UM) = ω_pi^2/ω^2ω̃_i{(I-k̂k̂) Z(ω̃_i) + 2 [ω̃_i + ω̃_i^2 Z(ω̃_i) ]k̂k̂}.It follows thatk𝔈_i δE≈ω_pi^2/ω^22 ω̃_i^2 [1+ ω̃_i Z(ω̃_i) ] kδE.Now assuming that ω̃_i ≪ 1, it follows thatk𝔈_i^(1)δE≈2 ω_pi^2/ω^2k_^2/k^2kδE.Expression (<ref>) with s = i then givesδ n_i ≈ -Z e i n_i0/T_ik̂δE/k.Finally, introducing the electrostatic potential φ, whose Fouriertransform is related to the electrostatic component of the electric field viaφ̂ =ik̂δE/k,we deduce that δ n_i ≈ -Z e i n_i0/T_iφ̂,and δ n_e ≈ -Z e i n_e0/T_iφ̂,where we have used the quasi-neutrality relation n_e0 = Z n_i0 for theequilibrium state.§ CALCULATING THE ELECTROSTATIC FIELD FROM THE TRANSVERSE ELECTRIC FIELD In appendix <ref>, it was shown that for any functionwith a small anisotropy,𝔈_s^(0)k̂ = 0 ,which implies that the leading-order terms (in ω̃_s≪ 1) of the dielectric tensor areinsufficient to determine the electrostatic field. Todo this, we must go to the next order in ω̃_s≪ 1.To illustrate how such a calculation is done,in this appendix, we derive an expression for the electrostatic field component k̂δE in terms of thetransverse electric field δE_T and special functions when the underlying particle distribution function is Maxwellian. To achieve this aim, we first derive a relation between the components of the electricfield in the coordinate basis{x̂,ŷ,ẑ}. We beginwith the consistency condition (<ref>) appropriate for non-relativisticelectromagnetic fluctuations:k𝔈δE = 0 .Writing k̂, 𝔈 and δEin the basis{x̂,ŷ,ẑ},this becomes(k_𝔈_xx + k_𝔈_xz) δE_x + (k_𝔈_xy - k_𝔈_yz) δE_y + (k_𝔈_xz + k_𝔈_zz) δE_z = 0 .Now considering the case of fluctuations that satisfy ω̃_s≪ 1 for all particle speciess, and expanding the components of the dielectric in ω̃_s≪ 1, we find(k_𝔈_xx^(1) + k_𝔈_xz^(1)) δE_x + (k_𝔈_xy^(1) - k_𝔈_yz^(1)) δE_y + (k_𝔈_xz^(1) + k_𝔈_zz^(1)) δE_z = O(ω̃_s^3) ,where 𝔈^(1) = ∑_sω̃_s^2 𝔈_s^(1).From (<ref>), we have k_ 𝔈_xx^(1) + k_ 𝔈_xz^(1)= - ∑_s 2 k_ ω_ps^2 ω̃_s^2/ω^2 ∑_m=-∞^∞ m/k_ ρ̃_s Z(m/|k_| ρ̃_s) ×exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2) , k_ 𝔈_xy^(1) - k_ 𝔈_yz^(1)= ∑_s √() k_ ω_ps^2 ω̃_s^2/ω^2∑_m=-∞^∞ k_ ρ̃_s exp(-m^2/k_^2 ρ̃_s^2) ×exp(-k_^2 ρ̃_s^2/2) [I_m'(k_^2 ρ̃_s^2/2)-I_m(k_^2 ρ̃_s^2/2) ], k_ 𝔈_xz^(1) + k_ 𝔈_zz^(1) = ∑_s 2 k_ ω_ps^2 ω̃_s^2/ω^2 [1 + ∑_m=-∞^∞ m/|k_| ρ̃_s Z(m/|k_| ρ̃_s) . . ×exp(-k_^2 ρ̃_s^2/2) I_m(k_^2 ρ̃_s^2/2) ] .Thus, we have the following relationship between δ E_x, δ E_yand δ E_z:∑_s k_Ds^2/2 k_^2{-L(|k_| ρ̃_s,k_ρ̃_s) δ E_x+ N(|k_| ρ̃_s,k_ρ̃_s) δ E_y +[2+k_/k_L(|k_| ρ̃_s,k_ρ̃_s)] δ E_z} = 0 ,where k_Ds is the Debye wavenumber (<ref>), and L(x,y) and N(x,y) were defined previously by (<ref>). Using the identitiesδE_x = k_/k δE_1 + k_/kδE_3 , δE_y = δE_2 , δE_z = -k_/k δE_1 + k_/kδE_3 ,we can rearrange (<ref>) to give1/k_ k(∑_s k_Ds^2 ) δ E_3=∑_s k_Ds^2/2 k_^2{[k/k_ L(|k_| ρ̃_s,k_ρ̃_s) + 2 k_/k] δ E_1- N(|k_| ρ̃_s,k_ρ̃_s) δ E_2}.Thus, the electrostatic field is related to the transverse field byk̂δE=(∑_s Z_s T_e/T_s)^-1∑_s Z_s T_e/T_s{[k^2/2 k_^2 L(|k_| ρ̃_s,k_ρ̃_s) + k_/k_] δ E_1- k/2 k_ N(|k_| ρ̃_s,k_ρ̃_s) δ E_2}. § METHODOLOGY FOR CHARACTERISING CET MICROINSTABILITIESIn this appendix, we describe our method for calculating the real frequencies and growth rates of microinstabilities driven by the CE electron- and ion-temperature-gradient,and electron-friction terms when the Krook collisionoperator is assumed. 
The method follows that outlined in section <ref>: that is, motivated by the considerations of section <ref>, we assume that all significant CET microinstabilities are low frequency (ω≪ k_ v_ths for at least one particle species), and derive algebraic dispersion relations of such microinstabilities [a particular example of which is given by (<ref>)]. The growth rate of CET microinstabilities [and, therefore, the stability of the electron and ion CE distribution functions (<ref>a) and (<ref>b)] as a function of their parallel and perpendicular wavenumbers k_ and k_⊥ is assessed by solving this dispersion relation for the complex frequency ω, and then evaluating its imaginary part. As we explained in section <ref>, to construct the algebraic, low-frequency dispersion relation for particular forms of CE distribution function for each particle species s, we must evaluate its (leading-order) non-Maxwellian contribution to the dielectric tensor, P_s≈P_s^(0) [see (<ref>) and (<ref>) for the precise relation of this quantity to the dielectric tensor 𝔈_s]. This is done for the CE electron-friction term in appendix <ref>, and for the CE temperature-gradient terms in appendix <ref>. We then deduce the algebraic dispersion relations of CE electron-temperature-gradient-driven microinstabilities in appendix <ref>, and of CE ion-temperature-gradient-driven microinstabilities in appendix <ref>. Within these two appendices, respectively, we also present derivations of the (further) simplified dispersion relations for the parallel CET whistler instability (appendix <ref>), the parallel CET slow-hydromagnetic-wave instability (appendix <ref>), and the CET long-wavelength KAW instability (appendix <ref>), from which the frequencies and growth rates of these instabilities that are stated in section <ref> are calculated.

§.§ Dielectric response of CE electron-friction term

We first consider the CE electron-friction term when evaluating P_e^(0), defined in (<ref>). We showed in appendix <ref> that, when a Krook collision operator was assumed, if η_e^T = η_i = 0, then [see (<ref>)] (P_e^(0))_11 = η_e^R/2 (M_e^(0))_11 , (P_e^(0))_12 = η_e^R/2 (M_e^(0))_12 , (P_e^(0))_21 = η_e^R/2 (M_e^(0))_21 , (P_e^(0))_22 = η_e^R/2 (M_e^(0))_22. It follows that the dispersion relation of all plasma modes is identical to that in a Maxwellian plasma, only with shifted complex frequencies ω̃_e^* ≡ ω̃_e + η_e^R/2. Since ℑ(ω̃_e) < 0 for all modes in a Maxwellian plasma, we conclude that ℑ(ω̃_e^*) < 0 also, and hence the CE electron-friction term cannot drive any microinstabilities when a Krook collision operator is employed: instead, it merely modifies the real frequency of the waves. Thus, when characterising CET microinstabilities, we henceforth ignore the CE electron-friction term, as well as the electron-ion-drift term (viz., η_e^R = η_e^u = 0).

§.§ Dielectric response of CE temperature-gradient terms

Now consider the CE temperature-gradient terms. It is shown in appendix <ref> that P_s^(0) is given by (P_e^(0))_11 = i η_e^T k^2/k_^2 I(k_ ρ̃_e,k_ ρ̃_e) , (P_e^(0))_12 = - i η_e^T k/k_ J(k_ ρ̃_e,k_ ρ̃_e) , (P_e^(0))_21 = i η_e^T k/k_ J(k_ ρ̃_e,k_ ρ̃_e) , (P_e^(0))_22 = i η_e^T K(k_ ρ̃_e,k_ ρ̃_e), where the special functions I(x,y), J(x,y) and K(x,y) are defined by (<ref>). Note that ρ̃_e < 0, by definition.
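For numerical work it is convenient to assemble this contribution as a 2×2 block acting on (δE_1, δE_2). A minimal sketch is given below (Python; the helper callables I_fun, J_fun and K_fun are assumed to be user-supplied implementations of the special functions I, J and K, for example truncated versions of their defining sums, and all names are ours).

```python
import numpy as np

def P_e0_block(eta_T, k, kpar, kperp, rho_tilde_e, I_fun, J_fun, K_fun):
    """2x2 (e_1, e_2) block of P_e^(0) as written above.

    I_fun, J_fun, K_fun : assumed implementations of the special functions I, J, K
    rho_tilde_e         : the (negative, by definition) normalised electron Larmor radius
    """
    x, y = kpar * rho_tilde_e, kperp * rho_tilde_e
    P11 = 1j * eta_T * (k / kperp)**2 * I_fun(x, y)
    P12 = -1j * eta_T * (k / kperp) * J_fun(x, y)
    P22 = 1j * eta_T * K_fun(x, y)
    return np.array([[P11, P12], [-P12, P22]])   # (P_e^(0))_21 = -(P_e^(0))_12
```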
The contribution P_i^(0) associated with the CEion-temperature-gradient terms is given by(P_i^(0))_11 = i η_i k^2/k_^2 I(k_ ρ_i,k_ ρ_i) , (P_i^(0))_12 = - i η_i k/k_ J(k_ ρ_i,k_ ρ_i) , (P_i^(0))_21 = i η_i k/k_J(k_ ρ_i,k_ ρ_i) , (P_i^(0))_22 = i η_i K(k_ ρ_i,k_ ρ_i). §.§ Approximate dispersion relation of CE electron-temperature-gradient-driven microinstabilitiesWe first consider microinstabilities for which ω̃_e = ω/k_ v_the∼η_e^T. It follows that ω̃_i = ω/k_ v_thi∼η_e^T μ_e^-1/2≫η_i. Therefore, the CE ion-temperature-gradient term is irrelevant for such instabilities, and we need consider only the electron-temperature-gradient term.We also assume that the Maxwellian contribution to the dielectric tensor, M_i,can be ignored for such microinstabilities– the validity of this assumption is discussed at the end of this section. The dispersion relation for microinstabilities under the ordering ω̃_e∼η_e^T ∼ 1/β_eis then given by (<ref>), with M_e^(0) and P_e^(0)substituted for by (<ref>) and (<ref>),respectively:[ω̃_e F(k_ρ̃_e,k_ρ̃_e) . +. η_e^T I(k_ρ̃_e,k_ρ̃_e)+ i k_^2 d_e^2] × [ω̃_e H(k_ρ̃_e,k_ρ̃_e) + η_e^T K(k_ρ̃_e,k_ρ̃_e)+i k^2 d_e^2]+[ω̃_e G(k_ρ̃_e,k_ρ̃_e) + η_e^T J(k_ρ̃_e,k_ρ̃_e)]^2 = 0. We remind the reader that we have ordered k^2 d_e^2 ∼η_e^T and k ρ_e ∼ 1. Noting that β_e = ρ_e^2/d_e^2, we can rewrite the skin-depth terms as follows: k_^2 d_e^2 = k_^2 ρ_e^2/β_e,k^2 d_e^2 = k^2 ρ_e^2/β_e. This allows for the dispersion relation (<ref>) to be arranged as a quadratic in the complex variable ω̃_eβ_e: A_T(k_ρ_e,k_ρ_e) ω̃_e^2 β_e^2 +B_T(k_ρ_e,k_ρ_e) ω̃_eβ_e + C_T(k_ρ_e,k_ρ_e) = 0 , whereA_T(k_ρ_e,k_ρ_e) = F_e H_e + G_e^2 ,B_T(k_ρ_e,k_ρ_e) =η_e^T β_e (F_e K_e +H_e I_e + 2G_e J_e) + i(F_e k^2 ρ_e^2 + H_e k_^2 ρ_e^2) ,C_T(k_ρ_e,k_ρ_e) =(η_e^T β_e)^2 (I_e K_e + J_e^2) - k^2 k_^2 ρ_e^4 +iη_e^T β_e(I_e k^2 ρ_e^2 + K_e k_^2 ρ_e^2) , and F_e ≡ F(k_ρ̃_e,k_ρ̃_e), G_e ≡ G(k_ρ̃_e,k_ρ̃_e), etc. Solving (<ref>) gives two roots; restoring dimensions to the complex frequency, they areω = Ω_e/β_e k_ρ_e -B_T±√(B_T^2 + 4A_TC_T)/2 A_T,recovering (<ref>).For a given wavenumber, we use (<ref>) to calculate the growth ratesof the perturbations – and, in particular, to see if positive growth ratesare present. If they are, it is anticipated that they will have typical sizeγ∼Ω_e/β_e ∼η_e^T Ω_e (or ω̃_e∼ 1/β_e ∼η_e^T).When deriving (<ref>), we assumed that neglecting the Maxwellian ion response was legitimate. It is clear thatif ω̃_i≫ 1, then thermal ions are effectively static toelectromagnetic perturbations, and so their contribution M_i to the dielectric tensor can be ignored.In terms of a condition onη_e^T, the scaling η_e^T∼ω̃_e gives η_e^T≫μ_e^1/2, so this regime is valid for sufficientlylarge η_e^T. For ω̃_i≲ 1, it is not immediatelyclear in the same way that the ion contribution to the dielectric tensor issmall. However, having deduced the typical magnitude of the complex frequency of perturbationswhilst ignoring ion contributions, we are now able to confirm that ourneglect of M_i was justified. Since k ρ_e ∼ 1 under the ordering assumed when deriving (<ref>), we conclude that the Maxwellian ion response isunmagnetised: k ρ_i ≫ 1. As a consequence, it can beshown (see appendix <ref>) that the transverse components of M_iare given by(M_i)_11 = (M_i)_22 = ω̃_i Z(ω̃_i) , (M_i)_12 = (M_i)_21 = 0 ,where ω̃_i≡ω/k v_thi = k_ω̃_i/k. 
Then, estimating the size of the neglected Maxwellian ion contribution to the dielectric tensor (assuming k_∼ k) as compared with the equivalent electron contribution, we find(𝔈_i)_11/(𝔈_e^(0))_11∼(𝔈_i)_22/(𝔈_e^(0))_22∼μ_e ω̃_i/ω̃_e|Z(ω̃_i)| ∼μ_e^1/2 |Z(ω̃_i)|,where we have used 𝔈_i = μ_e M_i and 𝔈_e^(0) = ω̃_eM_e^(0) +P_e^(0) (see section <ref>). Since |Z(z)| ≲ 1 for all z with positive imaginary part <cit.>, we conclude that the ion contribution to the dielectric tensoris indeed small for unstable perturbations, irrespective of the value of ω̃_i, and so itsneglect was valid. §.§.§ Derivation of frequency and growth rate of the parallel CET whistler instabilityThe dispersion relation ofunstable whistler waves with their wavevector parallel to B_0 is obtainedby taking the subsidiary limit k_ρ_e → 0 in (<ref>), and substituting ρ̃_e = - ρ_e:[ω̃_eβ_e √()exp(-1/k_^2 ρ_e^2) + η_e^T β_e √()/2(1/k_^2 ρ_e^2-1/2) exp(-1/k_^2 ρ_e^2) + i k_^2 ρ_e^2]^2 +{ω̃_eβ_eZ(1/k_ρ_e) + η_e^T β_e [1/2 k_ρ_e + (1/2 k_^2 ρ_e^2 - 1/4)Z(1/k_ρ_e)]}^2 = 0 .This can be factorised to give two roots; separating the complex frequency into real and imaginary parts via ω = ϖ + iγ, and defining ϖ̃_e≡ϖ/k_ v_the, γ̃_e≡γ/k_ v_the,we haveϖ̃_e β_e =η_e^T β_e (1/2 k_^2 ρ_e^2-1/4)+ (η_e^T β_e/2 k_ ρ_e - k_^2 ρ_e^2) Z(1/k_ ρ_e) /[Z(1/k_ ρ_e)]^2 + exp(-2/k_^2 ρ_e^2), γ̃_e β_e =√()(η_e^T β_e/2 k_ ρ_e - k_^2 ρ_e^2) /[Z(1/k_ ρ_e)]^2 exp(1/k_^2 ρ_e^2)+ exp(-1/k_^2 ρ_e^2), whence (<ref>) follows immediately.§.§ Approximate dispersion relation of CE ion-temperature-gradient-driven microinstabilities We now explain the method used to characterise microinstabilities driven by the ion-temperature-gradient term. For these, we set the electron-temperature-gradient terms to zero, η_e^T = 0, assumethe ordering ω̃_i∼η_i, and anticipate that such microinstabilities will occur on ion rather than electron scales, i.e., k ρ_i ∼ 1. Under the ordering ω̃_i∼η_i ≪ 1,it follows that ω̃_e∼μ_e^1/2ω̃_i≪ 1;therefore, we can use (<ref>)to quantity the contribution of Maxwellian electrons to the total dielectric tensor.However, since k ρ_i ∼ 1, we must consider the matrix M_e^(0) in the limit k_ρ_e ∼ k_ρ_e ∼μ_e^1/2≪ 1.Asymptotic forms of (<ref>) appropriate for this limit are given by (<ref>), and lead to[As noted in section <ref>, for k_ρ_e ≪ 1, the approximation (M_e)_11≈ω̃_e (M_e^(0))_11in fact breaks down, on account of (M_e^(0))_11 becomingexponentially small in k_ρ_e ≪ 1. However, it turns out that when k_ρ_i ∼ k_ρ_i ∼1, (M_e)_11≪ (M_i)_11, and so this subtlety can be ignored for the CE ion-temperature-gradient-driven instabilities.](M_e^(0))_11 = O[exp(-1/k_^2 ρ_e^2)] , (M_e^(0))_12 ≈-i k/k_ [k_ ρ_e + O(k^3 ρ_e^3)], (M_e^(0))_21 = i k/k_[k_ ρ_e + O(k^3 ρ_e^3)] , (M_e^(0))_22 = i [√() k_^2 ρ_e^2 + O(k_^4 ρ_e^4)] .We now combine (<ref>) with (<ref>) for M_i^(0)and (<ref>) for P_i^(0), and findthe dispersion relation for CE ion-temperature-gradient-driven microinstabilities by substituting the dielectric tensor (<ref>) into (<ref>): [ω̃_i F(k_ρ_i,k_ρ_i) . +. η_i I(k_ρ_i,k_ρ_i)+ i k_^2 d_i^2] × [ω̃_i H(k_ρ_i,k_ρ_i) + η_i K(k_ρ_i,k_ρ_i)+i k^2 d_i^2]+[ω̃_i[G(k_ρ_i,k_ρ_i) + k_ρ_i]+ η_i J(k_ρ_i,k_ρ_i)]^2 = 0,where d_i = c/ω_pi is the ion inertial scale, and we have ordered η_i ∼ 1/β_i ∼ k^2 d_i^2.This dispersion relation isvery similar to (<ref>), save for the addition of one term [the middle term in the third line of (<ref>)] providing a linear coupling between the δE_1 andδE_2 components of the electric fieldperturbation. 
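The plasma dispersion function Z appearing in these expressions (and in those that follow) can be evaluated numerically via the Faddeeva function. The sketch below is purely illustrative and is not part of the original derivation; it assumes NumPy and SciPy are available, and the test argument is an arbitrary placeholder.

```python
# Illustrative numerical sketch (not from the paper): the Fried-Conte plasma
# dispersion function Z(z) = i*sqrt(pi)*w(z), with w the Faddeeva function,
# which enters the low-frequency dispersion relations quoted above.
import numpy as np
from scipy.special import wofz

def plasma_dispersion(z):
    """Plasma dispersion function Z(z) for real or complex argument z."""
    return 1j * np.sqrt(np.pi) * wofz(z)

def plasma_dispersion_deriv(z):
    """Z'(z) = -2*(1 + z*Z(z)), often needed in the same expansions."""
    return -2.0 * (1.0 + z * plasma_dispersion(z))

# Placeholder evaluation at the argument 1/(k_par*rho) that appears in the
# parallel-wavenumber limits above; k_par*rho = 0.8 is an arbitrary test value.
print(plasma_dispersion(1.0 / 0.8))
```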
Similarly to (<ref>), the dispersion relation (<ref>) can be written as a quadratic inω̃_iβ_i, which is then solved to give the followingexpression for the complex frequency:ω = Ω_i/β_i k_ρ_i -B̃_T±√(B̃_T^2 + 4Ã_TC̃_T)/2 Ã_T,where Ã_T= F_i H_i + [G_i+ k_ρ_i]^2 ,B̃_T=η_i β_i [F_i K_i +H_i I_i + 2J_i (G_i+ k_ρ_i)] + i(F_i k^2 ρ_e^2 + H_i k_^2 ρ_e^2) ,C̃_T=(η_i β_i)^2 (I_i K_i + J_i^2) - k^2 k_^2 ρ_e^4 +iη_i β_i(I_i k^2 ρ_e^2 + K_i k_^2 ρ_e^2) .This expression is the one that is used to evaluate the real frequencies and growth rates of ion-scale CET microinstabilities in sections <ref>.§.§.§ Derivation of frequency and growth rate of the parallel CET slow-hydromagnetic-wave instabilityWe obtain the dispersion relation of the parallel slow-wave instability by considering the general dispersionrelation (<ref>) of CE ion-temperature-gradient-driven instabilities in the limit k_⊥→ 0:[ω̃_iβ_i √()exp(-1/k_^2 ρ_i^2). +.η_i β_i √()/2(1/k_^2 ρ_i^2-1/2) exp(-1/k_^2 ρ_i^2) + i k_^2 ρ_i^2]^2 + {ω̃_iβ_i[Z(1/k_ρ_i) + k_ρ_i] . + . η_i β_i [1/2 k_ρ_i + (1/2 k_^2 ρ_i^2 - 1/4)Z(1/k_ρ_i)]}^2 = 0 .As before, this can be factorised to give two roots; for ω̃_i = ϖ̃_i + iγ̃_i [cf. (<ref>)], it follows thatϖ̃_i β_i =η_i β_i (1/2 k_^2 ρ_i^2-1/4)+ k_ ρ_i [Z(1/k_ ρ_i) + k_ ρ_i] (η_i β_i/4 - k_ ρ_i) /[Z(1/k_ ρ_i) + k_ ρ_i]^2 + exp(-2/k_^2 ρ_i^2), γ̃_i β_i =√() k_ ρ_i (η_i β_i/4 - k_ ρ_i) /[Z(1/k_ ρ_i) + k_ ρ_i]^2 exp(1/k_^2 ρ_i^2)+ exp(-1/k_^2 ρ_i^2). These can be rearranged to give (<ref>). §.§.§ Derivation of frequency and growth rate of the CET long-wavelength KAW instabilityIn the limit k_ρ_i ≪1, k_⊥ρ_i ∼ 1, the general dispersion relation (<ref>)of CE ion-temperature-gradient-driven instabilities becomes[ω̃_i (1 -ℱ_i ) - η_i/2𝒢_i]^2 +k_⊥^2 ρ_i^2/β_i[ i√()(ℱ_i+ √(μ_e Z^2/τ)) ω̃_i - 1/β_i + i√()η_i/2( 𝒢_i -1/2ℱ_i) ]= 0,where we remind the reader that ℱ_i = ℱ(k_⊥ρ_i), 𝒢_i = 𝒢(k_⊥ρ_i), with the functions ℱ(α) and 𝒢(α) being defined by (<ref>).Equation (<ref>) for the complex frequency of the CET KAWmodes in the main text is then derived by solving (<ref>) for ω̃_i = ω/k_v_thi.§ METHODOLOGY FOR CHARACTERISING CES MICROINSTABILITIESThis appendix outlines the method used to determine the growth ratesof microinstabilities driven by the CE electron- and ion-shear terms. Onceagain (cf. appendix <ref>), section <ref> presents the generalframework of our approach: determine a simplified algebraic dispersion relation satisfied by the (complex) frequencies ω of CES microinstabilities with parallel and perpendicular wavenumber k_ and k_⊥ under the assumption thatthey are low frequency [viz., ω≪ k_ v_ths; cf. (<ref>)], solve for ω, thencalculate the growth rate γ from its imaginary part (and the real frequencyϖ from its real part). To construct the dispersion relation, we firstneed to know the tensor P_s^(0) for a CE distribution function of the form (<ref>);this result is given in appendix <ref>. Then, inappendix <ref>, we determinean approximate quadratic dispersion relation for CES microinstabilities, show in appendix <ref> how that dispersion relation can be used in certain cases to evaluate the CES instability thresholdssemi-analytically, then demonstrate the significant shortcomings of the quadratic approximation in appendix <ref>. In appendix <ref>, we address these shortcomingsby constructing a revised quartic dispersion relation for CES microinstabilities. 
This quartic dispersion relationis then used to derive simplified dispersion relations for the various differentCES microinstabilities discussed in the main text: the mirror instability inappendix <ref>, the parallel (CES) whistler instability inappendix <ref>, the transverse instability in appendix <ref>, the electron mirror instability in appendix <ref>, theparallel, oblique and critical-line firehose instabilities in Appendicies<ref>, <ref>, and<ref>, the parallel and oblique electron firehose instabilities inAppendices <ref> and<ref>, the EST instability in appendix<ref>, and the whisper instability in appendix<ref>. Finally, in appendix <ref>, we derive the dispersion relation of the CET ordinary-mode instabilty– the one CES (or CET) microinstability that does not satisfy ω≪ k_ v_thsfor either electrons or ions (see section <ref>) – directly from the hot-plasma dispersionrelation.§.§ Dielectric response of CE shear termsFirst, we evaluate the elements of P_s^(0):(P_s^(0))_11 = ϵ_s k^2/k_^2 W(k_ ρ̃_s,k_ ρ̃_s) , (P_s^(0))_12 = - ϵ_s k/k_ X(k_ ρ̃_s,k_ ρ̃_s) , (P_s^(0))_21 = ϵ_s k/k_X(k_ ρ̃_s,k_ ρ̃_s) , (P_s^(0))_22 = ϵ_s Y(k_ ρ̃_s,k_ ρ̃_s), where the special functions W(x,y), Y(x,y) andX(x,y) are defined by (<ref>). These results are derived in appendix<ref>.§.§ Quadratic approximation to dispersion relation of CE shear-driven microinstabilities §.§.§ DerivationConsidering the relative magnitude of ω̃_i = ω/k_ v_thi andω̃_e = ω/k_ v_the≪ω̃_i, we observe that, unlike CET microinstabilities, CESmicroinstabilities satisfy the low-frequency condition (<ref>) for bothelectrons and ions. This claim holds because any microinstability involving the CE electron-shear term must satisfy ω̃_e∼ϵ_e ≪ (m_e/m_i)^1/2,where the last inequality arises from the scaling relation ϵ_e ∼ (m_e/m_i)^1/2ϵ_igiven by (<ref>d); thus, from the scaling relation (<ref>) with T_e = T_i, it follows that ω̃_i∼ϵ_e (m_i/m_e)^1/2∼ϵ_i ≪ 1.Therefore, it is consistent to expand both the Maxwellian electron and ion termsin ω̃_s≪ 1. We therefore initially approximate 𝔈 as follows:𝔈≈ω̃_e𝔈^(0)= ω_pe^2/ω^2(∑_s ω̃_sμ_s M_s^(0) + ∑_s μ_s P_s^(0)) ,where the expansion of M_s and P_s in ω̃_s, i.e.,M_s(ω̃_s,k) ≈ω̃_sM_s^(0)(k) , P_s(ω̃_s,k) ≈P_s^(0)(k) ,applies to both ion and electron species. By analogy to the derivation presented in section <ref>, this approximationgives rise to a simplified dispersion relation [cf. (<ref>)](ω̃_e𝔈_11^(0)-k^2 c^2/ω^2)(ω̃_e𝔈_22^(0)-k^2 c^2/ω^2)+(ω̃_e𝔈_12^(0))^2= 0 .We emphasise that here each component of 𝔈^(0) has both electron and ion contributions. Expressing ω̃_i = ω̃_eμ_e^-1/2 in (<ref>), (<ref>) can be written as [ω̃_e (M_e^(0) + μ_e^1/2M_i^(0))_11+ (P_e^(0) + μ_e^1/2P_i^(0))_11 - k^2 d_e^2]×[ω̃_e (M_e^(0) + μ_e^1/2M_i^(0))_22 + (P_e^(0) + μ_e^1/2P_i^(0))_22 - k^2 d_e^2] + [ ω̃_e (M_e^(0) + μ_e^1/2M_i^(0))_12 + (P_e^(0) + μ_e^1/2P_i^(0))_12]^2 = 0.Combining the expressions (<ref>) for P_s^(0) with (<ref>) for M_s^(0) and substituting M_s^(0) and P_s^(0) into (<ref>) gives[iω̃_e( F_e + μ_e^1/2 F_i ) +ϵ_e ( W_e + μ_e^1/2 W_i )- k_^2 d_e^2]×[ iω̃_e( H_e + μ_e^1/2 H_i ) + ϵ_e ( Y_e + μ_e^1/2 Y_i )- k^2 d_e^2] +[iω̃_e( G_e + μ_e^1/2 G_i ) +ϵ_e ( X_e + μ_e^1/2 X_i ) ]^2 = 0, where we have used ϵ_i = ϵ_e μ_e^-1/2. For brevity of notation, we have also defined F_s ≡ F(k_ρ̃_s,k_ρ̃_s), G_s ≡ G(k_ρ̃_s,k_ρ̃_s), and so on. 
Using (<ref>b) for the terms ∝ d_e^2 explicitly introduces a β_e dependence into (<ref>).After some elementary manipulations, we obtain the quadratic A_Sω̃_e^2 β_e^2 + i B_Sω̃_eβ_e - C_S = 0, where A_S = (F_e+μ_e^1/2 F_i)(H_e+μ_e^1/2 H_i) +(G_e+μ_e^1/2 G_i)^2 ,B_S = (H_e+μ_e^1/2 H_i)[k_^2 ρ_e^2 - ϵ_e β_e (W_e+μ_e^1/2 W_i)] - 2ϵ_e β_e (G_e+μ_e^1/2 G_i) (X_e+μ_e^1/2 X_i) +(F_e+μ_e^1/2 F_i)[k^2 ρ_e^2 - ϵ_e β_e (Y_e+μ_e^1/2 Y_i)] ,C_S = [k_^2 ρ_e^2 - ϵ_e β_e (W_e+μ_e^1/2 W_i)] [k^2 ρ_e^2 - ϵ_e β_e (Y_e+μ_e^1/2 Y_i)] + ϵ_e^2 β_e^2 (X_e+μ_e^1/2 X_i)^2. As before, this can be solved explicitly for the complex frequency:ω = Ω_e/β_e k_ρ_e - i B_S±√(-B_S^2 + 4A_SC_S)/2 A_S.From this expression, we can extract the real frequency ϖ and the growthrate γ explicitly. In the case when 4A_SC_S > B_S^2, we have two oppositelypropagating modes with the same growth rate: ϖ= ±Ω_e/β_e k_ ρ_e √(-B_S^2 + 4A_SC_S)/2 A_S, γ= Ω_e/β_e k_ ρ_e B_S/2 A_S. For 4A_SC_S < B_S^2, both modes are non-propagating, with distinct growth rates: γ = Ω_e/β_e k_ρ_e B_S±√(B_S^2 - 4A_SC_S)/2 A_S. §.§.§ Semi-analytic estimates of CES instability thresholds using quadratic approximation In the case of non-propagating modes whose growth rate is given by (<ref>),we can determine semi-analytic formulae for the thresholdsof any instabilities. This is done by noting that, atmarginal stability, ω̃_e = 0. Therefore, it follows from (<ref>) that C_ S =0, or, equivalently, [k_^2 ρ_e^2 - ϵ_e β_e (W_e+μ_e^1/2 W_i)] [k^2 ρ_e^2 - ϵ_e β_e (Y_e+μ_e^1/2 Y_i)] + ϵ_e^2 β_e^2 (X_e+μ_e^1/2 X_i)^2 = 0 .This is a quadratic in ϵ_e β_e which can be solved exactlyto give the threshold value of ϵ_e β_e as a function ofperpendicular and parallel wavenumber:ϵ_e β_e =1/2[(W_e+μ_e^1/2 W_i) (Y_e+μ_e^1/2 Y_i) + (X_e+μ_e^1/2 X_i)^2]^-1 ×(k^2 ρ_e^2 (W_e+μ_e^1/2 W_i) + k_^2 ρ_e^2 (Y_e+μ_e^1/2 Y_i)±{[k^2 ρ_e^2 (W_e+μ_e^1/2 W_i) + k_^2 ρ_e^2 (Y_e+μ_e^1/2 Y_i)]^2- 4 k_^2 k^2 ρ_e^4 [(W_e+μ_e^1/2 W_i) (Y_e+μ_e^1/2 Y_i) + (X_e+μ_e^1/2 X_i)^2]}^1/2) .Expression (<ref>) is used in sections <ref> and <ref> to evaluate the wavevector-dependent thresholds of the CES ion and electron firehose instabilities, respectively. §.§.§ Shortcomings of quadratic approximationIn contrast to quadratic approximations to the dispersion relations ofCET microinstabilities being sufficient to characterise all instabilities of note (see, e.g., appendix <ref>), not all CES microinstabilities are captured by the quadratic dispersion relation (<ref>), because there are important microinstabilities whose correct description requires keeping higher-order terms in the ω̃_s≪ 1 expansion. Themathematical reason for this is that some microinstabilities occur in wavenumber regimes where either k_ρ_i ≪ 1 and/or k_ρ_e ≪ 1.As a result, the issues raised in section <ref> regarding thecommutability of the ω_s≪ 1 and k_ρ_s ≪ 1 limits must be carefullyresolved. 
In appendix <ref>, it is shown that, if k_ρ_s ≪1/log(1/ω̃_s), then the dominant contributions to (M_s)_xx, (M_s)_xz, and (M_s)_zzarise from the quadratic term in ω̃_s≪ 1 expansion, namely (M_s)_xx ≈ω̃_s^2 (M_s^(1))_xx , (M_s)_xz≈ω̃_s^2 (M_s^(1))_xz , (M_s)_zz ≈ω̃_s^2 (M_s^(1))_zz .If k_ρ_s ≪ k_ρ_s ω̃_s^1/2, then(M_s)_yy≈ω̃_s^2 (M_s^(1))_yy.In the {e_1,e_2,e_3}coordinate frame, this means that the dominant contributions to each componentof M_s are (see appendix <ref>) (M_s)_11 ≈ω̃_s^2 (M_s^(1))_11 = k^2/k_^2 ω_s^2 (M_s^(1))_xx + 2 ω̃_s^2 [k_^2/k^2 + k_/k_ L(k_ ρ̃_s,k_ ρ̃_s )] ,(M_s)_12 ≈ω̃_s (M_s^(0))_12 = k/k_ ω̃_s (M_s^(0))_xy, (M_s)_13 ≈ω̃_s^2 (M_s^(1))_13= -ω̃_s^2 [2 k_ k_/k^2 + L(k_ ρ̃_s,k_ ρ̃_s )] ,(M_s)_22 ≈ω̃_s (M_s^(0))_22 + ω̃_s^2 (M_s^(1))_22 = ω̃_s (M_s^(0))_yy + ω̃_s^2 (M_s^(1))_yy ,(M_s)_23 ≈ω̃_s^2 (M_s^(1))_23 =-k_/k ω̃_s^2 N(k_ ρ̃_s,k_ ρ̃_s ) ,(M_s)_33 ≈ω̃_s^2 (M_s^(1))_33 = 2 k_^2/k^2 ω̃_s^2 ,where the special functions L(x,y) and N(x,y) are given by (<ref>). The quadratic dispersion relation (<ref>) must, therefore, berevised to capture correctly all relevant microinstabilities.§.§ Quartic approximation to dispersion relation of CE shear-driven microinstabilities§.§.§ Derivation of general quartic CES dispersion relationTo assess how the new terms identified in section <ref> change the dispersion relation (<ref>), we now return to the full hot-plasma dispersion relation (<ref>), whichwe write in the form(𝔈_11-k^2 c^2/ω^2 - 𝔈_13^2/𝔈_33)(𝔈_22-k^2 c^2/ω^2+ 𝔈_23^2/𝔈_33)+(𝔈_12-𝔈_13𝔈_23/𝔈_33)^2 = 0.Reminding the reader that, for a two-species plasma,𝔈 = ∑_s 𝔈_s = ω_pe^2/ω^2∑_s μ_s (M_s + P_s ),and also that the electrostatic component of the dielectric tensor isdetermined by the Maxwellian components only (which in turn are equal for electrons and ions when T_i = T_e – see appendix<ref>), viz., 𝔈_33≈ω̃_e^2 𝔈_33^(1) = ω_pe^2/ω^2∑_sμ_s ω̃_s^2 (M_s^(1))_33 = 4 ω_pe^2/ω^2ω̃_e^2 k_^2/k^2,we show in appendix <ref> that, in the limit k_ρ_s ≪ 1, [(M_s)_13]^2/(M_s^(1))_33 ≲(M_s)_11 ,(M_s)_13(M_s)_23/(M_s^(1))_33 ≲ω̃_e (M_s)_12 ≪(M_s)_12 ,[(M_s)_23]^2/(M_s^(1))_33 ≲ω̃_e (M_s)_22 ≪(M_s)_22.On the other hand, the shear-perturbation components P_s satisfy(P_s)_11∼ (P_s)_22≫(P_s)_12.Substituting for M_s and P_s in (<ref>) using (<ref>)and (<ref>b), respectively, and then substituting (<ref>) into(<ref>), we obtain the following quartic dispersion relation: {ω̃_e^2 [(M_e^(1) + M_i^(1))_11 - (M_e^(1) + M_i^(1))_13^2/2 (M_e^(1))_33]+ (P_e^(0) + μ_e^1/2P_i^(0))_11 - k^2 d_e^2} ×{ω̃_e^2 [(M_e^(1) + M_i^(1))_22] + ω̃_e[(M_e^(0) + μ_e^1/2M_i^(0))_22]+ (P_e^(0) + μ_e^1/2P_i^(0))_22 - k^2 d_e^2}+ ω̃_e^2 [(M_e^(0) + μ_e^1/2M_i^(0))_12]^2 = 0.We have assumed k ρ_e ≪ k ρ_i ≪ 1 and so we now have additional quadratic terms for both electrons andions, as explained in section <ref>.We note that the dispersion relation (<ref>) is similar to (<ref>) except for theaddition of two quadratic terms in ω̃_e, and the absence of the linear terms ω̃_e (M_s^(0))_11 and (P_s^(0))_12. This motivates our approach to findingmodes at arbitrary wavevectors: we solve a quartic dispersion relation thatincludes all the terms in (<ref>) and also those linear termswhich were present in (<ref>), but absent in(<ref>). 
Explicitly, this dispersion relation is {-ω̃_e^2 [4/3 W_e + 4/3 W_i + 1/4(L_e + L_i)^2] + iω̃_e( F_e + μ_e^1/2 F_i ) + ϵ_e ( W_e + μ_e^1/2 W_i )- k_^2 d_e^2}×[-ω̃_e^2 (4/3 Y_i + 4/3 Y_e )+ iω̃_e( H_e + μ_e^1/2 H_i ) + ϵ_e ( Y_e + μ_e^1/2 Y_i )- k^2 d_e^2]+[iω̃_e( G_e + μ_e^1/2 G_i ) +ϵ_e ( X_e + μ_e^1/2 X_i ) ]^2 = 0,where L_s ≡ L(k_ρ̃_s,k_ρ̃_s).The special functions W(x,y) and Y(x,y), defined in (<ref>), appear due to their relationship to the matrix (M_s^(1)) (derived in appendix <ref>):W(k_ ρ̃_s,k_ ρ̃_s) =-3/4 (M_s^(1))_xx,Y(k_ ρ̃_s,k_ ρ̃_s) = -3/4 (M_e^(1))_yy ,combined with the identity(M_e^(1) + M_i^(1))_11 - (M_e^(1) + M_i^(1))_13^2/2 (M_e^(1))_33 = - k^2/k_^2[4/3 W_e + 4/3 W_i + 1/4(L_e + L_i)^2 ] ,proven in appendix <ref>. The dispersion relation (<ref>) recovers all the roots of interest becauseit captures approximate values forall of the roots of the dispersion relations (<ref>) and (<ref>)in their respective wavenumber regions of validity.We note that, in situations when there arefewer than four physical modes (e.g., in the k_ρ_e ≳ 1 regime), solving (<ref>) will also return non-physical modes that are the result of the addition ofhigher-order terms in a regime where such terms are illegitimate. However,by construction, such modes can be distinguished by their large magnitude (ω̃_e∼ 1) ascompared to the others. We acknowledge that our approach does not maintain consistent orderings: indeed,depending on the scale of a particular instability, there may be terms retainedthat are, in fact, smaller than otherterms we have neglected when carrying out the ω̃_i≪ 1expansion. However, unlike the quadratic dispersion relation(<ref>), the quartic dispersion relation (<ref>)always captures the leading order terms for arbitrary wavevectors, and so provides reasonable approximations to the complex frequency of all possible CES microinstabilities. §.§.§ Derivation of frequency and growth rate of the CES mirror instabilityTo derive the CES mirror instability's growth rate when it is close to marginality, we consider the dispersion relation (<ref>) under the orderings(<ref>), viz., k_ρ_i∼ k_^2 ρ_i^2 ∼Γ_i ≪ 1 , ω̃_i = μ_e^-1/2ω̃_e∼Γ_i/β_i,where Γ_i = Δβ_i-1,and Δ = Δ_i + Δ_e = 3(ϵ_i+ϵ_e)/2. Using the asymptotic identities (<ref>) for the special functions F_s, G_s, H_s, L_s, and N_s,and (<ref>) for W_s, X_s, and Y_s,(<ref>) becomes, after dropping terms that are asymptotically small under the ordering (<ref>),i√() k_^2 ρ_i^2 ω̃_i + Δ(k_^2 ρ_i^2 - 1/2 k_^2 ρ_i^2- 3/4 k_^4 ρ_i^4)- k^2 ρ_i^2/β_i = 0 , which in turn can be rearranged to give (<ref>) in section <ref> and the subsequent results. We note that, save for the term ∝ G_e, which cancels to leading order with its ion equivalent, and the term ∝ Y_e, which we retain in order to capture correctly the mirror instability's exact stability threshold, the electron terms in (<ref>) are negligibly small under the ordering (<ref>). We also observe that by assuming frequency ordering(<ref>), we have removed the shear Alfvén wave from the dispersion relation. As we demonstrate when characterising the growth rate offirehose-unstable shear Alfvén waves (see section <ref> and appendix <ref>),a different ordering is required to extract this mode (which is, in any case, stable for Δ_i > 0). 
To derive the growth rate of long-wavelength (k_ρ_i ∼ k_⊥ρ_i ≪1) mirror modes away from marginality, when Γ_i ≳ 1, we adopt the alternative ordering (<ref>), which is equivalent to ω̃_i∼1/β_i∼Δ≪ 1 .Again using the identities (<ref>) and (<ref>) to evaludate the special functions, the dispersion relation (<ref>) is then i√() k_^2 ρ_i^2 ω̃_i + Δ(k_^2 ρ_i^2 - 1/2 k_^2 ρ_i^2 )- k^2 ρ_i^2/β_i = 0 , which, after some algebraic manipulation, gives (<ref>) in section <ref> and the subsequent results. Finally, the expression (<ref>) for the growth rate of sub-ion-Larmor scale mirror modes is derived by adopting the orderings (<ref>): k_ρ_i ∼ k_⊥ρ_i ∼ (Δ_i β_i)^1/2≫ 1 , ω̃_i∼Δ_i^1/2/β_i^1/2,and then using the asymptotic identities (<ref>)for evaluating F_i, G_i, H_i, L_i, and N_i, (<ref>) for F_e, G_e, H_e, L_e, and N_e, (<ref>) for W_i, X_i, and Y_i, and (<ref>) forW_e, X_e and Y_e. Once again neglecting small terms under the assumed ordering, thedispersion relation (<ref>) simpifies to a quadratic of theform (<ref>): [-Δ_i/22 k_^2 (k_^2- k_^2)/k^4 + k_^2 ρ_i^2/β_i] (-Δ_i k_^2/k^2+ k^2 ρ_i^2/β_i) -ω̃_i^2 k_^2 ρ_i^2 = 0,from which follow (<ref>) and the subsequent results in <ref>. §.§.§ Derivation of frequency and growth rate of the parallel CES whistler instabilityWe derive the expressions (<ref>) for the real frequency and growth rate of the parallelCES whistler instability by adopting the ordering (<ref>), ω̃_e∼Δ_e ∼1/β_e , k_ρ_e ∼ 1 ,and evaluating F_s, G_s, H_s, L_s, and N_s via(<ref>), andW_s, X_s, and Y_s via (<ref>).The special functions with s = i are simplified further by assuming additionally that k_ρ_i ≫ 1. Under these assumptions and simplifications, the dispersion relation (<ref>) becomes {iω̃_e√()[exp(-1/k_^2 ρ_e^2) + μ_e^1/2] +Δ_e [1+1/k_ρ_eZ(1/k_ρ_e) + μ_e^1/2] - k_^2 ρ_e^2/β_e}^2 +{iω̃_eZ(1/k_ρ_e)-Δ_e/k_ρ_e[√()exp(-1/k_^2 ρ_e^2) + μ_e ] }^2 = 0,where we have substituted ρ̃_e = - ρ_e, and the only ion terms that we retain – the terms proportional to μ_e^1/2 or μ_e –are those that we find to affect the dispersion relation qualitatively (as explained in the main text, these terms are formally small under the assumed ordering, but cannot be neglectedin certain subsidiary limits, e.g. k_ρ_e ≪ 1, which we will subsequently wish to explore).(<ref>) can then be factorised to give two complex roots, the real and imaginary parts of which become (<ref>a) and (<ref>b), respectively. §.§.§ Derivation of frequency and growth rate of the CES transverse instabilityTo obtain the growth rate (<ref>) of the two CES transverse modes,we take directly the unmagnetised limit of the full CES dispersion relation(<ref>) under the orderingsk_ρ_e ∼ k_ρ_e ∼(Δ_e β_e)^1/2≫ 1, ω̃_e∼Δ_e ≪ 1,and then employ asymptotic identities (<ref>) for F_s, G_s, H_s, L_s, and N_s, and (<ref>) for W_s, X_s, and Y_s. We then obtain a dispersion relation similar to(<ref>), but with two separable roots: [iω̃_e√()k_^3/k^3 +Δ_e k_^2 (k_^2- k_^2)/k^4 - k_^2 ρ_e^2/β_e] (iω̃_e√()k_/k +Δ_e k_^2/k^2 - k^2 ρ_e^2/β_e) = 0.When rearranged, the first bracket gives expression (<ref>a),and the second bracket gives (<ref>b). §.§.§ Derivation of frequency and growth rate of the CES electron mirror instabilityWhen its marginality parameter Γ_e = Δ_e β_e -1 is small, the growth rate (<ref>) (and zero real frequency) of the CES electron mirror instability's can be derived from the dispersion relation (<ref>) by adopting the ordering (<ref>), viz., k_⊥^2 ρ_e^2 ∼k_ρ_e ∼ω̃_eβ_e ∼Γ_e ≪ 1 ,and assuming that Γ_e ≫μ_e^1/2. 
This latter inequality implies that 1 ≪ k_ρ_i ≪ k_⊥ρ_i, sowe use the asymptotic identities (<ref>) to simplify F_i, G_i, H_i, L_i, and N_i, (<ref>) to simplify W_i, X_i, and Y_i, (<ref>) for F_e, G_e, H_e, L_e, and N_e, and (<ref>) forW_e, X_e and Y_e. Collecting terms, using the identity Δ_e = (1+Γ_e)/β_e, and keeping only leading-order ones, the dispersion relation simplifies to 3/2 β_e k_^2 ρ_e^2 ( -Γ_e/β_e k_^2 ρ_e^2 + 3/2 β_e k_^2 ρ_e^2 + 3/4 β_e k_^4 ρ_e^4 + i√() k_^2 ρ_e^2 ω̃_e) -ω̃_e^2 k_^2 ρ_e^2 = 0.Because the discriminant of the quadratic (<ref>) is negative, it follows that its solution satisfies ω = iγ, with γ being given by (<ref>). To derive the expression (<ref>) for the complex frequency of long-wavelength electron mirror modes, we adopt the ordering (<ref>), ω̃_e∼k ρ_e/β_e∼Δ_e k ρ_e,and then consider the subsidiary limit k_ρ_e ∼ k_⊥ρ_e ∼μ_e^1/4≪ 1 of thedispersion relation (<ref>). Using the asymptotic identities (<ref>)for F_i, G_i, H_i, L_i, and N_i, (<ref>) for F_e, G_e, H_e, L_e, and N_e, (<ref>) for W_i, X_i, and Y_i, and (<ref>) forW_e, X_e and Y_e, we find that {Δ_e/2[ k_^2 ρ_e^2 - μ_e^1/22 k_^2 (k_^2- k_^2)/k^4]+ k_^2 ρ_e^2/β_e} ×[ Δ_e/2( k_^2 ρ_e^2 - 2 k_^2 ρ_e^2- μ_e^1/22 k_^2/k^2)+ k^2 ρ_e^2/β_e] -ω̃_e^2 k_^2 ρ_e^2 = 0, where both the CE ion- and electron-shear terms are kept on account of their equal size under the assumed ordering. Solving for ω gives (<ref>).§.§.§ Derivation of frequency and growth rate of the parallel CES firehose instabilityThe relevant orderings of parameters to adopt in order to derivethe complex frequency (<ref>) of the parallel CES firehose instability is (<ref>), viz.,ω̃_i∼1/β_i^1/2∼ |Δ_i|^1/2∼ k_ρ_i ≪ 1 , with an additional small wavenumber-angle condition k_ρ_i ≪β_i^-3/4 (which we shall justify a posteriori). Under this ordering, the special functions F_s, G_s, H_s, L_s, and N_s can be simplified using(<ref>), andW_s, X_s, and Y_s using (<ref>), andso the dispersion relation (<ref>) reduces to(ω̃_i^2 - Δ_i/2 - 1/β_i)^2 - ω̃_i^2/4 k_^2 ρ_i^2 = 0, where the only non-negligible electron term is the one ∝ω̃_e G_e. Similarly to the CES mirror instability (see appendix <ref>), this term cancels to leading order with its ion equivalent, and the next-order electron term is much smaller than the equivalent ion term. This dispersion relation can be rearranged to give (<ref>). We also note that, in deriving (<ref>) from (<ref>), we have assumed that the linear term ∝ω̃_eμ_e^1/2 H_i is much smaller than the quadratic term ∝ω̃_e^2 Y_i; theirrelative magnitude is given by ω̃_eμ_e^1/2 H_i/ω̃_e^2 Y_i∼k_^2 ρ_i^2/ω̃_i k_^2 ρ_i^2∼β_i^3/2 k_^2 ρ_i^2 . Thus, this assumption (which it is necessary to make in order for there to be both left-handed and right-handed Alfvén modes in high-β plasma) is only justified if the small-angle condition k_ρ_i ≪β_i^-3/4≪ 1 holds true. §.§.§ Derivation of frequency and growth rate of the oblique CES firehose instabilityTo derive the oblique firehose's growth rate (<ref>), we use the ordering (<ref>), viz.,ω̃_i∼1/β_i^1/2∼ |Δ_i|^1/2∼ k_^2 ρ_i^2 ∼ k_^2 ρ_i^2 ≪ 1 .Simplifying the special functions F_s, G_s, H_s, L_s, and N_svia (<ref>), andW_s, X_s, and Y_s via (<ref>),the dispersion relation (<ref>) becomesi√()(ω̃_i^2 - Δ_i/2 - 1/β_i) k_^2 ρ_i^2 ω̃_i- ω̃_i^2/4(k_^2 ρ_i^2- 3/2 k_^2 ρ_i^2 )^2 = 0, where, in contrast to the quasi-parallel firehose, the linear term ∝ω̃_eμ_e^1/2 H_i in (<ref>) is larger than the quadratic term ∝ω̃_e^2 Y_i. 
(<ref>) can be solved to give two roots: ω≈ 0, corresponding to the stable slow mode (whose damping rate is asymptotically small under the assumed ordering), and the expression (<ref>) for the complex frequency of the (sometimes firehose-unstable) shear Alfvén mode. §.§.§ Derivation of frequency and growth rate of the critical-line CES firehose instabilityTo characterise the growth of the critical-line firehose when β_i ≫ 10^6, we set k_⊥ = 2k_/3, andorder ω̃_i∼β_i^-3/5∼k_^6 ρ_i^6 ∼|Δ_i + 2/β_i|^1/2.The dispersion relation (<ref>) transforms similarly to(<ref>) in this case, with two important exceptions: first, theterm in (<ref>) ∝ω̃_e G_e + μ_e^1/2ω̃_e G_i is O(k_^5ρ_i^5) on the critical line, rather than O(k_^3 ρ_i^3); secondly, our choice of ordering requires that we retain O(k_^4ρ_i^4). This givesi√()(ω̃_i^2 - 1/2Δ_i- 1/β_i - 5/8Δ_i k_^2ρ_i^2) ω̃_i- 6889/13824ω̃_i^2 k_^6ρ_i^6 = 0.To obtain the expression (<ref>)for the critical-line firehose's growth rate in the limit β_i ≫ 10^6 that is valid under the ordering (<ref>), we consider the subsidiary limit|Δ_i+2/β_i| ≫β_i^-6/5,in which case (<ref>) becomes i√()(ω̃_i^2 - Δ_i/2 - 1/β_i) ω̃_i- 6889/13824ω̃_i^2 k_^6ρ_i^6 = 0.The expression (<ref>) follows from solving (<ref>)for ω (and once again neglecting the ω≈ 0 solution).The expression (<ref>) for the growth of critical-line firehose modes when β_i ≃ -2/Δ_i ≫ 10^6,can be deduced by considering the opposite subsidiary limit to (<ref>), viz., |Δ_i+2/β_i| ≪β_i^-6/5. In this limit, (<ref>) simplifies toi√()(ω̃_i^2 + 5/4 β_i k_^2 ρ_i^2) ω̃_i- 6889/13824ω̃_i^2 k_^6ρ_i^6 = 0. Noting that the quadratic (<ref>) has a negative discriminant, we deduce that ω = iγ;then solving (<ref>) for γ gives (<ref>). When β_i ≪ 10^6, the appropriate ordering to adopt in order to simplify the dispersionrelation of critical-line is no longer (<ref>), but insteadω̃_i∼1/√(β_i logβ_i)∼|Δ_i + 2/β_i|^1/2 ,k_ρ_i ∼1/√(logβ_i) .Under this ordering, the term ∝μ_e^1/2ω̃_eF_i in (<ref>) is retained, while the term ∝ω̃_e G_e + μ_e^1/2ω̃_e G_iis neglected. This gives[ω̃_i^2 + i√()/k_^2 ρ_i^2exp(-1/k_^2 ρ_i^2)ω̃_i - 1/2Δ_i- 1/β_i - 5/8Δ_i k_^2ρ_i^2] ω̃_i = 0. To obtain the expression (<ref>) for the critical-line firehose instability's growth rate in the case when ordering (<ref>) holds – that is, when Δ_i β_i+2| ∼ 1,we consider the appropriate subsidiary limit of (<ref>): |Δ_i+2/β_i| ≫1/β_i logβ_i. In this case, the last term in the square brackets on the LHS of (<ref>) can be neglected,leaving the only non-trivial roots to satisfyω̃_i^2 + i√()/k_^2 ρ_i^2exp(-1/k_^2 ρ_i^2)ω̃_i - Δ_i/2 - 1/β_i= 0,whence (<ref>) follows immediately. The case of growth when Δ_i ≃ -2/β_i can be recovered from the opposite subsidiary limit, |Δ_i+2/β_i| ≪1/β_i logβ_i. In this case, the dispersion relation of the critical-line firehose modes is ω̃_i^2 + i√()/k_^2 ρ_i^2exp(-1/k_^2 ρ_i^2)ω̃_i + 5/4 β_i k_^2 ρ_i^2 = 0, which, when solved for the growth rate γ = -i ω, gives (<ref>).§.§.§ Derivation of frequency and growth rate of the CES parallel electron firehose instabilityThis derivation is identical to that given in appendix<ref> for the frequency and growth rate of the parallel CES whistlerinstability, and the same expressions (<ref>) are used in section <ref>. 
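Each of the reduced dispersion relations used in this appendix is (at most) a quadratic in the normalised frequency, so the quoted roots can be checked numerically once the special-function coefficients have been evaluated. The following sketch is an illustration only and not part of the original text: the coefficients a, b, c stand for whichever set (e.g., A_S, iB_S, -C_S) is relevant, and the values used here are placeholders supplied for demonstration.

```python
# Illustrative sketch (assumed workflow, not from the paper): solve a reduced
# quadratic dispersion relation  a*x**2 + b*x + c = 0  in x = omega_tilde*beta,
# restore dimensions via omega = Omega * k_par_rho * x / beta, and read off the
# real frequency and growth rate of the fastest-growing root.
import numpy as np

def fastest_root(a, b, c, Omega, beta, k_par_rho):
    roots = np.roots([a, b, c])               # both complex roots in x
    omega = Omega * k_par_rho * roots / beta  # dimensional complex frequencies
    best = omega[np.argmax(omega.imag)]       # root with the largest growth rate
    return best.real, best.imag               # (real frequency, growth rate)

# Placeholder coefficients for demonstration only.
varpi, gamma = fastest_root(a=1.0 + 0.0j, b=0.3j, c=-0.05 + 0.0j,
                            Omega=1.0, beta=100.0, k_par_rho=0.5)
print(varpi, gamma)
```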
§.§.§ Derivation of frequency and growth rate of the CES oblique electron firehose instabilityThe complex frequency (<ref>)of the electron-firehose modes with μ_e^1/2≪ k_ρ_e ≪ k_⊥ρ_e ∼ 1 is derived by applying the orderingω̃_e∼ |Δ_e| ∼1/β_eto (<ref>) and using the asymptotic identities (<ref>)for F_i, G_i, H_i, L_i, and N_i, (<ref>) for F_e, G_e, H_e, L_e, and N_e, (<ref>) for W_i, X_i, and Y_i, and (<ref>) forW_e, X_e and Y_e. We obtain the simplified dispersion relation {-Δ_e k_^2/k_^2[1 - exp(-k_^2 ρ_e^2/2) I_0(k_^2 ρ_e^2/2)]- k_^2 ρ_e^2/β_e}×{(i√()ω̃_e + Δ_e ) k_^2 ρ_e^2 exp(-k_^2 ρ_e^2/2) [I_0(k_^2 ρ_e^2/2) - I_1(k_^2 ρ_e^2/2)] - k^2 ρ_e^2/β_e} - k_^2 ρ_e^2 ω̃_e^2 exp(-k_^2 ρ_e^2) [I_0(k_^2 ρ_e^2/2) - I_1(k_^2 ρ_e^2/2)]^2 = 0.Introducing the special functions ℱ(k_⊥ρ_e) and ℋ(k_⊥ρ_e) given by(<ref>), and then rearranging (<ref>),leads to (<ref>).§.§.§ Derivation of frequency and growth rate of the CES EST instabilityTo derive the expression (<ref>) for the growth rate of the EST instability in the limits μ_e^1/2≪ k_ρ_e ≪1 ≪ k_ρ_e ≪β_e^1/7, and Δ_e β_e ≫ 1, we apply the orderings(<ref>), viz., k_ρ_e ∼ (Δ_e β_e)^1/2 , ω̃_e∼Δ_e^5/2β_e^3/2 ,k_ρ_e ∼1/√(log|Δ_e| β_e)≪ 1.to (<ref>). We then use the asymptotic identities (<ref>)for F_i, G_i, H_i, L_i, and N_i, (<ref>) for F_e, G_e, H_e, L_e, and N_e, (<ref>) for W_i, X_i, and Y_i, and (<ref>) forW_e, X_e and Y_e to give iω̃_e/k_ρ_e {iω̃_e/k_^3 ρ_e^3[4 exp(-1/k_^2 ρ_e^2) +√()μ_e^1/2 k_^3 ρ_e^3]- Δ_ek_^2 ρ_e^2/k_^2 ρ_e^2 - k_^2 ρ_e^2/β_e} - k_^2 ρ_e^2 ω̃_e^2/ k_^6 ρ_i^6= 0,where the only ion contribution that is not always small, and thus cannot be neglected, is the term proportional to μ_e^1/2. Solving for the frequency gives ω≈ 0 – corresponding to a damped mode whose frequency is asymptotically small under the assumed ordering (<ref>) – and the EST mode, whosegrowth rate is given by (<ref>).§.§.§ Derivation of frequency and growth rate of the CES whisper instabilityIn the limits μ_e^1/2≪ k_ρ_e ≪1 ≫ k_ρ_e and Δ_e β_e ≫ 1 under the orderings ω̃_e∼1/β_e^2/7∼1/k_^2 ρ_e^2∼1/Δ_e β_e ,k_ρ_e ∼1/√(log|Δ_e| β_e)≪ 1,the dispersion relation (<ref>) becomesiω̃_e/k_ρ_e {k_^2 ρ_e^2/k_^2 ρ_e^24 ω̃_e^2/√() k_ρ_e + i4 ω̃_e/k_^3 ρ_e^3exp(-1/k_^2 ρ_e^2) - Δ_ek_^2 ρ_e^2/k_^2 ρ_e^2 - k_^2 ρ_e^2/β_e} - k_^2 ρ_e^2 ω̃_e^2/ k_^6 ρ_e^6= 0, where we have once again evaluatedF_i, G_i, H_i, L_i, and N_i using (<ref>),F_e, G_e, H_e, L_e, and N_e using (<ref>),W_i, X_i, and Y_i using (<ref>), andW_e, X_e and Y_e using (<ref>), andneglected all terms that are small under the ordering (<ref>).Solving for the non-trivial root of (<ref>) gives the expression (<ref>) for the complex frequency of whisperwaves. §.§.§ Derivation of frequency and growth rate of the CES ordinary-mode instabilityBecause the low-frequency assumption ω̃_e≪ 1 is broken in the regimeof relevance to the CES ordinary-mode instability, the dispersion relation (<ref>) is not valid;to characterise these modes, we must instead return to considering the fullhot-plasma dispersion relation. We choose to categorise the ordinary-mode instability for modes with k_ = 0.In this special case, the plasma dielectric tensor simplifies considerably, and hasthe convenient property that ẑ𝔈 = (ẑ𝔈ẑ) ẑ, if the particle distribution functions have even parity with respect to the parallel velocity v_ <cit.> – a condition satisfied by the CE distribution functions (<ref>). Thus, perturbations whose associated eigenmode satisfies δE = δE_z ẑ decouple from other modes in the plasma. 
The dispersionrelation for such modes follows from (<ref>):𝔈_zz - c^2 k_^2/ω^2= 0.In terms of matrices M_s and P_s defined by (<ref>), this can bewritten∑_s (M_s)_zz+ ∑_s (P_s)_zz - k_^2 d_e^2 = 0 . For k_ = 0, the matrix components (M_s)_zz and (P_s)_zzare given by [see (<ref>i) and (<ref>i)] (M_s)_zz = - ∑_n = -∞^∞ ω/ω-n Ω̃_s exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) , (P_s)_zz = - 3 ϵ_s/2 ∑_n = -∞^∞exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2) = -Δ_s .Therefore, the dispersion relation (<ref>) becomesk_^2 d_e^2 = -∑_s m_e/m_s[Δ_s + ∑_n = -∞^∞ω/ω-n Ω̃_sexp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2)]= -∑_s m_e/m_s[Δ_s + ∑_n = 1^∞2 ω^2/ω^2-n^2 Ω̃_s^2exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2)].Since the left-hand side of (<ref>) is real, and the imaginarypart of the right-hand side is non-zero if and only if the complex frequency ω hasnon-zero real and imaginary parts, we conclude that all solutions must be either purely propagating, or purely growing modes. Looking for purely growing roots, we substitute ω = iγinto (<ref>), and deduce that∑_s m_e/m_s[∑_n = 1^∞2 γ^2/γ^2+n^2 Ω̃_s^2exp(-k_^2 ρ̃_s^2/2) I_n(k_^2 ρ̃_s^2/2)] = -k_^2 d_e^2 - ∑_s m_e/m_s[ Δ_s + exp(-k_^2 ρ̃_s^2/2) I_0(k_^2 ρ̃_s^2/2) ] .Neglecting the ion contributions (which are smaller than the electron ones by a (m_e/m_i)^1/2 factor) and considering Δ_e < 0, we arrive at(<ref>). jpp | http://arxiv.org/abs/2310.17754v1 | {
"authors": [
"Archie F. A. Bott",
"Steven C. Cowley",
"Alexander A. Schekochihin"
],
"categories": [
"physics.plasm-ph",
"astro-ph.HE"
],
"primary_category": "physics.plasm-ph",
"published": "20231026194801",
"title": "Kinetic stability of Chapman-Enskog plasmas"
} |
University of Sydney, Sydney, Australia

We propose a novel resident identification framework to identify residents in a multi-occupant smart environment. The proposed framework employs a feature extraction model based on the concepts of positional encoding. The feature extraction model considers the locations of homes as a graph. We design a novel algorithm to build such graphs from layout maps of smart environments. The Node2Vec algorithm is used to transform the graph into high-dimensional node embeddings. A Long Short-Term Memory (LSTM) model is introduced to predict the identities of residents using temporal sequences of sensor events with the node embeddings. Extensive experiments show that our proposed scheme effectively identifies residents in a multi-occupant environment. Evaluation results on two real-world datasets demonstrate that our proposed approach achieves 94.5% and 87.9% accuracy, respectively.

Positional Encoding-based Resident Identification in Multi-resident Smart Homes
Athman Bouguettaya
===============================================================================

§ INTRODUCTION
Internet of Things (IoT) refers to physical objects equipped with sensors, software, and computing power that communicate with other systems and devices over the Internet <cit.>. IoT is emerging due to the rapid advancement of underlying technologies such as Radio Frequency Identification (RFID), Near Field Communication (NFC), wired sensor networks, and wireless sensor networks <cit.>. IoT technologies have been the driving force behind prominent applications such as smart campuses, smart offices, smart cities, intelligent transport systems, and smart grids <cit.>. Smart home is another cutting-edge application of IoT. A smart home is any regular home fitted with various IoT devices <cit.>. These IoT devices are attached to everyday “things" to monitor usage patterns. For example, a sensor (i.e., an IoT device) attached to a cup may monitor a resident's tea cup usage patterns. The IoT paradigm brings enormous opportunities to smart homes to make residents' home life more convenient and efficient <cit.>.

The advent of intelligent technologies such as artificial intelligence, predictive analytics, and machine learning may enable smart services in the home environment by automating IoT devices <cit.>. Such smart home services may help elderly and disabled people live with less reliance on others for activities of daily living <cit.>. Residents can remotely control these devices and customize IoT-based applications via various tools and platforms such as Samsung SmartThings <cit.>. In addition, IoT-based applications can be developed employing the trigger-action paradigm (e.g., IFTTT, https://ifttt.com/home). An example of such a rule is, “If the TV is turned on, turn off the light". However, current services require residents to set these rules manually. Furthermore, residents' preferences typically change and vary over time <cit.>. Consequently, adjustments are required to tune these rules. These cumbersome tasks, such as frequent adjustments, may cause residents to lose confidence in smart home systems and stop using them <cit.>. In this regard, a smart home system should have the ability to learn inhabitants' preferences without their intervention and adjust appliances' settings accordingly <cit.>.
This may reduce the human labor and effort in producing services, thus, improving residents' quality of home life.A prerequisite to ensure occupants' convenience is to identify residents first and then provide seamless and personalized services to them. Therefore, this paper focuses on resident identification in multi-resident smart homes. Resident identification is referred to as identifying occupants in smart homes. The resident identification process uses residents' historical activities to determine which resident is responsible for triggering the sensor events. The sensor events are usually collected by sensors installed at different home locations. Their activities are usually sorted by time (i.e., temporal sequence of events). Resident identification in the home environment, however, is a difficult task due to the following challenges: * The first challenge is the interaction effect in the multi-occupant home. Decisions made by one resident may affect the decisions of another. For example, in a confined area, a resident's movement may block the movement of the other. This interaction is difficult to predict <cit.>. Meanwhile, when multiple occupants stay close to each other, interpreting the sensor events is complicated. In this scenario, a single sensor may detect multiple residents' activities in a short period of time. It is hard to distinguish which resident is the direct cause of each sensor event detected that sensor. * The second challenge is the lack of positional information on the temporal sequence of events. The movement patterns of residents are often both temporal and spatial. It is easy to represent activity events either in a temporal sequence or spatial sequence separately. However, it is difficult to integrate them together. For a sequence only containing spatial sequence, the model might not be able to predict residents' activity based on their temporal habits. In contrast, with only temporal information included, it is hard for models to understand activities having spatial dependencies. Some events are unlikely to happen spatially by multiple residents. For example, the left side of Fig. <ref> represents a room layout map. Four motion sensors (M1, M2, M3, M4) are installed at each corner, and an obstacle is at the center of the room. The right side of the figure is an annotated sensor event log. Considering two residents (R1 and R2) at the M1 position at time T1. At T2, R1 moves from M1 to M2, and at T3, R2 moves from M1 to M4. For a model without positional information knowledge, it is possible to consider the sequence M1 → M2 → M4 are triggered by the same resident. However, it is impossible spatially since residents must either pass M1 or M3 first before moving from M2 to M4 because of the center obstacle. A new way of encoding spatial information about the home to be included in the model might fill this gap.We propose a location-aware framework for resident identification in multi-occupant smart home environments. It utilizes non-intrusive sensors (e.g., motion sensors and door sensors) as location indicators of residents <cit.>. This type of sensor is only triggered when activities happen nearby, making them perfect location indicators of entities. We define entities as anyone that is movable by themselves. Of particular interest in this research is residents/occupants in smart homes. Usually, a layout map with the precise location of obstacles and sensors is obtainable in a confined area <cit.>. 
Topological information could be extracted from those existing data. We consider fixed locations of sensors as a graph. This graph contains sensors' locations as vertices and residents' possible direct pathways between sensors as edges. Residents' movement is interpreted as movement inside the graph. The topological information of such a graph might be beneficial in the smart home environment since it may reflect the home's structure.We consider locations as context-sensitive since they are dependent on each other. A movement to a location would not be possible without passing its neighbor. For instance, Fig. <ref> is an example of the context of positions. In this example, V1, V2, V3, V6, B are neighbors of A; A, V3, V4, V5, V6 are neighbors of B. The neighborhood (i.e., surroundings of nodes A and B) of center nodes represents the context of location. The contextual information reflects the connectivity and distance between locations. In the example, the center locations, A and B, depend on their connected neighbors. In order to have any entity move the center node A, it must pass one of the center node's neighboring nodes.However, the naive representation of graphs is not suitable to be used in machine learning models. There is no simple way to directly integrate the naive graph representation into the temporal sequence of the event log. In order to make such graphs useful, the graph needs to be transformed into machine learning-friendly forms. The algorithm will transform the graph into a position encoding that retains the topological information of the original graph. Positional encoding is a way of encoding information about locations into finite-dimensional embeddings <cit.>. The encoding usually includes information about locations. Node embeddings are a type of positional encoding where a multi-dimensional vector represents each node of the graph. In the proposed framework, the constructed graphs are transformed into node embeddings. Vectors representing the node contain topological and connectivities information of the original graph. Context of locations is included in such positional encoding. The Node2Vec algorithm is applied to help transform graphs into vector form so that a network of locations can be encoded <cit.>. The final output of Node2Vec is node embeddings, where each sensor's location, connectivity, and topology are encoded into a vector representing each sensor. The vector contains local connectivity and topological information about locations. When including such information in the sensor event log, we will be able to consider the sequence of events both temporally and spatially. After encoding, the node embeddings preserve such contextual information in the representation. We then concatenated it into the temporal sequences of sensor events and used it to train a Recurrent Neural Network (RNN) classifier that identifies residents. RNN models are implemented for resident identification tasks. The resident identification process requires a model making sequences of predictions on residents' identity using temporal sequences of activities as input. The process could be considered sequence-to-sequence mapping. The input sequences of events are mapped into an output sequence of residents' identities, where the Seq2Seq model may be helpful. Especially the Long Short-Term Memory (LSTM) is suitable for this task <cit.>. LSTM is one of the proven Seq2Seq models and is widely used in other domains for Seq2Seq problems, such as time series analysis <cit.>. 
LSTM was invented to resolve the major issue that traditional RNNs have, such as the vanishing gradient problem, where the model tends to be unable to handle events with long dependencies. The main contributions of this work are:* A novel feature extraction technique that extracts high-dimensional, topological information from low-dimensional naive positional encoding. It is achieved by transforming layout maps into an accessibility graph and applying it to the Node2Vec algorithm. * A novel resident identification model that employs Long Short-Term Memory (LSTM). The extracted positional encoding is integrated with the temporal sequences of sensor events as input of the LSTM model. To the best of our knowledge, it is the first attempt to integrate positional information with the temporal sequence of sensor events in resident identification. * Design and execution of a smart environment setup and evaluated the proposed approach with real-world datasets. Experimental evaluation exhibits the efficiency and effectiveness of the proposed approach. Section <ref> discusses the existing works that use intrusive sensors and non-intrusive sensors. It also discusses existing positional encoding methods and Seq2Seq models in other domains. In section <ref>, some concepts are formally defined for better explanation and understanding. In section <ref>, the proposed method is explained in detail. After that, in section <ref>, the smart environment setup and data collection process are introduced along with experiments. In section <ref>, the effect of hyper-parameters of the proposed framework is evaluated, and the framework's performance is compared and discussed. Section <ref> concludes the paper with future works. § RELATED WORK We briefly introduce related works on (i) resident identification in multi-occupant smart homes, (ii) positional encoding technique and its possible applications motivating us to construct and utilize a graph to include topological information, and (iii) Seq2Seq models used for converting sequences of events to the identification of the residents inspiring us to use Seq2Seq models to perform resident identification. §.§ Resident Identification Existing works on resident identification usually employ intrusive sensors such as cameras and microphones. Therefore, visual-based solutions are proposed for recognizing residents' identities <cit.>. These solutions process images by computer vision algorithms. Residents' body postures are extracted and used to identify residents <cit.>. Grey-scale cameras are installed at multiple places to track residents' locations. Some resident identification systems exist based on voice recognition using microphones. Residents are identified by their voice biometrics <cit.>. Another category of intrusive sensors is wearable devices such as smartwatches and smart bands. Bluetooth packets emitted by wearable devices are utilized to fingerprint the residents <cit.>. Some other similar approaches use wearable tags for the identification of residents <cit.>. However, wearable devices are generally considered uncomfortable and inconvenient <cit.>. Naive Bayes and Hidden Markov Models (HMM) are popular techniques that utilize temporal sequences of non-intrusive sensor events. These statistical models trace users' activities for the task of resident identification <cit.>. However, the Markov assumption addresses that the current state only depends on the previous state, while it is not true for events in smart environments. 
Events are usually recorded in sequences, and adjacent events are correlated. The current event depends not only on the immediately preceding event but also on multiple earlier events. Thus, models that can consider a longer history of events are more suitable for this task. Pattern mining algorithms have also been developed to extract residents' activity traces from sensor logs <cit.>. The Significant Correlation Pattern Miner (SCPM) algorithm is designed to extract usage patterns and correlations from appliance usage logs <cit.>. SCPM is developed based on a generic pattern mining algorithm, PrefixSpan <cit.>. It extracts usage patterns from temporal sequences of sensor activities that may identify residents in a multi-occupant environment.

A bag of events is a way to categorize events into the activities that residents are currently performing. When all activities performed within a short time interval are considered together, the resulting pattern can be used to distinguish residents' identities during that interval <cit.>. The authors of this work consider events over a period of time as a way to detect residents' identities. This approach does not preserve the order of events; the reported sensor logs are instead represented as vectors. The vectors represent how many events are recorded by each sensor in that period of time. All sensor events are labeled with the residents' activities triggering the event. The authors believe there are patterns in the vector when different residents perform the same tasks. Resident identification is possible by using this pattern. A supervised Bayes Network is employed to identify residents. It requires annotation with the specific activities the residents are currently performing, in addition to the residents' identities, to work.

§.§ Positional Encoding
The positional encoding technique allows machine learning models to focus on specific data subsets instead of the entire dataset. Positional encoding can be used to encode positional or topological information, originally in formats that are not useful to machine learning algorithms, into vector form. The specific subset of data may reveal a strong correlation with the intended task of the model <cit.>. Positional encoding techniques used in other domains (e.g., natural language processing (NLP) and image processing) inspire our approach to the resident identification problem. For example, in the NLP domain, the transformer model uses positional encoding to associate each word with its position in the original sequence of the sentence <cit.>. In the transformer model, the words are processed simultaneously, so the positions of words in sentences are unknown to the model; positional encoding inserts this positional information into the sentence representation. Positional encoding can also be used as a bias to boost the performance of a Generative Adversarial Network (GAN) <cit.>. The GAN is used for image processing, where incorporating positional encoding increases the weight of important areas of images. Multiple positional encoding techniques exist that transform graphs into positional embeddings <cit.>. These methods usually mine structural patterns from graphs and convert vertices, edges, or subgraphs into low-dimensional embeddings. These patterns are then applied to standard machine learning models. In smart environments, positional encoding techniques usually extract locations of interest and structures into embeddings.
The embeddings can be concatenated to input parameters, such as sensor events arranged in temporal sequences, to provide location awareness to the sequence. A similar technique could be used to address the resident identification problem, much like the application of positional encoding mentioned above. Twomey, N. et al. proposed a way of learning topologies from sensor event sequences into graphs <cit.>. However, they did not consider integrating temporal information with the learned positional information. Those ideas, which convert graphs into vector forms, motivate us to use the Node2Vec algorithm to transform the graph into node embeddings that are easily integrated into machine learning models.§.§ Seq2Seq Models The wide usage of Seq2Seq models in other domains motivates us to adopt them for resident identification problems. We use Seq2Seq models as a tool to build the resident identification model. Seq2Seq models usually have an architecture that consists of an encoder and a decoder. The encoder encodes the input into intermediate hidden states, and the decoder decodes the hidden states into desired output form <cit.>. RNN is widely used for this type in such architecture. LSTM and Gated Recurrent Unit (GRU) are two popular RNN cells <cit.>. On the one hand, LSTM can remember long-term patterns. Besides, it also can forget events that occurred too long ago and are not revised by the model. On the other hand, GRU can be considered a simplified version of LSTM since it does not have the cell state output that LSTM has. It has fewer training parameters than LSTM. However, due to the smaller parameter space, its performance on larger datasets or complex problems may be outperformed by LSTM. LSTM is widely used to analyze time series data, such as predicting power fluctuations and stock prices <cit.>, predicting driver's identity <cit.>. In the smart home domain, LSTM is used to extract residents' activities from various data sources <cit.>. The encoder and decoder architecture allows Seq2Seq models to map input data into output vectors. When considering the Seq2Seq models, the sequences of events generated by residents can be mapped into residents' identities by the model. The benefits of Seq2Seq models inspired us to use them in resident identification problems. The resident identification problems shared similarities to other problems where Seq2Seq models could be applied. In summary, the positional encoding technique could also be applied to the resident identification problem. Resident identification models may know more about residents' activity patterns when positional information is included. Meanwhile, intrusive sensors such as cameras and wearable devices are widely used in the existing literature. However, they are sensitive for installation in home environments <cit.>. Besides, it is uncomfortable and inconvenient for residents to have wearable devices always carried on. Furthermore, visual or sound-based methods are usually considered to be computationally expensive. Existing works for resident identification, including Naive Bayes and Hidden Markov Chain, usually only analyze temporal sequences of events. Positional information about the home is ignored. In this work, non-intrusive sensors, such as motion and door sensors, are used. These non-intrusive sensors do not have direct interference with residents. To the best of our knowledge, no previous work has been done on integrating positional information with temporal sequences. 
Therefore, this research emphasizes the importance of positional encoding of smart homes and their integration with the temporal sequence of events. The added topological information about the smart environment increases the performance of the resident identification model. § PRELIMINARIES Positional encoding is defined as a way to encode positional information <cit.>. In this context, the term position refers to the sensors' locations. In a smart environment, sensors are usually used for monitoring entities' movement. Entities are anything that is capable of moving by themselves — for example, residents, pets, or robots. Sensors like door and motion sensors will only get triggered by nearby activities of entities. In this research, we only focus on one type of entity: the resident. Residents' activities are both temporal and spatial<cit.>. In addition to temporal patterns during their activities, activities usually have spatial movement patterns.For example, a resident studies in the study room and decides to go for lunch. The door between the study room and corridor and the door between the corridor and dining hall must be passed by this particular resident. In this example, the system may recognize a resident's activities: stay stationary at study, move to the dining hall, pass through the corridor, and keep stationary at the dining hall. If this sequence of events happens repeatedly, the system may associate future sensor events with similarity being triggered by the same resident in sequences. For instance, it may predict that the resident is more likely to transition from working in the study to having dinner in the dining hall when observing such events. The sequence of locations of a resident's movement can be used for the analysis of the activities of residents. This information might be helpful for resident identification problems. The added context awareness also allows for distinguishing between similar patterns. The following example (Fig. <ref>) consists of 4 POIs (P1, P2, P3, P4) forming a rectangle. There are 2 entities/residents (A and B). In this context, entity A moves clockwise, whereas entity B moves anticlockwise. The model performs recognition considering the residents' movement patterns since the order of the POI passed differs between the two entities. It is difficult for models to distinguish differences in a system with only the knowledge of each event's timestamp. However, with positional encoding, the context of positions could help models distinguish the difference.One of the core contributions of this research is a novel feature extraction technique that extracts high-dimensional, topological information from low-dimensional naive positional encoding, typically a set of coordinates. We consider positional encoding of Point of Interest (POI) in a vector form. GPS-encoded coordinates usually represent a POI in a 3-tuple of latitude, longitude, and altitude, forming a vector of size 1× 3. This type of positional encoding is considered low-dimensional positional embeddings. It may have limited information about connectivities between nodes. In this regard, the proposed technique transforms such low-dimension positional embeddings into high-dimensional embeddings. These high-dimensional embeddings include topological and contextual information about POIs and knowledge about POIs' connectivity. The feature extraction technique first builds an accessibility graph with knowledge about the coordinates of POIs and obstacles. 
Then, the graph is transformed into an APG, which is in turn used to train the Node2Vec node embeddings (Definition <ref>). Finally, the generated node embeddings are concatenated onto the temporal sequences of user activities, adding spatial context to the original data. This allows downstream machine learning models to better understand residents' activity patterns by integrating both temporal and spatial information.
Entity. An entity is any object that is usually able to move by itself in the home environment. Entities are not limited to residents; pets or robots can also be regarded as entities. Stationary objects are not considered entities. For example, furniture and appliances are seldom moved, and their movements can only happen when assisted by entities.
Point of interest (POI). A point of interest is a location that entities can interact with or be present at, or where an event can happen. These are the locations that this work encodes. They are generally accessible by residents; for example, a POI could be a junction of corridors or an appliance such as a refrigerator.
Accessibility graph (AG). An accessibility graph G = {P, E} is a connected, weighted, undirected graph whose vertices P = {p_1, p_2, ⋯, p_n} are POIs. Each edge in E = {e_1, e_2, ⋯, e_m} connects two POIs and represents the only straight possible pathway between them, i.e., a route that passes through no obstacles and no other POIs. Each edge is weighted by the distance between the two POIs it connects.
Accessibility probability graph (APG). An accessibility probability graph is similar to the accessibility graph; however, instead of weighting the edges by the distance between two POIs, it weights them by the probability of an entity moving between the two ends of the edge.
Fig. <ref> is a simple example of an AG. It does not contain any edges cut by obstacles; the figure shows that p_4 and p_5 are not connected due to the obstacle between them. Fig. <ref> is an example of an accessibility probability graph, and typically an APG is obtained from a corresponding AG by Algorithm <ref>. The edges of an APG are weighted by the probability of entities transiting between nodes. For example, P_{e_1} is the probability of an entity transiting from POI p_1 to p_2 along edge e_1, and with e_1, e_2, and e_3 denoting the distance weights of the edges incident to p_1, it can be calculated as
P_{e_1} = \frac{1/(e_1 + 1)}{1/(e_1 + 1) + 1/(e_2 + 1) + 1/(e_3 + 1)} = 0.408
Node embeddings. Node embeddings are a type of positional encoding that encodes the nodes of a graph into vector form. The idea is similar to word vector embeddings, where a vector of numbers trained by some method is used to represent a word: the trained embedding captures information about the word learned from the corpus, typically extracted from the words surrounding it. In other words, the contextual information of a word in a corpus is used to represent that word as a vector. The same idea applies to graphs: a graph's nodes can be represented in vector form by analogous algorithms that use the contextual (neighborhood) information of the graph to represent each node.
§ METHODOLOGY The general architecture of the proposed framework is shown in Fig. <ref>. The framework accepts two data components as input: an annotated temporal sequence of sensor event logs (i.e., the temporal sequence of events) and the coordinates of installed sensors and obstacles in the home environment (i.e., the home layout map). First, the time components of each timestamp, namely day-of-year, day-of-week, and second-of-day, are each transformed into a 1 × 2 vector by Equation (<ref>).
This converts the linear representation of time into a cyclic form. Then, the layout map is transformed into a machine-learning-friendly form: the coordinates of sensors and obstacles in the smart environment are transformed into an AG (Definition <ref>) by Algorithm <ref>, which is further transformed into an APG (Definition <ref>) by Algorithm <ref>. The accessibility probability graph is used as input to the Node2Vec algorithm, which transforms the nodes of the graph into node embeddings. The Node2Vec embeddings carry topological information, including the connectivity between nodes and their relative distances, into the final output form. Finally, a supervised LSTM model is trained, validated, and evaluated on the data generated in the previous steps. The training data is the temporal sequence of sensor events augmented with extra features, namely the node embeddings, which add positional and structural information about the home to the model.
§.§ Transformation of Time
The time transformation is applied as a data pre-processing step. It converts a scalar time component into a 1 × 2 vector. The vector form is cyclic, in contrast to the linear form. The reason for this conversion is that residents' habits are usually cyclic in time: residents' activities tend to correlate with times of day, but the naive linear encoding of time as a timestamp does not preserve this property. Consider the two dates January 1 and December 31. They are far apart when encoded linearly, yet residents' activity patterns on those two days may still be very similar. For instance, if we encode the two dates as seconds since the epoch (i.e., 1970-01-01), their values differ by 31536000 seconds. In this regard, we apply a trigonometric transformation to the time components (i.e., days and seconds) to convert each of them into a vector with two elements. Equation (<ref>) is used for this transformation <cit.>. Fig. <ref> shows the cyclic property of encoded time after the transformation.
t_sin = sin(2π t / t_max), t_cos = cos(2π t / t_max)
In Equation (<ref>), t is an integer representing a time component, counted from 0, and t_max is the maximum possible value of that component. For example, suppose we use hours as a time component, and consider three hours of a day, h_1 = 00:00, h_2 = 23:00, and h_3 = 15:00. Since there are 24 hours per day, we have t_max = 24; h_1 is hour-of-day 0 (t_1 = 0), h_2 is hour-of-day 23 (t_2 = 23), and h_3 is hour-of-day 15 (t_3 = 15). This example is visualized in Fig. <ref>. Cosine distance is used to measure the distance between two encoded time components. For example, the distance between t_1 and t_2 is 0.03407, while the distance between t_1 and t_3 is 1.7071. In contrast, if these three hours of the day are encoded linearly, the distance between h_1 and h_3 is smaller than the distance between h_1 and h_2, even though 23:00 is closer in time to 00:00 than 15:00 is.
We select and encode three components of time with this method: (i) day-of-year, (ii) day-of-week, and (iii) second-of-day. These three components are chosen because residents usually have cyclic living patterns within these time frames. Seasonal effects on residents' habits are reflected by the day-of-year, since across years the same day of the year tends to recur with a similar pattern.
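To make the transformation concrete, the Python sketch below implements Equation (<ref>) for the three chosen time components. It is our own minimal illustration rather than code taken from the paper's repository; the function names (cyclic_encode, encode_timestamp, cosine_distance) are ours, and the 0-based counting with Monday as the first weekday follows the convention stated in the worked example below.

```python
from datetime import datetime
from math import sin, cos, pi

def cyclic_encode(t, t_max):
    """Equation (<ref>): map a time component t to a (sin, cos) pair on the unit circle."""
    angle = 2 * pi * t / t_max
    return (sin(angle), cos(angle))

def encode_timestamp(ts: datetime):
    """Return the three 1x2 vectors for day-of-year, day-of-week, and second-of-day."""
    day_of_year = ts.timetuple().tm_yday        # 236 for 24 August in a 365-day year
    day_of_week = ts.weekday()                  # Monday = 0, so Thursday = 3
    second_of_day = ts.hour * 3600 + ts.minute * 60 + ts.second
    return (cyclic_encode(day_of_year, 365),
            cyclic_encode(day_of_week, 7),
            cyclic_encode(second_of_day, 86400))

def cosine_distance(u, v):
    """The encodings are unit vectors, so cosine distance reduces to 1 - dot product."""
    return 1 - (u[0] * v[0] + u[1] * v[1])

# Hour-of-day example from the text: d(h1, h2) ~ 0.034, whereas d(h1, h3) ~ 1.707
h1, h2, h3 = (cyclic_encode(t, 24) for t in (0, 23, 15))
print(cosine_distance(h1, h2), cosine_distance(h1, h3))
```

Running the last two lines reproduces the hour-of-day distances quoted above, confirming that the cyclic encoding places 23:00 close to 00:00 while the linear encoding does not.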
Similarly, workdays and weekends may come with different living habits, e.g., a resident may wake up later on weekends; the day-of-week component can reflect such patterns. To apply the transformation, consider as an example a timestamp of 15:00:00 on Thursday, 24th August. We first separate it into three time components: (i) 24th August, the 236th day of a 365-day year; (ii) Thursday, the 3rd day of the week; (iii) 15:00:00, the 54000th second of the day. We can then apply Equation (<ref>) to transform these time components. [We count starting from 0 and consider Monday as the first day of a week.] These vectors later replace the raw timestamps, so that the cyclic property of time is available to the model.
T_day: t_sin = sin(2π × 236/365) = -0.796, t_cos = cos(2π × 236/365) = -0.605.
T_weekday: t_sin = sin(2π × 3/7) = 0.434, t_cos = cos(2π × 3/7) = -0.901.
T_second: t_sin = sin(2π × 54000/86400) = -0.707, t_cos = cos(2π × 54000/86400) = -0.707.
§.§ Building Accessibility Graph from Coordinates We design an algorithm (Algorithm <ref>), which we call complete graph prune, to construct AGs from the coordinates of POIs and of obstacles. As its name suggests, the algorithm first builds a fully connected graph over all POIs; the vertices share a common coordinate system and are labeled with their coordinates. Each obstacle is represented as two coordinates forming a line segment in the same coordinate system. A line segment collision test is then applied to every edge against every obstacle, and whenever a collision is detected, the edge connecting the two nodes is removed.
The line segment collision test used in the algorithm runs in constant time and can handle segments given by multi-dimensional coordinates. Two line segments L_1 = (P_1, P_2) and L_2 = (P_3, P_4) intersect if and only if P_1 and P_2 lie on different sides of L_2, and P_3 and P_4 lie on different sides of L_1. Cross products of coordinate differences are used to determine whether the two endpoints of one segment lie on different sides of the other segment. If no three of the four endpoints are collinear, checking this property for the four points is sufficient. Each cross product determines whether a given point lies on the right-hand side of a line segment; for example, v_1 = (C_1 - C_3) × (C_4 - C_3) indicates that C_1 is on the right-hand side of segment L_2 = (C_3, C_4) when v_1 < 0. Since the two endpoints of L_1 must lie on opposite sides of L_2, the signs of v_1 and v_2 (defined analogously for C_2) must differ, so the check v_1 · v_2 < 0 is used. When one of the points does not lie strictly on either side of a segment, it lies on the line containing that segment; in this case we must additionally determine whether the point P lies within the segment L itself, and a separate helper check is used for this purpose. The pseudo-code of the proposed algorithm for building the AG, including the line segment collision test, is given in Algorithm <ref>.
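The Python sketch below illustrates the complete graph prune idea. It is our own rendering of Algorithm <ref>, not the authors' implementation; the helper names (cross, on_segment, segments_intersect, build_accessibility_graph) are hypothetical, and the geometry is restricted to 2-D coordinates for brevity, whereas the algorithm is described for multi-dimensional coordinates.

```python
import itertools

def cross(o, a, b):
    """2-D cross product of the vectors (a - o) and (b - o); the sign gives the side."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def on_segment(p, a, b):
    """True if p lies within segment (a, b), assuming p is collinear with a and b."""
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_intersect(p1, p2, p3, p4):
    """Constant-time collision test between segments (p1, p2) and (p3, p4)."""
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)   # sides of p1, p2 relative to L2
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)   # sides of p3, p4 relative to L1
    if d1 * d2 < 0 and d3 * d4 < 0:                 # endpoints on opposite sides
        return True
    # collinear endpoints: fall back to the "point within segment" check
    if d1 == 0 and on_segment(p1, p3, p4): return True
    if d2 == 0 and on_segment(p2, p3, p4): return True
    if d3 == 0 and on_segment(p3, p1, p2): return True
    if d4 == 0 and on_segment(p4, p1, p2): return True
    return False

def build_accessibility_graph(pois, obstacles):
    """Complete graph prune: pois maps name -> (x, y); obstacles is a list of
    ((x1, y1), (x2, y2)) segments.  Returns surviving edges weighted by distance."""
    edges = {}
    for a, b in itertools.combinations(pois, 2):
        pa, pb = pois[a], pois[b]
        if any(segments_intersect(pa, pb, o1, o2) for o1, o2 in obstacles):
            continue                                # edge crosses an obstacle: prune it
        edges[(a, b)] = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
    return edges
```

The nested loop over all POI pairs and all obstacles mirrors the O(mn^2) running time discussed next.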
Fig. <ref> shows three intermediate states of Algorithm <ref>. Fig. <ref> shows the initial obstacles in red and the motion sensors in green. Fig. <ref> shows the complete graph of edges connecting all motion sensors, with the green edges being the surviving result and the orange edges being those pruned due to collisions with obstacles. The last figure (Fig. <ref>) shows the output of the algorithm: all edges that collide with obstacles have been pruned. (In the figures, some edges are omitted for aesthetic purposes.)
The algorithm uses a naive approach with a nested loop over all edges of the fully connected graph and all obstacles. Given n POIs and m obstacles represented by line segments, and a collision test of constant time complexity O(1), the time complexity of the algorithm is O(n(n-1)m/2) = O(mn^2). The algorithm can be optimized by splitting the graph into small partitions. If the graph is handled separately in f partitions and all partitions contain a similar number of POIs and obstacles, i.e., roughly n/f POIs and m/f obstacles each, the time complexity becomes O(f · (m/f) · (n/f)((n/f) - 1)/2) = O(mn^2/f^2). Therefore, if large groups of sensors have few connections between them, it is advisable to apply the algorithm to each group individually and then manually add the connections between groups, which reduces the running time.
§.§ Accessibility Graph to Accessibility Probability Graph
The accessibility graph cannot be fed directly to the Node2Vec algorithm, because its edges are weighted by distance. In that case, the random walk process of the Node2Vec algorithm would be biased toward distant nodes, which carry higher weights, and close neighbors of a sensor would be visited less often — the opposite of how residents actually move within a home. Therefore, the original accessibility graph needs to be transformed into an APG, whose edges are weighted by the probability of transition between nodes instead of the absolute distance. In this work, we assume that residents' average movement speeds are similar. The assumption is motivated by the scenario in Fig. <ref>: suppose two residents, R_1 at A and R_2 at B, generate two sensor events e_a and e_b at time t_0 reporting their presence at those two POIs. If a new event is recorded at time t_1 at POI V4, then, under the assumption of similar movement speeds, it is more likely that R_2 made that movement. We adopt this assumption because it is not practical to collect residents' real-life transition probabilities between POIs in smart environments; the resident identification model built on the APG constructed under this assumption still shows satisfactory improvements; and the assumption is only used to transform the AG into the APG — it is not used in the subsequent resident identification models.
To construct the APG from the AG, a mathematical function is applied to transform distances into probabilities under the above assumption. The accessibility graph AG(V, E) is represented as an adjacency matrix M_g, which is the input of Algorithm <ref>. We then calculate the transition probabilities between locations, using the inverse function f(M) = 1/(M + 1) to compute the weights. This choice follows from the assumption above: the resident who is closer to the location of the last event is more likely to account for the new event.
Intuitively, this function makes distant locations less likely to be visited and nearer locations more likely to be visited. The inverse function is applied to all matrix entries with a value greater than 0; it is not applied to 0-valued entries, since the probability of transiting between two disconnected locations is 0. After calculating the probability weights, we normalize the matrix into a transition matrix so that each row sums to 1. After normalization, we obtain an APG with all diagonal entries equal to 0, as in the following example (computed from the input matrix M_g shown further below). It is also the transition matrix of a Markov chain. However, this APG requires a further adjustment to fit our model.
[ 0 1 0 0; 0.46 0 0.31 0.23; 0 0.4 0 0.6; 0 0.33 0.67 0 ]
In a home environment, residents may stay at one position for a long period of time, for example, sitting on a sofa to watch TV; during such a period their location is fixed. However, in the current APG the probability of staying stationary at a location (i.e., the self-loop probability) is zero. When the AG is constructed from the layout map, the diagonal of the matrix is set to 0, since there is no distance between a point and itself and no such edge exists. We therefore need to adjust the diagonal weights of the APG. This adjustment is necessary because a resident can be detected and reported by the same sensor over a sequence of consecutive events; without it, the topology encoded by the APG would treat remaining stationary over time as impossible. To perform the adjustment, a diagonal matrix wI/(1 - w) is added to the calculated matrix, which makes the self-loop probability equal to w once the matrix is normalized a final time to account for the added diagonal weight. The following example shows the algorithm's input and output, where M_g is a 4 × 4 AG given as input and M_out is the output APG, using a diagonal weight of w = 0.5.
M_g = [ 0 1 0 0; 1 0 2 3; 0 2 0 1; 0 3 1 0 ]
M_out = [ 0.5 0.5 0 0; 0.23 0.5 0.15 0.12; 0 0.2 0.5 0.3; 0 0.17 0.33 0.5 ]
This algorithm produces the transition probability matrix of a finite-state Markov chain whose states are the POIs of the home environment; its entries give the probability of a resident moving from one POI to another. This transition probability matrix can also be regarded as the adjacency matrix of an APG, which is essential for the next step, training the Node2Vec node embeddings.
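The transformation just described is compact enough to state in a few lines of NumPy. The sketch below is our own rendering of Algorithm <ref> under the stated assumptions (inverse weight f(d) = 1/(d + 1), row normalisation, self-loop weight w); the function name ag_to_apg is ours and the code is not taken from the authors' repository.

```python
import numpy as np

def ag_to_apg(M_g: np.ndarray, w: float = 0.5) -> np.ndarray:
    """AG -> APG: inverse-distance weights, row normalisation, self-loop adjustment."""
    M = np.where(M_g > 0, 1.0 / (M_g + 1.0), 0.0)   # f(d) = 1/(d+1); keep 0 for disconnected pairs
    M = M / M.sum(axis=1, keepdims=True)            # normalise rows into transition probabilities
    M = M + (w / (1.0 - w)) * np.eye(len(M))        # add the diagonal weight wI/(1-w)
    return M / M.sum(axis=1, keepdims=True)         # renormalise so each row sums to 1

# Reproduces the worked example above (w = 0.5)
M_g = np.array([[0, 1, 0, 0],
                [1, 0, 2, 3],
                [0, 2, 0, 1],
                [0, 3, 1, 0]], dtype=float)
print(ag_to_apg(M_g).round(2))
```

Each row of the returned matrix is a probability distribution, so it can be used directly to sample the next node in the random walks described in the following subsection.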
§.§ Transforming APG into Node Embeddings In this step, the Node2Vec algorithm introduced by Grover and Leskovec is used to transform accessibility probability graphs into node embeddings <cit.>. It is a model that wraps the SkipGram model, which is usually used for generating word embeddings <cit.>. The method performs random walks on the given graph to generate sequences of vertices; the random walk process is applied at least once with every vertex as the starting vertex, and the constructed APG is used to perform the walks. For example, consider the simple APG in Equation (<ref>): for a random walk starting from the first node, the distribution of the next node is given by the first row of the APG matrix, so the next node is node 1 with 50% probability or node 2 with 50% probability. A random draw using these weights determines which node is selected, and the walk is extended in this way until the sequence length limit is met. After the random walk process, the algorithm adopts the concept of the Word2Vec algorithm <cit.>: the sequences of vertices are treated as sentences and the vertices as words, and the graph nodes are trained as a vocabulary into word embeddings. The underlying SkipGram model is a Word2Vec variant that uses the center word to learn information about its contextual words. Just as trained word embeddings capture the contextual information of words in a document, the trained vector representation of a node in a graph captures contextual information about its neighborhood. The list of node sequences produced by the random walks is fed to the underlying SkipGram model of Node2Vec, which then outputs node embeddings, one vector per node. Each trained vector encodes contextual and topological information about the node's position and connectivity in the original graph. Such node embeddings are easily added to the temporal sequences of sensor activity data; they are later concatenated as extra features onto the temporal sensor event logs.
§.§ The LSTM Model A Seq2Seq LSTM tagger is used for identifying residents. The model consists of a one-layer bidirectional LSTM, an optional dropout layer, and a linear layer that maps the LSTM state into a tag space of size 1 × n, where n is the number of residents to identify. The dropout layer is helpful when a class imbalance exists. The structure of the LSTM model is presented in Fig. <ref> (a minimal code sketch is also given below). An LSTM unit consists of multiple memory cells that can hold temporal and spatial information about residents' activity patterns. Within the memory cell, there are two special activation functions, the sigmoid (σ) and tanh functions, as defined in Equations (<ref>) and (<ref>).
σ(x) = 1/(1 + e^{-x}), tanh(x) = (e^x - e^{-x})/(e^x + e^{-x})
The LSTM achieves its long short-term memory behavior through a series of gates in its recurrent units. Three gate units are defined for each LSTM unit: the input gate (i_t), forget gate (f_t), and output gate (o_t). These gates use the sigmoid activation so that their outputs lie between 0 and 1 and can suppress values toward 0, as represented in Equation (<ref>).
i_t = σ(w_i[h_{t-1}, x_t] + b_i)
f_t = σ(w_f[h_{t-1}, x_t] + b_f)
o_t = σ(w_o[h_{t-1}, x_t] + b_o)
The following equations calculate the cell output, where ⊙ denotes element-wise multiplication:
c̃_t = tanh(w_c[h_{t-1}, x_t] + b_c)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
h_t = o_t ⊙ tanh(c_t)
The cell states and the hidden states are fed into the following cell at timestamp t + 1. The forget gate in the next cell controls whether that information should be kept: it can either penalize or reward the history by adjusting the weight based on the previous timestamp's cell state, the hidden state, and the current cell's input. In the resident identification problem, if the residents' historical movement and activity patterns are not revisited frequently, the forget gate tends to decrease the weight of those histories until they are negligible; if a pattern repeats frequently, the forget gate maintains a higher weight for it.
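For concreteness, the following PyTorch sketch shows one way to assemble the tagger described at the beginning of this subsection (one bidirectional LSTM layer, an optional dropout layer, and a linear layer into the tag space). It is our own illustrative reconstruction, not the authors' released code; the class name, hidden size, and dropout rate are placeholders, and the feature dimension merely follows the example of three 1 × 2 time vectors plus a 128-dimensional node embedding.

```python
import torch
import torch.nn as nn

class ResidentTagger(nn.Module):
    """Bi-directional LSTM tagger: feature sequence -> per-event resident tag scores."""
    def __init__(self, feature_dim: int, hidden_dim: int, n_residents: int, dropout: float = 0.2):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, num_layers=1,
                            batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(dropout)           # optional; helps with class imbalance
        self.to_tag = nn.Linear(2 * hidden_dim, n_residents)

    def forward(self, x):                            # x: (batch, chunk_len, feature_dim)
        out, _ = self.lstm(x)                        # (batch, chunk_len, 2 * hidden_dim)
        return self.to_tag(self.dropout(out))        # (batch, chunk_len, n_residents)

# Example: 6 time-vector values + a 128-dimensional node embedding, one chunk of 1000 events
model = ResidentTagger(feature_dim=6 + 128, hidden_dim=64, n_residents=2)
scores = model(torch.randn(1, 1000, 134))
```

During training, the per-event scores would typically be compared against the annotated resident labels with a cross-entropy loss.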
Thus, patterns that repeat frequently are preserved by the LSTM model, in contrast to Markov's assumptions that HMM models are based on that the current states only depend on the previous state. The LSTM's hidden state contains not only the state information about the previous timestamp but also all historical states. This enables the model to learn a longer sequence of historical activities by the LSTM model. It also has more model parameters that could be trained compared to statistical models; thus, more information could be learned. Because of this property, the LSTM model is selected to perform the resident identification task. The bi-directional LSTM is chosen since it can consider more contextual information about the sequence of events. Both forward and backward contexts will be considered by the LSTM model. The bi-directional LSTM allows the model to consider the sequences in both directions. The added contextual information will provide a better performance than the unidirectional LSTM. The existence of backward relationships in the model could also allow it to make corrections when future events are added to the sequences. The future events included in the sequences give the model have better understanding and prediction of residents' spatial and temporal movement.The data used to train and evaluate the LSTM model is the concatenation of the trained node embeddings and transformed time vectors in previous steps. Originally, each sensor event log record consisted of a sensor ID of the sensor triggered and the timestamp of this event recorded. After the data pre-processing, the timestamp is replaced by the time vector, and the node embedding vector of this sensor is concatenated to the end of the feature vector. Later, those features are fed into the LSTM model for training. The dataset used for this resident identification framework is a variable-length contiguous time series dataset. It is too long as a single input for the LSTM model. Meanwhile, LSTM models only handle fixed-length sequences as input. Therefore, the data needs to be split into fixed-sized chunks first. Intuitively, a chunk size that splits the long sequences into shorter ones approximates the average number of events per day may be a suitable choice. While the optimal number may differ from this strategy, it is evaluated and discussed in later sections.§ EXPERIMENT & DATA We have used two real-world datasets to evaluate the proposed method. One publicly available dataset, known as the TWOR dataset, is collected from the Center for Advanced Studies in Adaptive Systems (CASAS) project <cit.>; the other dataset is collected from our smart office experiment setup. The datasets are separated into two partitions, 25%for testing purposes and 75% for training purposes. It is essential to recognize the limitations of using a two-way split, as it may lead to overfitting the testing set during model selection. To mitigate this risk, we have employed techniques like cross-validation, allowing us to use the entire dataset for training and testing while still providing an unbiased performance estimate.A randomized 10-fold cross-validation is applied. Models are compared with their average test performance of cross-validations. §.§ TWOR Dataset TWOR2009 dataset is collected at a sensor-rich smart home testbed known as Kyoto [Source of TWOR dataset: http://casas.wsu.edu/datasets/twor.2009.ziphttp://casas.wsu.edu/datasets/twor.2009.zip]. 
This testbed is a part of the CASAS project, and data were collected for a two-month period consisting of 138000 records. The testbed held two residents. It is a 2-level apartment with two bedrooms on the second floor, with a kitchenette, storage room, living room, and entrance on the first floor. Each room has multiple motion sensors installed; all doors are connected with door sensors, and all appliances can report their usage status. Each record of the dataset is a 4-tuple, including the timestamp of the event, name of the sensor, reading of the sensors respectably, and the annotation, indicating the identity and type of activities. Table <ref> is a snippet of the dataset that shows the attributes. The residents' activity events are labeled with their identity and activity. §.§ Smart Office Setup & Data Collection The data collected from our smart office setup consists of 9 motion sensors and 3 test subjects. There is one sensor for each subject installed on their respective office cubicle. The rest of the sensors are installed in corridors connecting these cubicles and at the office entrance. The experiment was conducted for a contiguous two weeks only on weekdays, and a total of 10.5 days of sensor events logs were recorded. The data and annotation are arranged similarly to the TWOR dataset. Two thermal infrared cameras were utilized only for data annotations (i.e., creating ground truth). Annotation is done manually by observing the infrared camera recordings. The layout map of the office, the position of motion sensors, and assigned cubicles can be found in Fig. <ref>. The three subjects' cubicles are shown as red circles; the yellow border indicates the boundary of the experiment; green shapes in the layout map illustrate the approximate detection area of each motion sensor; the blue network shows the generated AG. Two subjects (R1, R2) have adjacent cubicles assigned; the other subject (R3) is at the opposite cubicle of R2. There are other residents other than the three subjects in the office. Events caused by them are treated as noise and removed during the annotations process. In this experiment setup, we are using the center of the detection area of each motion sensor as POI for constructing the AG. Due to the confinement of the experiment, we cannot include a sensor to cover the corridor above 4E25 in Fig. <ref>. The constructed AG by the proposed algorithm contains two subgraphs. To enable connectivity between the two sections of the office, an edge is manually added, crossing the barrier between the topmost cubicles on the right (4E25 and 4E26). The motion sensors we used are consumer-graded, off-the-shelf sensors. They were imposed with a detection interval, believed to be an artificial limitation by sensor vendors for longer battery life and ease of automation setups for average home users. Those motion sensors cannot report another event within a certain period of time. For instance, if a motion sensor has a 20-second detection interval. If such a sensor detects a movement for person A at T+00:00, it will report such an event once the movement is detected instantly. However, it will not report any event between T+00:00 to T+00:20 regardless. At T+00:20, the motion sensor will make another attempt to report its status. If movements happened within its range in the previous 20 seconds, the motion sensor would report a successful detection of movement at T+00:20. It only reports not detecting any movement after no movement has happened for the last 20 seconds. Fig. 
<ref> demonstrates the movement timelines and how off-the-shelves motion sensors report activities. There are three scenarios showing the concept of detection interval. E1, E2, E3 are events made by residents within the detection range of motion sensors; the triangles below the timeline show when the motion sensor will report a detection; The dashed line shows the detection interval. The detection interval is considered one of the experiment's limitations. One common scenario is if one resident moves past a sensor, followed by another resident, within 20 seconds, only one detection would be reported. It is also possible that a person moves past a motion sensor and moves past the motion sensor again in the opposite direction during the detection interval. In this case, only one event is reported. Thus, resident identification models may expect the resident at the new location, although the resident has returned to the initial location. We believe our method would perform better without this limitation. A down-sampling and up-sampling process is applied to the dataset before training the model. Testing subjects often stayed stationary, usually being in their own cubicle for a long period each day. In this experiment, stationary events contribute 90% to the recorded data. In this context, the trained model is biased toward stationary events. This imbalance of data made models tend to predict all sensor events belonging to one subject being stationary at their cubicle. The down-sampling process introduced a rate limit for continuous stationary sequences of events for subjects staying at their home sensor. A home sensor is defined as the sensor installed in the subjects' respective cubicles. The down-sampling process attempts to limit the number of continuous sequences of home sensor events. It is defined as, within a time interval T, if a home sensor detects more than one continuous event, only the first event is kept, and all other events made by such home sensors are removed. Occasionally, subjects will wander into the office or go out of the office. The down-sampling process reduces the weights on stationary data and adds more weight to movement data. where positional encoding proposed in this paper could benefit the most. After the down-sampling process, there are 200 sensor events on average per day. An up-sampling process is applied to the training dataset for a larger amount of data used for training. The up-sampling duplicates the data after chunking and train-test split of the data. For this particular dataset we collected, the training dataset is duplicated by 8 to increase the information to be trained per epoch by the LSTM model. With more data included in the training dataset, we can observe increases in performance with fewer epochs. §.§ Research Questions For the two datasets, multiple training parameters are selected for evaluation. This includes the volume of sensor events, with or without down/up-sampling, and multiple hyper-parameters related to the Node2Vec embeddings, which may impact the performance of node embeddings. A baseline model has been set up for reference. For both datasets, two baseline models are included, the one without any positional encoding information included and the one with coordinates as positional encoding. For the TWOR dataset, an extra baseline model, which uses room numbers of sensors as positional encoding, is included. For the dataset we collected, the experiment was split into four blocks. 
The first two blocks use only half of the data available, i.e., five days in our experiment. The other two blocks use the entire available dataset. Within each of the two blocks, one is conducted with down/up-sampling of the events, and the other is without. The objective is to test whether the up/down-sampling process benefits the model and how much it does. The hyper-parameters of the Node2Vec algorithm are tested on the TWOR dataset. The effect of how the volume of data affects the models' performance is evaluated by comparing the model trained by the full dataset and by subsets of data on the TWOR data. Models' performance is also prepared between the TWOR dataset and the dataset collected by our experiment. To address the above problems, we design, evaluate, and discuss experiments and come out with the following research questions:* RQ1: How do the sequence length and the number of random walks of the Node2Vec algorithm influence the model's performance? * RQ2: What is the impact of the dimension size of the Node2Vec algorithm with the model? Does a larger dimension size lead to extra computation time? * RQ3: What is the effect of window size of Node2Vec algorithm in the resident identification problem? what is the optimal value of this parameter? * RQ4: How does the chunk size of the LSTM model affect the overall performance? What is the optimal chunk size? * RQ5: What is the impact on the volume of data used for training? Does a smaller data volume still produce acceptable performance?§ RESULTS & DISCUSSIONS Four parameters of the Node2Vec algorithm may affect the node embeddings' performance on the resident identification approach. They are the dimension size, the sequence length, the number of random walks, and the window size of the Node2Vec algorithm. The dimension size is the vector length of the output node embeddings. Generally, the larger the dimension size is, the more information is included by the embeddings. However, a large dimension size might have a negative impact on the performance of machine learning models. The number of random walks and the length of random walks control the volumes of contextual and topological information to be included. The sample size of the underlying Word2Vec model is controlled by the number of random walks of the Node2Vec algorithm. The random walk of the graph is a process of sampling the graph to generate the required sequence that is required by Node2Vec's underlying Word2Vec algorithm. If there is insufficient sequence length and the number of random walks, it is possible that some nodes are not traversed, and some graph sequence patterns are not learned. The resulting node embeddings are likely to be low-performing. However, using large numbers on these two parameters might be too computationally difficult. The window size of the Word2Vec algorithm controls how the model tries to associate neighbors of words with itself. In the Node2Vec algorithm, it is the parameter controlling how many neighbors should be considered as contextual to positions. The effect of this parameter is discussed in a later sub-section.During the evaluation process, accuracy, precision, recall, and F1 score are selected for comparing the models' performance. Mainly, the accuracy score is used. The epoch selects the best model with the lowest validation loss. All experiment setups are run for at least ten repeats to eliminate the probability of the randomness effect. Statistical tests are employed when necessary. 
The source codes are available in the GitHub repository to evaluate the correctness of the approach and for easy reproducibility. [GitHub repository for the experiment: https://github.com/Song-Zhiyi/Resident-Identification-with-Positional-Encodinghttps://github.com/Song-Zhiyi/Resident-Identification-with-Positional-Encoding].§.§ Effectiveness of the Proposed Model We observed overall increases in accuracy scores compared to baseline models. Fig. <ref> shows the comparison of the model performance of all experiments. For the TWOR dataset, the accuracy score of the first baseline model without positional encoding is 89.0%. By using coordinates as positional encoding, the second baseline model provides 89.7% accuracy. The second baseline model shows a statistically significant improvement over the first baseline model. However, when using the room number of sensors as positional encoding in the third baseline model, the model is not observed with performance improvement over the model without positional information included, nor is it statistically significant. This method does not provide improvement as the feature space may be too small to include enough positional information. The proposed method, with the Node2Vec node embeddings included, the best model using our proposed method could reach 94.5% accuracy on the TWOR dataset. It is statistically significant. Node embeddings with other complexity have performance varying based on the characteristics of 4 hyper-parameters of the Node2Vec algorithm. The detailed effect of those parameters is discussed in a later section.For our dataset, the model performed poorly without any positional information; the baseline model is 45% for data without down/up-sampling and 50% for sampled data. We only observed significant model performance improvement on the sampled dataset for the proposed method. This suggests that the down-sampling process effectively reduces the imbalance between stationary and movement data. The best node embeddings with a random walk size of 200, a sequence length of 200, a dimension size of 128, and a window size of 1 have an accuracy score of 88.4%. However, surprisingly, the model with coordinates as positional information performs exceptionally well on this dataset, reaching an accuracy of 87.9%. This is not within our initial expectations. The model using Node2Vec embeddings only shows slight improvement over the coordinates-based positional encoding. It may be the confinements of the experiment, where the experiment is conducted in a small confined area with off-the-shelf devices. The home structure is simple; thus, a simple positional encoding, like coordinates, could improve the performance of the model. Another reason for this phenomenon is the extra dimensions of the positional encoding of the proposed method compared to the simple one. Considering the topology of our setup is simple, the extra dimensions produced by the proposed method provide a limited improvement to the resident identification model. Coordinates may work better in this scenario because the lower dimension is because of the simple topology and lesser dimensional included. However, we believe this phoneme only applies to very simple typologies. With complex topologies, the proposed method of generating positional encoding will help the resident identification model better understand of the residents' movements. §.§ Complexity of Node Embedding The performance impact of hyper-parameters is tested and evaluated by controlled experiments. 
The model with 700 random walks of the APG, 1000 sequence lengths of the random, 256 as dimension size, and 5 as window size of the Node2Vec model are selected as the baseline model for evaluation of node embeddings' hyper-parameters.§.§.§ Sequence Length and Number of Random Walks (RQ1)Fig. <ref> shows the performance comparison of the two parameters. Both variables seem to control the volume of information about the original graph included in node embeddings. However, a lower limit on sequence length may exist to produce node embeddings with acceptable performance. By observing two node embeddings training setup, with the number of random walks and sequence lengths being 700, 5, and 5, 700 respectively, the embeddings with larger sequence lengths outperformed significantly their counterparts. For another two embeddings with two parameters being 700, 50, and 50, 700, respectively, no significant difference could be observed. By contrasting the two pairs, we conclude a minimal sequence length, at least at the scale of the number of sensors in the AG, is required. Sequence length less than this minimal length may include not enough information about the connectivities of the AG, thus producing poorly performed node embeddings. The influence of these two parameters on models' performance is within expectation. They have a positive correlation with performance and training time. However, it is unlikely for the sensors' location to change after installation, and retraining of the embeddings may not be frequent. The training time for node embeddings is far less than for fitting and evaluation of resident identification models. In this regard, a higher value on these two parameters is suggested to maximize the information extracted from the AG.§.§.§ Dimension Size (RQ2) The dimension size of Node2Vec controls the size of output vectors representing POI in an AG. It controls the volume of information to be included in the generated node embeddings. However, a large dimension size will add too much noise to the resident identification models. For the TWOR dataset, a dimension size of 256 seems to be the best selection, as shown in Fig. <ref>. As for dimension sizes, it is statistically better than 16, 32, 64, and 128. For this dataset, which contains 62 sensors, a dimension size of 256 is about 4 times the number of the nodes in the AG, which has the best performance overall for all other selections on dimension size. Dimension size lower than 128 seems to be too small for the node embeddings to hold enough topological information for the resident identification task. The dimension of 512 seems to have a negative effect on the resident identification model. This may be due to the high dimension size of the input vector to the model.§.§.§ Window Size (RQ3) From Fig. <ref>, we conclude a negative correlation between window size and the performance of node embeddings. Node embeddings with smaller window sizes have better performance. The window size parameter controls how many contextual nodes should be considered relevant to the center node. In Word2Vec embeddings, a larger window size makes the model believe more neighboring words have similarities; in Node2Vec models, more neighboring nodes are considered relative to the center node. This is not ideal for identifying movement patterns. Since the vertices of the AG are connected by possible pathways between POIs, relating non-adjacent POI to center POI would add noise to the node embeddings. 
Thus, we are observing a decreased accuracy score on resident identification. For resident identification tasks, the lower window size leads to better performance. However, considering occasional malfunctioning is common in consumer-graded motion sensors, the residents may pass a few sensors without triggering them. A slightly larger window size could add robustness to counter this issue. For example, using 5 as window size, the Node2Vec embeddings will associate at most 5 sensors away from the central sensor. The system could tolerate at most 5 continuous sensor failures in this case. The window size selection should depend on the home's complexity and quality of sensors. In ideal setups, a window size of 1 is optimal. §.§ Chunk Size of LSTM Model (RQ4) From Fig. <ref>, we can observe a chunk size of 200 has the lowest performance. The tendency of model performance is first increased by chunk size and then decreases, which is within the initial assumption. Statistical testing is performed to distinguish the performance since the accuracy of the two groups is close. Table <ref> is a TukeyHSD pair-wise test table. It shows the mean difference and p-value of hypothesis testing between groups. The statistic test suggests that a chunk size of 1000 performs best over other selections with an accuracy score of 95%. A chunk size of 1000 is approximately 1/3 of average sensor events per day. However, we cannot find any statistical difference between chunk sizes of 500, 1000, and 2000. This may indicate that the fine-tuning of chunk size for datasets depends on the dataset density of sensor events, and the number of overall sensor events may be required to gain the best model performance of the proposed method. In general, chunk size at the scale of a day is recommended. §.§ Volume of the Dataset (RQ5) This research question is designed to study how the volume of data affects the performance of the proposed framework. The proposed method is evaluated with a limited set of data. Since the current approach still relies on the annotation of data, we want to find the minimum amount of annotated data that would produce satisfactory results. Both datasets were evaluated to better understand how the volume of data affects the proposed method. For the TWOR dataset, two setups are tested. One setup splits the original data into two halves, each including one month of data (February, March); another setup splits the data into three 10-day chunks (10d-01, 10d-02, 10d-03). The resident identification models are trained with the same node embeddings. The result is shown in Fig. <ref>. The February and March data are not statistically different from the full data. Among the three 10-day chunked data, 10d-01 and 10d-03 have similar performance, whereas 10d-02 has an abnormally low performance. It may suggest significant differences in daily patterns during the 10-day period. For our dataset, there are two setups of experiments. The result is shown in Fig. <ref>. The first set of experiments uses the full dataset (i.e., Myexp full), and the other uses half of the dataset (i.e., Myexp half-a, Myexp half-b). It suggests the same result as the TWOR2009 dataset, where the experiment with fewer data does not show significantly worse results over full data. This is within the expectation. Residents usually have weekly patterns of activities. Therefore, if a dataset could include at least a week span of data, it should have acceptable performance if the daily pattern between weeks does not have significant differences. 
However, this may not be true for weeks apart (i.e., a week in summer and a week in winter). Overall, with 10 days of annotated data in both experiments, the performances are not significantly different from the full-length models. However, it still requires manual labor to annotate the data, and it may not be feasible in all real-world scenarios. It is one of the limitations of our proposed method, and so are all supervised learning methods. Nevertheless, the proposed framework still has its potential for the resident's identification problems. Since the APGs do not depend on the annotations and are not difficult to construct, they can be utilized in unsupervised models to increase the topological connection between continuous events. Experiments on both datasets yield acceptable performances. Significant performance improvement is observable with the proposed resident identification framework. The best model for the TWOR dataset achieves 94.5% accuracy in predicting residents' identity in a 2-resident home. And for our dataset, the best model has 88.4% accuracy in a 3-resident office. It is also found that a larger sequence size, random walk, and dimension size of the Node2Vec algorithm leads to better performance. But for dimension size, being too large may introduce noise to the model. Thus, a reasonable proportion to the number of POIs is recommended to increase performance and save computational efforts. A window size of 1 is optimal for the resident identification task. However, a larger number adds robustness to the proposed framework, considering the possible malfunctioning of sensors. Multiple reasons could contribute to the gap in performance between the TWOR dataset and our experiment. The TWOR dataset has a denser sensor setup than our experiment. This could reduce blind areas that motion sensors could not cover; each sensor could be responsible for a smaller region, which adds more details to the AG. Their experiments have 50 motion sensors installed compared to 9 motion sensors in our experiment. Since their dataset is believed to be collected by motion sensors not having detection intervals, they are not suffering from problems introduced by detection intervals. A third factor might be the TWOR dataset is only for two residents, whereas ours is for three subjects.§ CONCLUSION Positional encoding is crucial in smart home research where location awareness is required. The model's performance on the resident identification task improved because of the additional encoded topological information and structural information of smart environments. We proposed a novel way of encoding the positions of interest of homes into machine learning-friendly node embeddings. The method uses available information from the smart environment setup, typically a layout map. An algorithm is designed to build an Accessibility Graph and later transform the graph into an Accessibility Probability Graph (APG). The generated APG is used for the Node2Vec algorithm to produce node embeddings that encode POIs into vector form. The LSTM model was used for resident identification on the process data. The proposed methods are evaluated against two datasets: a dataset publicly available and a dataset collected from our own testbed. 
Significant performance improvement is observed in proposed positional encoding over baseline models.To solve the problem that all supervised learning has, the need for labeled data, we are trying to explore techniques like transfer learning, in which we can train generalized models and fine-tune them in individual smart environments. The pre-training of models has become a trend in many areas. The same concept could be used in smart home problems. It is also possible to include some reinforcement learning into the model so that that model could adjust itself based on positive and negative feedback from residents. In the future, the Seq2Seq model could be adjusted from an N-to-N Seq2Seq model into an N-to-1 model. So that real-time prediction of residents' identity would be possible. To predict which residents are responsible for an event captured by an environmental sensor, we can feed sequences of sensor events till the latest event into the LSTM model. The N-to-1 LSTM model will predict the identity of the residents of the latest event by considering all historical temporal and spatial traces of residents provided by the sensor event sequences. This model will take a sequence of N events where the last event is the latest event captured in real-time as input and produce a single output to indicate the resident's identity. The current approach does not consider guests. Since the movement of guests captured by the environmental sensors may be similar to the residents, it may be considered as one of the residents' movement traces. There is no good solution for a current supervised approach. This problem should be addressed in future models, where the habit patterns of residents could be extracted. For those models, the proposed positional encoding would be useful to boost their effectiveness because of the included positional information.The proposed method that builds the node embeddings representing the home's structure may be used in areas other than resident identification. In the future, it is possible to generalize to other problems that require topological awareness of locations and connectivities of locations in homes and the structure of homes (i.e. habit pattern extraction, etc.). For those questions, similar approaches to constructing node embeddings could be applied. During the experiment, we only encoded the location of sensors using this method. While the method is not limited to the location of sensors, all POIs could be encoded in such a way. For other problems that require topological or structural information on locations in homes, the proposed positional encoding method might be beneficial as well. Because of the added information about possible pathways of homes and how location is connected, the models' performance might be increased. ACM-Reference-Format | http://arxiv.org/abs/2310.17836v1 | {
"authors": [
"Zhiyi Song",
"Dipankar Chaki",
"Abdallah Lakhdari",
"Athman Bouguettaya"
],
"categories": [
"cs.LG",
"cs.CR"
],
"primary_category": "cs.LG",
"published": "20231027012941",
"title": "Positional Encoding-based Resident Identification in Multi-resident Smart Homes"
} |
The zero forcing numbers and propagation times of gear graphs and helm graphs Sara Anderton, Rilee Burden, McKenzie Fontenot, Noah Fredrickson, Alexandria Kwon, Sydney Le, Kanno Mizozoe, Erin Raign, August Sangalli, Houston Schuerger, and Andrew Schwartz May 15, 2023 ====================================================================================================================================================================================The zero forcing numbers and propagation times of gear graphs and helm graphs The zero forcing numbers and propagation times of gear graphs and helm graphs Zero forcing is a dynamic coloring process on graphs. Initially, each vertex of a graph is assigned a color of either blue or white, and then a process begins by which blue vertices force white vertices to become blue.The zero forcing number is the cardinality of the smallest set of initially blue vertices which can force the entire graph to become blue, and the propagation time is the minimum number of steps in such a zero forcing process.In this paper we will determine the zero forcing numbers and propagation times of two infinite classes of graphs called gear graphs and helm graphs. § INTRODUCTION Graph theory is classified as a branch of discrete mathematics. It studies mathematical objects called graphs, which use dots (called vertices) to represent locations or elements of a set and edges between them to visualize the relationships between the objects. Due to the adaptability of the structure, graph theory offers tools for modeling and analyzing pairwise relationships between objects in both mathematics and applications outside of mathematics.In 2008, a graph theory concept, zero forcing, was introduced by the AIM Minimum Rank - Special Graphs Work Group <cit.>.Zero forcing is a graph coloring game where the goal is to color all vertices blue using the fewest initial blue vertices on a graph where each vertex is initially colored blue or white. The color change rule allows a blue vertex to force its only white neighbor to become blue, and the game ends when no more white vertices can be colored. This mathematical technique for analyzing graphs is used to study a variety of graph parameters, such as the maximum nullity and minimum rank of graphs. Besides graph theory, it was independently introduced in two fields: in physics as quantum control theory <cit.>, and in the monitoring of power grids as power domination <cit.> (with the role of zero forcing evident in <cit.>).The propagation time of a zero forcing set, which describes the number of steps needed to color all vertices of a graph blue, is one of the indicators to evaluate the performance of zero forcing. 
This concept is relatively new, having been proposed only eleven years ago <cit.>.This paper focuses on two infinite classes of graphs, gear graphs and helm graphs.While the definitions we use for each class of graphs is slightly more general than those standardly used, in both cases the graph class we consider contains the more traditional version of the graph class as a subclass.In section 2 we determine the zero forcing number and propagation time of gear graphs, and in section 3 we determine the zero forcing number and propagation time of helm graphs.However, first we will need a collection of general graph theoretical definitions as well as certain concepts central to the topics of zero forcing and propagation time.A graph, G, consists of a set of vertices V(G) and a set E(G) of pairs of vertices called edges.To distinguish between sets of vertices of size 2 and edges, given two vertices u,v ∈ V(G), the set containing only the vertices u and v will be denoted {u,v} and an edge between u and v will be denoted uv.If uv ∈ E(G), then we say that u and v are adjacent (or neighbors) and that the edge uv is incident to the vertices u and v. Given a vertex v ∈ V(G), the neighborhood of v, denoted N_G(v), is the set of vertices N_G(v)={u:uv ∈ E(G)} and the degree of v, denoted (v), is the number (v)=N_G(v). The minimum degree of G, denoted δ(G), is the smallest degree among all vertices in G.If (v)=1, then v is said to be a pendant vertex.If G and H are graphs such that V(H)⊆ V(G) and E(H) ⊆ E(G), then H is said to be a subgraph of G. If, in addition, for each pair of vertices u,v ∈ V(H) we have that uv ∈ E(H) if and only if uv ∈ E(G), then H is said to be an induced subgraph of G.A path is a sequence of vertices (v_0,v_1,…,v_k) such that for each i with 0 ≤ i ≤ k-1 we have that v_iv_i+1∈ E(G).Similarly, a cycle is a sequence of vertices (v_0,v_1,…,v_k,v_0) such that v_0v_k ∈ E(G) and for each i with 0 ≤ i ≤ k-1 we have that v_iv_i+1∈ E(G).Paths can also be viewed as graphs in and of themselves.The path graph on n vertices, denoted P_n, is the graph with vertex set V(P_n)={v_i}_i=1^n and edge set such that for the distinct pair i,j ∈{1,2,…,n}, v_iv_j ∈ E(P_n) if and only if i-j=1.A path cover 𝒬 of a graph G is a collection of vertex-induced path subgraphs Q_1,Q_2,…,Q_n such that {V(Q_i)}_i=1^n is a partition of V(G). The path cover number of G, denoted (G), is the minimum cardinality of a path cover of G.A path cover 𝒬 of a graph G is said to be minimum if 𝒬=(G). Zero forcing is a dynamic coloring process on graphs.Let G be a graph. Prior to beginning the process, the vertices of G are all colored either blue or white.Once this initial coloring is chosen, the zero forcing process begins.To understand the process, one must first consider the zero forcing color change rule. [0.87cm]0cm Zero forcing color change rule: If u is blue and v is the only white neighbor of u, then u can force v to be colored blue. If a vertex u forces v, then we denote this by u → v. A zero forcing process may include multiple steps, called time-steps, and during each time-step multiple white vertices may become blue.Once a vertex is blue, it will remain blue for the remainder of the zero forcing process.During the process, the color change rule is applied repeatedly until either there are no white vertices remaining or no blue vertex has a unique white neighbor. 
If B is the set of vertices initially chosen to be blue and after a sufficient number of applications of the color change rule every vertex in G is blue, then B is a zero forcing set of G, otherwise B is a failed zero forcing set of G.The zero forcing number of G, denoted (G), is the minimum cardinality of a zero forcing set of G.If B is a zero forcing set of G and B=(G), then we say that B is a minimum zero forcing set.During a zero forcing process, when there is more than one vertex which can force v, say u_1 and u_2, one must choose which vertex does the forcing.Since multiple vertices may become blue during a single time-step it is sometimes convenient to say that a set of vertices U forces another set of vertices W, denoted U → W. Rigorously, this means that for all vertices w ∈ W there is a vertex u ∈ U such that u → w.In this paper we will be discussing multiple types of zero forcing processes.The first and primary type of process is called a propagating family of forces.A propagating family of forces ℱ is an ordered collection of sets of forces {F^(k)}_k=1^K such that at each time-step k every white vertex that can become blue, according to the criteria outlined in the zero forcing color change rule, will become blue.Let B be a zero forcing set of a graph G and ℱ be a propagating family of forces of B on G.Define B^[0]=B, and for each integer k ≥ 1, let B^(k) denote the set of vertices which are forced during time-step k and define B^[k]=B^(k)∪ B^[k-1].The propagation time of B in G, denoted (G,B), is the least k such that B^[k]=V(G).The propagation time of G is given by (G)=min{(G,B):B=(G)}.If B is a minimum zero forcing set of G with (G,B)=(G), then B is an efficient zero forcing set.The second type of process is a relaxed chronology of forces.To define this second type of process, introduced in <cit.>, first define S(G,B) to be the collection of forces u → v such that u → v is a valid force according to the zero forcing color change rule when B is the set of vertices which is currently blue.A relaxed chronology of forces is an ordered collection of sets of forces, denoted ℱ={F^(k)}_k=1^K, such that at each time-step k if E_ℱ^[k-1] denotes the set of vertices which are currently blue, then F^(k)⊆ S(G,E_ℱ^[k-1]) such that for a given white vertex v at most one force u → v ∈ F^(k), and E_ℱ^[K]=V(G).The completion time of a relaxed chronology of forces ℱ={F^(k)}_k=1^K is given by (ℱ)=K.For a zero forcing set B and a relaxed chronology of forces ℱ, the sequence of sets {E_ℱ^[k]}_k=0^K with B=E_ℱ^[0]⊆ E_ℱ^[1]⊆…⊆ E_ℱ^[K-1]⊆ E_ℱ^[K]=V(G), is called the expansion sequence of B induced by ℱ and for each time-step k, E_ℱ^[k] is called the k-th expansion of B induced by ℱ. It is worth noting that if at each time-step F^(k) is chosen to be maximal, then the relaxed chronology of forces ℱ will be a propagating family of forces. 
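For small graphs, the zero forcing number and the propagation time defined above can be computed directly by exhaustive search over vertex subsets. The sketch below is a minimal, self-contained illustration (it re-implements the same color change rule as the previous sketch so that it can run on its own); the helper names and the 4-cycle example are our own choices, and the search is exponential in the number of vertices, so it is only intended for small examples.

```python
from itertools import combinations

def propagation_steps(neighbors, B):
    """Time-steps a propagating family of forces started from B needs to
    color the whole graph blue, or None if B is not a zero forcing set."""
    blue, steps = set(B), 0
    while blue != set(neighbors):
        forced = set()
        for u in blue:
            white = [v for v in neighbors[u] if v not in blue]
            if len(white) == 1:
                forced.add(white[0])
        if not forced:
            return None
        blue |= forced
        steps += 1
    return steps

def zf_number_and_propagation_time(neighbors):
    """Smallest size of a zero forcing set, together with the minimum
    propagation time taken over the minimum zero forcing sets."""
    vertices = list(neighbors)
    for size in range(1, len(vertices) + 1):
        times = [t for B in combinations(vertices, size)
                 if (t := propagation_steps(neighbors, B)) is not None]
        if times:
            return size, min(times)

# Example: the 4-cycle needs two adjacent blue vertices, which then
# force both remaining vertices in a single time-step.
c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(zf_number_and_propagation_time(c4))   # (2, 1)
```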
Since the construction of a relaxed chronology allows for great flexibility in choosing the subset F^(k)⊆ S(G,E_ℱ^[k-1]) at each time-step, it is often more convenient to utilize relaxed chronologies of forces during a proof.However, since the aim of this paper will be to establish the propagation times of different graphs, it is important to note that for each minimum zero forcing set B of G and each relaxed chronology of forces ℱ of B on G, (G) ≤(ℱ) and for at least one choice of B and ℱ, (G)=(ℱ).For a given zero forcing set B and a relaxed chronology of forces ℱ={F^(k)}_k=1^K of B on G, a forcing chain induced by ℱ is a sequence of vertices (v_0,v_1,…, v_N) of G such that v_0 ∈ B, v_N does not force during ℱ, and for each i with 0 ≤ i ≤ N-1, v_i → v_i+1∈ F^(k) for some k with 0 ≤ k ≤ K. The collection of forcing chains induced by ℱ is a chain set.Since B is a zero forcing set of G, each vertex of G is contained in some forcing chain.Furthermore, since each vertex is forced by at most one other vertex and in turn performs at most one force during a relaxed chronology of forces, it follows that a chain set forms a path cover, providing the following result.<cit.> Let G be a graph. Then (G) ≤(G).Vertices which are members of B are said to be initial and vertices which do not perform a force during ℱ are said to be terminal.If a terminal vertex v is initial, then v is said to be passively terminal.The set of vertices which are terminal are called the terminus of ℱ, denoted (ℱ).The next lemma concerning the terminus appeared in <cit.> as a restatement of <cit.> and a generalization of <cit.>, and will be of use later.<cit.> Let G be a graph, B be a zero forcing set of G, and ℱ be a relaxed chronology of forces of B on G.Then (ℱ) is a zero forcing set of G and (G,(ℱ)) ≤(G, B).One of the most natural examples of an infinite class of finite graphs is the wheel graph.The wheel graph on n+1 vertices,denoted W_n+1, is the graph with vertex set V(W_n+1)={v_i}_i=0^n and edge set such that v_0v_i ∈ E(W_n+1) for each i ∈{1,2,…,n}, v_iv_i+1∈ E(W_n_+1) for each i ∈{1,2,…,n-1}, and v_1v_n ∈ E(W_n+1).Other classes of graphs may be built upon the structure of wheel graphs, such as helm graphs and gear graphs, and thus considered modifications of the wheel graph.In the next two sections we will consider the zero forcing numbers and propagation times of gear graphs and helm graphs.For the purposes of this paper, the definitions we use for helm graphs and gear graphs are more general than the traditional definitions.However, in our calculations the zero forcing numbers and propagation times of the more traditional versions of these graph classes will also be addressed.§ GEAR GRAPHSWe first address gear graphs, and have included their definition below: The generalized gear graph, denoted Gr(m,r), is the graph with vertex set V(Gr(m,r))={v_i}_i=0^m(r+1)such that for v_i,v_j ∈ V(Gr(m,r)) distinct, v_iv_j ∈ E(Gr(m,r)) if and only if one of the following is true: * i=0 and (r+1)|j. * i,j ≠0 and i-j=1 * {i,j}={1,m(r+1)}. 
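As a concrete companion to this definition, the sketch below builds the adjacency structure of the generalized gear graph Gr(m,r) directly from the three conditions; the function name and the degree check at the end are our own illustrative additions, not part of the original text.

```python
def gear_graph(m, r):
    """Adjacency sets of the generalized gear graph Gr(m, r): center v_0,
    spoke vertices v_{(r+1)k}, and intermediate vertices, with the outer
    vertices v_1, ..., v_{m(r+1)} forming a cycle."""
    n = m * (r + 1)
    adj = {i: set() for i in range(n + 1)}
    def connect(i, j):
        adj[i].add(j)
        adj[j].add(i)
    for i in range(1, n + 1):
        if i % (r + 1) == 0:       # condition 1: the center is joined to each spoke vertex
            connect(0, i)
        connect(i, i % n + 1)      # conditions 2 and 3: the outer cycle
    return adj

G = gear_graph(m=4, r=2)
degrees = {v: len(N) for v, N in G.items()}
print(degrees[0])                         # 4 = m, the degree of the center vertex
print({degrees[v] for v in G if v != 0})  # {2, 3}: intermediate and spoke vertices
```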
Note that in the special case when r=1, the above definition for the gear graph becomes the traditional gear graph, denoted Gr_n.We would now like to establish the zero forcing number of our more generalized gear graph, denoted (Gr(m,r)).Before preceding it is helpful to note that it was shown in <cit.> that<cit.> Let G be a graph, B be a zero forcing set of G, ℱ be a relaxed chronology of forces of B on G.If v ∈ V(G) such that v→ u ∈ F^(1) for some vertex u ∈ V(G), then at least (v)-1 neighbors of v are in B.Furthermore, (G) ≥δ(G). For the purpose of the following proofs, when discussing these graphs, we will sometimes refer to v_0 as the center vertex, any v_i such that (r+1)|i as a spoke vertex, and any vertex which is neither a center vertex nor a spoke vertex as an intermediate vertex. Even though the intermediate vertices and spoke vertices are only indexed up to m(r+1), for certain arguments it will allow the discussion to remain more direct and streamlined if it is understood that for a vertex v_M, with M > m(r+1), the writers are referencing a vertex v_N, where N = M(mr+m). (Gr(m,r)) = 3. Since δ(Gr(m,r)) =2, (Gr(m,r))≥ 2 by Theorem <ref>.Next, we would like to show ( Gr(m,r) ) ≠ 2. To this end, suppose by way of contradiction that there exists a zero forcing set of Gr(m,r), say B, such that B =2. When finding possible zero forcing sets, it is important to consider the degree of each vertex. In this case, (v_0) = m, (v_(r+1)k) = 3 where k∈{1,2,...,m}, and (v_i) = 2 where (r+1)|̸ i. It is clear that each vertex has degree greater than or equal to 2. We know then, that in order for any forcing to occur with the vertices in B, the two vertices must be adjacent. There are only three possible configurations which satisfy this requirement. Configuration 1: We consider B= {v_0,v_(r+1)k}. Note, (v_0) = m and (v_(r+1)k) = 3. Therefore, since v_0 and v_(r+1)k each have degree of at least three and thus two white neighbors, no forcing can occur and B is a failed zero forcing set. Configuration 2: Up to isomorphism, we consider B = {v_(r+1)k, v_(r+1)k-1} for some k. Note that (v_(r+1)k-1) = 2 and one of its adjacent neighbors, namely v_(r+1)k, is in B. Thus on time-step one, v_(r+1)k-1→ v_(r+1)k-2. After time-step r, however, no forcing can occur as (v_(r+1)k) = (v_(r+1)(k-1)) = 3 and each have two white neighbors. In particular, v_(r+1)k is neighbored by v_(r+1)k+1 and v_0 which are both white, and v_(r+1)(k-1) is neighbored by v_(r+1)(k-1)-1 and v_0 which are both white.Thus B is a failed zero forcing set. Configuration 3: We consider B= {v_i,v_i+1}, where r+1 divides neither i nor i+1. Note that since (v_i)=(v_i+1)=2, on time-step one, v_i→ v_i-1 and v_i+1→ v_i+2.This process can continue until the spoke vertices v_(r+1)k and v_(r+1)(k+1) are blue. However, since (v_(r+1)(k+1))=(v_(r+1)k)=3, and they each only have one blue neighbor no more forcing can occur.Thus B is a failed zero forcing set. Since Gr(m,r) has no possible zero forcing sets of size 2, we have established that (Gr(m,r))≥3. To show that (Gr(m,r)) = 3 we must find a zero forcing set of Gr(m,r) of size 3. We will now construct such a zero forcing set. Let B = {v_0, v_m(r+1), v_1}. We define a relaxed chronology of forces ℱ by requiring v_0 to remain passively terminal and allowing v_m(r+1) and v_1 to initiate a forcing process along the cycle of spoke and intermediate vertices; in particular, on time-step k, v_k → v_k+1 and v_m(r+1)-k+1→ v_m(r+1)-k. 
This process continues until every vertex of Gr(m,r) is blue, and thus we have that(Gr(m,r)) = 3. We now look to the propagation time of gear graphs. * In the case that m is odd and r=1, (Gr(m,r))=m-1.* Otherwise, (Gr(m,r))=⌈ m (r+1)/2⌉ -2. To determine the propagation time of the generalized gear graph, we will consider all possible zero forcing sets of size 3 and show that one of them produces the most efficient propagation time. Due to the degree of all the vertices being at least two, we need at least two vertices in our zero forcing set to be adjacent. There are only three possible ways to have two adjacent vertices on the gear graph: a spoke vertex and an intermediate vertex, two intermediate vertices, and a spoke vertex and acenter vertex. This leaves one vertex left to be chosen for the zero forcing set. Depending on the chosen vertex, either forcing will fail after at most r time-steps after the second spoke vertex has been reached, or up to isomorphism, we will get two families of zero forcing sets: B_1={v_0,v_i,v_i+1} and B_2={v_1,v_i,v_i+1} where i∈ S= {(m-1)(r+1),(m-1)(r+1)+1,...,m(r+1)-1}. Since we only have three elements in our zero forcing set, on any given time-step we can force at most three vertices. Since two of our forcing chains will be passing along the cycle of spoke and intermediate vertices, we can only force three vertices when the center vertex is being forced or when the center vertex is forcing. Thus, this can happen at most twice. Thus, if we can find a relaxed chronology of forces where three vertices are forced twice, and there are no time-steps where only a single vertex is forced, then we will have found our propagation time. When m is odd and r is even we have an even number of vertices in our graph, so due to parity we will only be able to force three vertices once.Since |V(Gr(m,r))|=m(r+1)+1, in either case we have that(Gr(m,r))≥⌈m(r+1)+1-3-6/2⌉+2=⌈m(r+1)/2⌉-2. We will show that this bound is attained by a zero forcing set anytime that either r>1 or m is even.However, in the special case where r=1 and m is odd, this will require proof by exhaustion. m is even. Let our zero forcing set be B={v_1,v_m(r+1),v_m(r+1)-1}. On time-step 1, v_m(r+1)→ v_0, v_1 → v_2, and v_m(r+1)-1→ v_m(r+1)-2. On time-step k, v_k → v_k+1 and v_m(r+1)-k→ v_m(r+1)-k-1. Since we have taken v_m(r+1) in our zero forcing set, and we force an even number of spoke vertices on each time-step, on the final time-step we are left with a single unforced spoke vertex. Because of this, on the final time-step, three vertices are able to force: v_0 → v_m/2(r+1), v_m/2(r+1)-2→ v_m/2(r+1)-1, and v_m/2(r+1)+2→ v_m/2(r+1)+1. Since we have taken three vertices in our zero forcing set, we were able to force three vertices on each of two time-steps, and two vertices on the remaining time-steps, we have that(Gr(m,r))=(Gr(m,r),B)=m(r+1)+1-3-6/2+2=m(r+1)/2-2=⌈m(r+1)/2⌉-2. m is odd, r is evenLet our zero forcing set be B={v_1,v_m(r+1),v_m(r+1)-1}. On time-step 1, v_m(r+1)→ v_0, v_1 → v_2, and v_m(r+1)-1→ v_m(r+1)-2. After the first time-step, there will be an even number of vertices left, so it will not be possible to have another time-step where three vertices are forced without having a time-step where only a single vertex is forced. At this point, all forcing will be done along the cycle of spoke and intermediate vertices with two vertices being forced at each time-step. 
Since we have taken three vertices in our zero forcing set, we were able to force three vertices on one time-step, and two on each remaining time-step, we have that (Gr(m,r))=(Gr(m,r),B)=m(r+1)+1-3-3/2+1=m(r+1)+1/2-2=⌈m(r+1)/2⌉-2.m is odd, r is odd[0.2in]0in r>1Let our zero forcing set be B={v_1,v_2,v_m(r+1)-1}. On time-step one, v_1→ v_m(r+1) and v_2→ v_3. On time-step two, v_3→ v_4, v_m(r+1)→ v_0, and v_m(r+1)-1→ v_m(r+1)-2. On time-step k, v_k+1→ v_k+2 and v_m(r+1)-k+1→ v_m(r+1)-k. On time-step m-1/2(r+1)-2, the second to last spoke vertex, v_m-1/2(r+1), is forced. Thus, on time-step m-1/2(r+1)-1, v_0→ v_m+1/2(r+1), v_m-1/2(r+1)→ v_m-1/2(r+1)+1, and v_m+1/2(r+1)+2→ v_m+1/2(r+1)+1. On each remaining time-step, since r is odd, two vertices are forced. Since we have taken three vertices in our zero forcing set, there are two time-steps on which we force three vertices, and two vertices are forced on every remaining time-step, we have that (Gr(m,r))=(Gr(m,r),B)=m(r+1)+1-3-6/2+2=m(r+1)/2-2=⌈m(r+1)/2⌉-2.[0.2in]0inr=1First note that since we begin the process with an even number of white vertices, this is enough to guarantee that on average at most two vertices are forced during each time-step, and since |V(G(m,r))|=2m+1 it follows that(Gr(m,1))≥2m+1-3/2=m-1.Up to isomorphism, there are only three zero forcing sets of Gr(m,1): B_1={v_0, v_1,v_2m}, B_2={v_1,v_2m,v_2m-1}, and B_3={v_1,v_3,v_2m}. Note that if B_3 is chosen as the zero forcing set, then on time-step one only a single vertex is forced, namely v_2. Next, note that if B_1 is chosen as the zero forcing set, then v_0 is already in our zero forcing set and thus there cannot be two time-steps where three vertices are forced.We will now show that B_2 can force three vertices on the first time-step, one vertex on the final time-step, and two on every remaining time-step, and thus is an efficient zero forcing set. When B_2 is chosen as the zero forcing set, on time-step 1, v_1→ v_2, v_2m→ v_0, and v_2m-1→ v_2m-2. At this point, all forcing will be done along the cycle of spoke and intermediate vertices with two vertices being forced at each time-step, except the final time-step when there is only a single vertex remaining. Since we have taken three vertices in our zero forcing set, we were able to force three vertices on time-step one, one vertex on the final time-step, and two vertices on each remaining time-step we have that (Gr(m,r))=(Gr(m,1),B_2)=2m+1-3-3-1/2+2=m-1.§ HELM GRAPHS We now consider helm graphs, for which a definition is provided below. The helm graph, denoted H(m,s), is the graph with vertex set V(H(m,s))={v_i}_i=0^m ∪{p_i,j}_i=1,^m_j=1^s such that for u_1,u_2 ∈ V(H(m,s)) distinct, u_1u_2 ∈ E(H(m,s)) if and only if one of the following is true: * u_1 = v_0 and u_2 = v_i * u_1=v_i_1 and u_2=v_i_2 and in addition one of the following is also true: * i_1-i_2=1 * {i_1,i_2}={1,m} * u_1=v_i and u_2=p_i,j for some i ∈{1,2,...m} and some j ∈{1,...,s}. 
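Analogously, the generalized helm graph H(m,s) can be assembled directly from the definition just given; in the sketch below the vertex labels and the final counts are our own illustrative choices rather than notation from the paper.

```python
def helm_graph(m, s):
    """Adjacency sets of the generalized helm graph H(m, s): a center 'v0',
    spoke vertices 'v1'..'vm' forming a wheel with the center, and s pendant
    vertices ('p', i, j) attached to each spoke vertex v_i."""
    adj = {"v0": set()}
    def connect(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for i in range(1, m + 1):
        connect("v0", f"v{i}")              # center joined to every spoke vertex
        connect(f"v{i}", f"v{i % m + 1}")   # the cycle of spoke vertices
        for j in range(1, s + 1):
            connect(f"v{i}", ("p", i, j))   # s pendant vertices on each spoke vertex
    return adj

H = helm_graph(m=5, s=1)
print(len(H))                                       # 1 + m + m*s = 11 vertices
print(sum(1 for N in H.values() if len(N) == 1))    # m*s = 5 pendant vertices
```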
For the duration of this paper we will take the convention that sets of vertices of the form {v_f(j)}_j=n^m, where m<n and f is some function on the natural numbers, will be considered empty.As with gear graphs, we will sometimes refer to v_0 as the center vertex, and the remaining vertices which are neither pendant vertices nor the center vertex will be referred to as spoke vertices.Again, even though spoke vertices are only indexed up to m, for certain arguments it will allow the discussion to remain more direct and streamlined if it is understood that for a vertex v_M, with M > m, the writers are referencing a vertex v_N, where N = Mm.Due to the nature of helm graphs, pendant vertices play a key role in zero forcing, so it will be beneficial to have extra terminology concerning them. Let m ≥ 3 be an integer and B be the set of vertices of H_m initially chosen to be blue. Two pendant vertices p_i and p_j, are consecutive if their indices, i and j, are consecutive natural numbers (modulo m). Pendant vertices that are members of B are forcing pendants, and those which are not are terminal pendants.Furthermore, if p_j, p_j+1, and p_j+2 are consecutive pendant vertices, but only p_j+1 is a forcing pendant, then p_j+1 is an isolated forcing pendant.Maximal collections of consecutive forcing pendants are groups of forcing pendants, that is if {p_i}_i=j^k is a set of consecutive forcing pendants, but p_j-1 and p_k+1 are terminal pendants, then {p_i}_i=j^k is a group of forcing pendants.Similarly, maximal collections of consecutive terminal pendants are groups of terminal pendants. Note that in the special case where s=1, the definition we provide for the helm graph becomes the traditional helm graph, denoted H_n. As we will see later, the case where s>1 is simpler to address for both (H(m,s)) and (H(m,s)).For now we establish the results for the case when s=1. It can be confirmed via exhaustion that (H_3)=3, (H_4)=3, (H_3)=2, and (H_4)=3.For m ≥ 4, (H_m) ≥⌈m/2⌉.Let 𝒬 be a path cover of H_m.Note that m is the number of pendant vertices of H_m. Since each path Q ∈𝒬 may contain at most two pendant vertices, we have ⌈m/2⌉≤𝒬.Since 𝒬 was an arbitrary path cover, it follows that ⌈m/2⌉≤(H_m).Finally, by Theorem <ref>, (H_m) ≥(H_m) ≥⌈m/2⌉. Since ⌈m/2⌉≤(H_m), to establish the value of (H_m) it remains to show that for m>4, (H_m) ≤⌈m/2⌉. To prove that (H_m) ≤⌈m/2⌉, we need only to find one set of vertices B such that B= ⌈m/2⌉, and demonstrate a relaxed chronology of forces ℱ={F^(k)}_k=1^K of B on H_m. Furthermore, this will also give us an upper bound for the propagation time of H_m, namely (H_m) ≤ K. Let m ≥ 5, then (H_m)=⌈m/2⌉ and (H_m) ≤ 6if m4 ≡ 0 4if m4 ≡ 1 5if m4 ≡ 2orm4 ≡ 3. Let m ≥ 5. Due to the nature of the helm graph, the propagation times are dependent on the congruence class m4, so we consider the following cases: m4 ≡ 0 Let B = { p_i }_i=1^3 ∪{p_6+4j}_j=0^m-12/4∪{p_7+4j}_j=0^m-12/4∪{p_m-3}. Note since m is even, ⌈m/2⌉ = m/2 andB= 3 + 2( m-12/4+1 ) +1 = m-12/2 + 6 = m-12/2 + 12/2 = m/2. Therefore, we know that if B is a zero forcing set, then it is of minimum size. We will now construct a relaxed chronology of forces ℱ={F^(k)}_k=1^6. On time-step 1, the pendant vertices in B perform forces as follows: {p_i}_i=1^3 →{v_i}_i=1^3, {p_6+4j}_j=0^m-12/4→{v_6+4j}_j=0^m-12/4, {p_7+4j}_j=0^m-12/4→{v_7+4j}_j=0^m-12/4, and p_m-3→ v_m-3. On time-step 2, v_2 → v_0 since v_0 was its only white neighbor (p_2 ∈ B and v_1,v_3 were forced on time-step 1). 
On time-step 3, v_1 → v_m since v_2, p_1, andv_0 have been forced and are the only neighbors of v_1 besides v_m. Similarly, on time-step 3, v_3 → v_4, and continuing in this manner, {v_6+4j}_j=0^m-12/4→{v_5+4j}_j=0^m-12/4 since the only neighbors of {v_6+4j}_j=0^m-12/4 are {p_6+4j}_j=0^m-12/4, {v_7+4j}_j=0^m-12/4, and v_0 all of which are in B or have been forced already. In addition {v_7+4j}_j=0^m-12/4→{v_8+4j}_j=0^m-12/4 since the only neighbors of {v_7+4j}_j=0^m-12/4 are {p_7+4j}_j=0^m-12/4, {v_6+4j}_j=0^m-12/4, and v_0 all of which are in B or have been forced already. At this point, all vertices along the cycle of spoke vertices have been forced except for v_m-2 and v_m-1. On time-step 4, v_m-3→ v_m-2 since its only other neighbors are p_m-3 (which was in B), v_m-4 (forced on time-step 3 by v_m-5), and v_0 (forced on time-step 2). Now all spoke vertices have been forced except for v_m-1 which can be forced on time-step 5 by v_0 (since all but one of v_0's neighbors are blue). Finally, the only white vertices remaining are pendant vertices. Therefore, on time-step 6, their corresponding spoke vertices may force them blue, thus completing the forcing. m4 ≡ 1 Let B = { p_i }_i=1^3 ∪{p_6+4j}_j=0^m-9/4∪{p_7+4j}_j=0^m-9/4. Note since m is odd, ⌈m/2⌉ = m+1/2 and B= 3 + 2( m-9/4+1) = m-9/2 + 5 = m-9/2 + 10/2 = m+1/2. We force in a manner similar to Case 1. On the first time-step { p_i}_i=1^3 →{v_i}_i=1^3, {p_6+4j}_j=0^m-9/4→{v_6+4j}_j=0^m-9/4, {p_7+4j}_j=0^m-9/4→{v_7+4j}_j=0^m-9/4. On time-step 2, v_2 → v_0. On time-step 3, v_1 → v_m, v_3 → v_4, {v_6+4j}_j=0^m-9/4→{v_5+4j}_j=0^m-9/4, and {v_7+4j}_j=0^m-9/4→{v_8+4j}_j=0^m-9/4. At this point, all vertices in the graph have been forced except {p_5+4j}_j=0^m-9/4, {p_8+4j}_j=0^m-9/4, p_4, and p_m. These can all be forced on time-step 4 since they are the only white neighbors of their respective spoke vertices, {v_5+4j}_j=0^m-9/4, {v_8+4j}_j=0^m-9/4, v_4 and v_m. m4 ≡ 2 Let B = { p_i }_i=1^3 ∪{p_6+4j}_j=0^m-10/4∪{p_7+4j}_j=0^m-10/4. Note since m is even, ⌈m/2⌉ = m/2 and B= 3 + 2( m-10/4+1) = m-10/2 + 5 = m-10/2 + 10/2 = m/2. The first three time-steps are again similar to Case 1. After time-step 3, the only white spoke vertex is v_m-1, which can be forced on time-step 4 by v_0. On time-step 5 the only remaining white vertices are pendant vertices which can be forced by their respective spoke vertices. m4 ≡ 3 Let B= { p_i }_i=1^3 ∪{p_6+4j}_j=0^m-11/4∪{p_7+4j}_j=0^m-11/4∪{p_m-1}. Note since m is odd, ⌈m/2⌉ = m+1/2 andB= 3 + 2( m-11/4+1 ) +1 = m-11/2 + 6 = m-11/2 + 12/2 = m+1/2. Again, the first three time-steps are similar. At this point, all vertices along the cycle of spoke vertices have been forced except for v_m-2, which can be forced on time-step 4 by v_0. Finally, the only white vertices remaining are pendant vertices. Therefore, on time-step 5, their corresponding spoke vertices may force them blue, thus completing the forcing. In each case, we have constructed a zero forcing set B and a corresponding relaxed chronology of forces ℱ, thus showing (H_m) ≤⌈m/2⌉ and (H_m) ≤(ℱ) ≤ 6if m4 ≡ 0 4if m4 ≡ 1 5if m4 ≡ 2orm4 ≡ 3. Furthermore, combining this with Lemma <ref>, we have shown (H_m) = ⌈m/2⌉. Our next goal will be to determine lower bounds for (H_m), thus establishing the propagation time.However, we must first provide some preliminary results to be used in said theorem. 
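Before turning to those lower bounds, the explicit sets used in the upper-bound argument above can be checked by direct simulation. The sketch below encodes only the m ≡ 1 (mod 4) construction, B = {p_i}_{i=1}^3 ∪ {p_{6+4j}} ∪ {p_{7+4j}}, rebuilds H_m, and confirms that this set has the claimed size ⌈m/2⌉ and forces the graph in 4 time-steps; the integer vertex encoding and helper names are our own illustrative choices.

```python
def helm(m):
    """H_m (the s = 1 helm graph): center 0, spoke vertices 1..m on a cycle
    and joined to 0, with pendant p_i encoded as the integer m + i."""
    adj = {0: set()}
    def connect(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    for i in range(1, m + 1):
        connect(0, i)
        connect(i, i % m + 1)
        connect(i, m + i)           # pendant vertex p_i
    return adj

def propagation_steps(adj, B):
    blue, steps = set(B), 0
    while blue != set(adj):
        forced = set()
        for u in blue:
            white = [v for v in adj[u] if v not in blue]
            if len(white) == 1:
                forced.add(white[0])
        if not forced:
            return None
        blue |= forced
        steps += 1
    return steps

for m in (9, 13, 17):               # m = 1 (mod 4)
    p = lambda i: m + i             # pendant p_i
    B = {p(1), p(2), p(3)}
    B |= {p(6 + 4 * j) for j in range((m - 9) // 4 + 1)}
    B |= {p(7 + 4 * j) for j in range((m - 9) // 4 + 1)}
    print(m, len(B) == (m + 1) // 2, propagation_steps(helm(m), B))  # m, True, 4
```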
Let B be a minimum zero forcing set of H_m, ℱ be a relaxed chronology of forces of B on H_m, and 𝒞 be the chain set induced by ℱ.If m is even, then each forcing chain in 𝒞 will contain two pendant vertices. If m is odd, then all but one forcing chain in 𝒞 will contain two pendant vertices with the remaining forcing chain containing one pendant vertex. We know this observation to be true since a forcing chain, being a vertex induced path, can have at most two pendant vertices of H_m. If m is even, there will be m/2 chains. Since there are m pendant vertices, each forcing chain must contain two pendant vertices. If m is odd, there will be m+1/2 chains. Since there are m pendant vertices, all but one chain must contain two pendant vertices, the remaining chain will have exactly one pendant vertex. If m is even, then there can not be two distinct sets of three consecutive pendant vertices in a minimum zero forcing set of H_m. However, any minimum zero forcing set must contain a group of 3 forcing pendants. Suppose by way of contradiction that B is a minimum zero forcing set of H_m that contains two sets of three consecutive pendant vertices, {p_m(1,i)}_i=1^3 and {p_m(2,i)}_i=1^3, with m(j,i)+1=m(j,i+1). Every vertex in B must force in order for its forcing chain to contain two pendant vertices. In particular, each such pendant vertex can only force its neighboring spoke vertex, so for each pair (j,i)∈{1,2}×{1,2,3}, p_m(j,i)→ v_m(j,i). Since v_m(1,2) and v_m(2,2) each only have one remaining white neighbor, that being the center vertex, and the center vertex can only be a member of one forcing chain, either the forcing chain beginning at p_m(1,2) or the forcing chain beginning at p_m(2,2) contain a single pendant vertex which contradicts Observation <ref>. Now suppose B is a minimum zero forcing set of H_m but B does not contain three consecutive pendant vertices. After each pendant vertex in B forces its neighboring spoke vertex, each such spoke vertex will have at least one white neighbor on the cycle of spoke vertices. Since each of the spoke vertices are adjacent to the center vertex and the center vertex has not been forced, these vertices have two white neighbors and thus no further forcing can occur.If m is odd, then there cannot be three sets of three consecutive pendant vertices in a minimum zero forcing set of H_m. In addition, if every vertex in a minimum zero forcing set B of H_m is a pendant vertex, then B must contain three consecutive pendant vertices.Suppose by way of contradiction that we have a minimum zero forcing set B that contains three sets of three consecutive pendant vertices, {p_m(1,i)}_i=1^3, {p_m(2,i)}_i=1^3, and {p_m(3,i)}_i=1^3, with m(j,i)+1=m(j,i+1). Due to Observation <ref>, all but one vertex in B has to force, otherwise it can not be the case that all but one of their forcing chains contain two pendant vertices. In particular, each of those pendant vertices can force at most their neighboring spoke vertex, so for each pair (j,i)∈{1,2,3}×{1,2,3} where forcing occurs, p_m(j,i)→ v_m(j,i). Then at most one forcing chain beginning at a pendant vertex p_m(j,i)) will not contain the neighboring vertex v_m(j,i). Due to this, for at least two of the v_m(j,2), every path of white vertices beginning at a neighbor of v_m(j,2) and terminating at a pendant vertex will pass through the center vertex. 
However, the center vertex can only be a member of one forcing chain, leaving a total of two forcing chains each of which only contain a single pendant vertex, thus contradicting Observation <ref>. Suppose now we have a minimum zero forcing set B in which every vertex is a pendant vertex, but B does not contain a set of three consecutive pendant vertices. As in Lemma <ref>, the forcing process cannot go further than the spoke vertices neighboring the members of B, since each of the spoke vertices will have two white neighbors, specifically the center vertex and a spoke vertex.For m ≥ 5(H_m) = 6if m4 ≡ 0 4if m4 ≡ 1 5if m4 ≡ 25if m4 ≡ 3. Let B be an arbitrary efficient zero forcing set of H_m, ℱ={F^(k)}_k=1^(H_m) be a propagating family of forces of B on H_m, and 𝒞 be the chain set induced by ℱ.We will prove this theorem by proving the subsequent four cases below. m4 ≡ 0Since m is even, by Observation <ref>, each forcing chain in 𝒞 must start and end at a pendant vertex. Furthermore, by Lemma <ref>, B must contain a consecutive set of three pendant vertices but cannot contain a consecutive set of four pendant vertices (as it would then contain two distinct consecutive sets of three pendant vertices).Furthermore, since m4 ≡ 0, it follows that B=m/2 is even and so B must contain another group of forcing pendants of odd length. By Lemma <ref>, this must be an isolated forcing pendant, q. During F^(1), each pendant vertex in B forces its only neighbor. During F^(2), only the center vertex can be forced, since every spoke vertex is adjacent to the center vertex.Let u be the spoke vertex adjacent to q, and note that neither of the pendant vertices which are consecutive with q are members of B, so there exist spoke vertices w,w' ∈ V(G) ∖ E_ℱ^[2] such that w,w' ∈ N_G(u).Due to this, neither u nor v_0 can force on the third time-step. Since v_0 and u will need to force in order for their respective forcing chains to end at a pendant vertex, we know that after the third time-step there will be at least two white spoke vertices, say w_u and w_0 (either of which may be equal to w or w') such that u → w_u and v_0 → w_0 during ℱ. Furthermore, during the fourth time-step, u might force w_u, but v_0 cannot yet force because it still has at least two white neighbors, specifically w_u and w_0. During the fifth time-step, v_0 may force w_0, allowing any remaining white pendant vertices to be forced on the sixth time-step, completing ℱ.Thus (H_m)≥6, for m4 ≡ 0. m4 ≡ 1 By Observation <ref>, all but one forcing chain in 𝒞 must contain two pendant vertices with the remaining forcing chain containing one pendant vertex. First suppose B contains a vertex which is not a pendant vertex. By Theorem <ref>, we know that (G,(ℱ)) ≤(G, B). Furthermore by Observation <ref>, we knowthat every vertex in (ℱ) will be a pendant vertex. Due to this, in determining a lower bound on (G), we can assume every vertex in our zero forcing set is a pendant vertex.Now, suppose without loss of generality that every vertex in B is a pendant vertex. During the first time-step, at most each pendant vertex in B forces its only neighbor.During the second time-step, at most the center vertex v_0 will be forced, since every spoke vertex is adjacent to v_0.Now that v_0 is blue, forcing may occur along the cycle of spoke vertices on the third time-step. If the entire cycle of spoke vertices is now blue, then on the fourth time-step, the remaining pendant vertices may be forced, completing ℱ. Thus (H_m) ≥ 4, for m4 ≡ 1. 
m4 ≡ 2 As in Case 1, since m is even, 𝒞 will have m/2 forcing chains, each of which start and end at a pendant vertex. During the first time-step, at most each pendant vertex in B forces its only neighbor. At most, during the second time-step, the center vertex will be forced, since every spoke vertex is adjacent to the center vertex.Now that the center vertex has been forced, forcing may occur along the cycle of spoke vertices on the third time-step. However, since m is even, every forcing chain in 𝒞 must begin and end at a pendant vertex. Thus there must remain a spoke vertex for the center vertex to force which can not happen until the fourth time-step. Now that the entire cycle of spoke vertices is blue, at the fifth time-step the remaining pendant vertices may be forced, completing ℱ. Thus (H_m) ≥ 5, for m4 ≡ 2. m4 ≡ 3 As in Case 2, we can assume that every vertex in B is a pendant vertex. So suppose without loss of generality that B is an efficient zero forcing set such that every vertex in B is a pendant vertex. During the first time-step, at most each pendant vertex in B forces its only neighbor.Since every spoke vertex is adjacent to the center vertex, during the second time-step, the center vertex v_0 will be the only vertex forced.By Lemma <ref>, B must contain at least one set of three consecutive forcing pendants.Let b_0 be the vertex which forces v_0 during ℱ and note that b_0 must be adjacent to the middle vertex of one such set of three consecutive forcing pendants.Now that v_0 has been forced, forcing may occur along the cycle of spoke vertices on the third time-step. However, note that since v_0 cannot force until only a single white spoke vertex remains, v_0 cannot force until at least time-step 4. We will now prove that V(G) ∖ E_ℱ^[3] contains spoke vertices.Note that in this case, m4 ≡ 3 so we have that m = 4k + 3 where k is an integer. Since m is odd, we know that the size of the zero forcing set is m+1/2 which is equivalent to 2k + 2. This shows that the size of the zero forcing set will always be even. In particular, because m - (2k + 2) = 2k + 1, we know that we have 2k + 2 forcing pendants and 2k + 1 terminal pendants.We now consider two cases: [0.2in]0in B contains an isolated forcing pendant q.Since b_0 was used to force v_0 during the second time-step, we are left with 2k + 1 spoke vertices which can potentially be used to force during the third time-step. This means in order for E_ℱ^[3] to contain every spoke vertex, every spoke vertex that is adjacent to a forcing pendant, except b_0, must force during time-step 3.However, the spoke vertex u which is adjacent to the isolated forcing pendant q will have two white neighbors w and w', and thus cannot force until after either w or w' become blue.However, by Observation <ref> only one forcing chain in 𝒞 can contain a single pendant vertex and thus either v_0 or u must force during ℱ. Finally, since the only neighbors of v_0 and u which are still white after the second time-step are spoke vertices, V(G)∖ E_ℱ^[3] contains spoke vertices. B contains no isolated forcing pendant. Note that in order for two groups of forcing pendants to be distinct, there must be a group of terminal pendants between them. Since the groups of pendant vertices are arranged along a cycle, there must be the same number of groups of forcing pendants as there are groups of terminal pendants. 
Because the size of the zero forcing set must be even, we know, by Lemma <ref>, that we need two consecutive sets of three forcing pendants, and thus either a group of forcing pendants of size at least four or two groups of forcing pendants of size three. Since there are an equal number of groups of forcing pendants as there are groups of terminal pendants, there is one less terminal pendant than forcing pendant, and there are no isolated forcing pendants, there must be at least one group of at least three terminal pendants.Let w be the spoke vertex adjacent to the middle pendant vertex of a group of three terminal pendants. Since, prior to the third time-step, v_0 is the only blue neighbor of w and it cannot force on the third time-step, w ∈ V(G)∖ E_ℱ^[3].In both sub-cases, V(G) ∖ E_ℱ^[3] must contain spoke vertices. If every spoke vertex is blue after the fourth time-step, then on the fifth time-step, the remaining pendant vertices may be forced, completing ℱ. Thus (H_m) ≥ 5, for m4 ≡ 3.Finally, by Theorem <ref>,(H_m) = 6if m4 ≡ 04if m4 ≡ 15if m4 ≡ 2orm4 ≡ 3.We now provide the following theorem that identifies the zero forcing number and propagation time for our generalized helm graph for the case where s>1. For s>1, (H(m,s))= m(s-1)+ 1 and (H(m,s)) = 2. First, we would like to establish m(s-1)+ 1 as a lower bound for the zero forcing number. We will do this using two smaller claims. We claim if B is a zero forcing set of H(m,s), then for each i, |{p_i,j}_j=1^s ∩ B |≥ s-1. To show this, suppose by way of contradiction there exists a zero forcing set B of H(m,s) for which there exists i_0 such that |{p_i_0,j}_j=1^s ∩ B |≤ s-2. Let j_1,j_2 ∈{ 1,2, …,s} be distinct such that p_i_0,j_1, p_i_0,j_2∉ B. Since v_i_0 is the only neighbor of p_i_0,j_1 and p_i_0,j_2 it is the only vertex which could force them. It can be seen v_i_0 can do no forcing since p_i_0,j_1, p_i_0,j_2∉ B and are neighbors of v_i_0. Therefore, B is not a zero forcing set of H(m,s), and in particular, any zero forcing set of H(m,s) must be such that for each i, |{p_i,j}_j=1^s ∩ Z |≥ s-1. Our second claim states (H(m,s))≥ m(s-1)+1. Let B be a zero forcing set of H(m,s). By our first claim we can assume that up to isomorphism {p_i,j}_i=1,^m_j=1^s-1⊆ B. Then |B| ≥|{p_i,j}_i=1,^m_j=1^s-1| = m(s-1). Suppose by way of contradiction |B| = m(s-1). We know in this case that without loss of generality B = {p_i,j}_i=1,^m_j=1^s-1. For each i, p_i,1 can force v_i. At this point, each v_i has two white neighbors specifically p_i,s and v_0. Since no other vertex is adjacent to p_i,s or v_0, neither ever get forced. Hence B can not be a zero forcing set. So for any zero forcing set B, |B| > m(s-1). Next, we will show (H(m,s))≤ m(s-1) +1, establishing an upper bound for the zero forcing number (and thus completing our proof of the zero forcing number) while simultaneously establishing an upper bound on the propagation time by keeping track of the forced vertices. To this end, we want to take a specific zero forcing set of size m(s-1)+ 1 and show that it can force the generalized helm graph. Let B = {v_0}∪{p_i,j}_i=1,^m_j=1^s-1. We claim B is a zero forcing set and construct a relaxed chronology of forces ℱ of B on H(m,s). Note for each i, p_i,1→ v_i since v_i is the only neighbor of p_i,1. So E_ℱ^[1] = B ∪{v_i}_i=1^m. For each i, v_i → p_i,s since p_i,s is the only neighbor of v_i not in E_ℱ^[1]. Thus E_ℱ^[2] = E_ℱ^[1]∪{p_i,s}_i=1^m = V(H(m,s)). 
Therefore H(m,s) has been forced with a zero forcing set of size m(s-1)+1 in two time-steps, thus (H(m,s)) ≤ m(s-1)+1 and (H(m,s)) ≤ 2. Since there is only one vertex in B not in {p_i,j}_i=1,^m_j=1^s-1, there is at most one spoke vertex in B, say v_i_0. Then for any p_i,s, with i ≠ i_0, not in B, there is a distance of at least 2 from p_i,s to an element of B. Thus (H(m,s)) ≥ 2. Finally, we have that the zero forcing number of H(m,s) must be m(s-1)+1 and the propagation time must be 2.

§ CONCLUSION

This concludes our study of gear graphs and helm graphs. A natural follow-up question is how combining the two classes of graphs, by starting with a wheel graph and adding both pendant vertices at spoke vertices and intermediate vertices between spoke vertices, might affect the zero forcing numbers and propagation times of the resulting class of graphs. This question forms a basis for research currently being pursued by our team. | http://arxiv.org/abs/2310.18513v1 | {
"authors": [
"Sara Anderton",
"Rilee Burden",
"McKenzie Fontenot",
"Noah Fredrickson",
"Alexandria Kwon",
"Sydney Le",
"Kanno Mizozoe",
"Erin Raign",
"August Sangalli",
"Houston Schuerger",
"Andrew Schwartz"
],
"categories": [
"math.CO",
"05"
],
"primary_category": "math.CO",
"published": "20231027221006",
"title": "The zero forcing numbers and propagation times of gear graphs and helm graphs"
} |
| http://arxiv.org/abs/2310.18061v1 | {
"authors": [
"Hamed Pejhan"
],
"categories": [
"math-ph",
"math.MP"
],
"primary_category": "math-ph",
"published": "20231027112245",
"title": "de Sitter Relativity Group"
} |
Stars in D_3-D_7 Holographic Model

M. Aleixo (1), C. H. Lenzi (1), W. de Paula (1), R. da Rocha (2)

(1) Instituto Tecnológico de Aeronáutica, DCTA, 12228-900 São José dos Campos, Brazil
(2) Federal University of ABC, Center of Mathematics, Santo André, 09580-210, Brazil
e-mail: [email protected]

January 14, 2024
====================================================================================

This work investigates static and dynamical quark star properties within a D_3-D_7 holographic model. We solve the Tolman-Oppenheimer-Volkoff equations for the quark matter equation of state obtained from the brane configuration and determine the range of model parameters in which the mass-radius diagram of the quark star family is compatible with recent NICER observational data for the pulsars PSR J0030+0451 and PSR J0740+6620. We show that the model supports stable configurations with maximum masses higher than 2 Solar masses, in line with the inferred masses of the pulsars PSR J1614-2230, PSR J0348+0432 and PSR J0740+6620. Furthermore, we show that there is a parametrization in which the tidal deformability parameter obtained for each component of the binary star system is consistent with the GW170817 event detected by the LIGO-Virgo collaboration.

§ INTRODUCTION

The detection of gravitational waves (GW) <cit.> and of a Gamma-ray burst (GRB) <cit.> from a binary neutron star (NS) merger, the GW170817 event, brought valuable new information for the description of compact star properties. In particular, the details of the NS structure become more relevant as the separation between the binary companions decreases <cit.>. In this context, the tidal deformability extracted from the GW170817 data <cit.> gives new dynamical constraints for NS models. Understanding the composition of the NS interior is an important open problem in astrophysics <cit.>. In the inner core, which is believed to reach very high densities, a few times the nuclear saturation density, theoretical models predict the existence of hyperons <cit.> or deconfined quark matter <cit.>. Indeed, there is also indirect observational evidence that opens the possibility of forming stable compact stars made only of quark matter, known as quark stars (QS), which can serve as laboratories to investigate the fundamental physics of systems at supranuclear densities under strong gravitational fields <cit.>. Therefore, exploring descriptions of NS with exotic content, either as quark-matter cores in hybrid stars <cit.> or as pure QS <cit.>, is an active area of study. The AdS/CFT correspondence allows one to treat strongly-coupled quantum systems in terms of gravitational duals <cit.>. This proposal has applications in many areas, from condensed matter systems <cit.> to the description of the quark-gluon plasma (QGP) produced in heavy-ion collision experiments <cit.>. In particular, it is worth mentioning how close the holographic prediction for the shear viscosity-to-entropy ratio of the QGP is to experimental data <cit.>; this ratio attains the lowest value among any kind of matter in Nature, the nearest to the Kovtun-Son-Starinets limit <cit.>.
The original duality maps the generating functional of the correlation functions of 𝒩 = 4 super Yang-Mills (SYM) theory in 4D flat space to partition functions of type IIB string theory in AdS_5× S^5<cit.>. Within the holographic concept, there are many attempts to incorporate some features of quantum chromodynamics (QCD), such as confinement, chiral symmetry breaking, and the hadronic spectrum, besides the phase structure at large baryon-chemical potentials, and the equation of state governing high-density regimes, as the ones expected to take place in the quarkyonic matter core of NS <cit.>. Here we are mainly interested in the description of dense QCD matter for the analysis of the QS properties. For this end, we focus on the D_3-D_7 system <cit.>, where a configuration of N_cD_3 branes and N_fD_7 probe branes are considered[N_c and N_f are the number of colors and flavors, respectively.]. By taking the 't Hooft limit, N_c→∞, g_s → 0 with λ = g^2_s N_c fixed and large, in the near-horizon limit of D_3 branes, one obtains AdS_5 × S^5 with the N_fD_7-branes wrapping AdS_5 × S^3<cit.>. The presence of the D_7 probe brane generates new degrees of freedom, whose low-energy dynamics are described by the Dirac-Born-Infeld (DBI) one, in AdS_5 × S^3, where the time component of the U(1) gauge field is dual to the chemical potential μ. These degrees of freedom correspond to open string fluctuations on the D_7-brane. The asymptotic distance between the D_3 and D_7-branes is a mass parameter m, which, in this context, is interpreted as the constituent quark mass <cit.>. This ulterior open-open string duality maps operators of mesonic type, in the conformal field theory, to D_7-brane fluctuations, on the gravitational sector, additionally to the original AdS/CFT, whose gravity is regulated by the near-horizon geometry of D_3-branes.Gauge-invariant field theory bilinear operators are, in this way, dual objects mapped to fluctuations of the D_7 probe brane living in theAdS_5 × S^5 compactified space.Considering the grand canonical ensemble, one can study the thermodynamic properties of the model, as implemented in Refs. <cit.>. The proposal regards obtaining the equation of state (EOS) for zero temperature of such holographic model and, with the use of the Tolman-Oppenheimer-Volkoff (TOV) equation for the hydrostatic equilibrium, to analyze static and dynamical properties of QS. There is a vast literature where holographic concepts were used to discuss compact stars, as reported by Refs. <cit.> andreferences therein.In what follows, we will obtain the free energy of the flavor fields,decoupled from the adjoint fields. After determining the holographic EOS for the quark matter, we calculate the mass distribution profile and the mass-radius diagram in terms of the constituent quark mass m. By varying the parameter m, we compare the results with the observational data analysis of the Neutron Star Interior Composition Explorer (NICER) on the values of mass and radius of the massive pulsars PSR J0030+0451<cit.> and PSR J0740+6620<cit.>. Finally, we consider an NS merger and compare the tidal deformability obtained in the holographic model with the data that comes from the LIGO-VIRGO Collaboration on the event GW170817 <cit.>.§ THE HOLOGRAPHIC MODEL In the adopted framework, one considers the 't Hooft limit for the D_3-D_7 system, obtaining an AdS_5× S^5 with the D_7-branes wrapping the AdS_5× S^3 space <cit.>. 
The metric reads

ds^2 = (u^2/ℛ^2) η_μν dx^μ dx^ν + (ℛ^2/u^2)(dρ̅^2 + ρ̅^2 dΩ_3^2 + dy^2 + dz^2),

where η_μν is the Minkowski metric in 4 dimensions and ℛ is the AdS radius. The holographic coordinate u is written as u^2 = ρ̅^2 + y^2 + z^2, and the coordinates ρ̅ and Ω_3 belong to the D_7-brane world volume. The DBI action has the form

S_D_7 = - N_f T_D_7 ∫ d^8ξ e^{-ϕ} √(-det(g + 2πα' F)),

where T_D_7 is the tension of the D_7-brane, g is the induced metric on the D_7 worldvolume, the AdS radius was set to one, ϕ is the dilaton field, α' is the inverse of the string tension, and F is the field strength of a U(1) gauge field A^μ, whose only non-vanishing component is the temporal one, A_t(ρ̅). Since we are dealing with a supersymmetric intersection, the DBI Lagrangian can be written as

ℒ_DBI = - 𝒩 ρ̅^3 √(1 + z'^2 - A_t'^2),

where 𝒩 = (π^2/2) N_f T_D_7. The variation of the Lagrangian with respect to z and A_t is zero. Therefore, one has two conserved quantities, c and d, respectively given by

c = -(1/𝒩) ∂ℒ_DBI/∂z' = ρ̅^3 z'/√(1 + z'^2 - A_t'^2),
d = (1/𝒩) ∂ℒ_DBI/∂A_t' = ρ̅^3 A_t'/√(1 + z'^2 - A_t'^2).

The holographic dictionary relates the constituent quark mass and the chemical potential μ_q to the asymptotic boundary values of the fields A_t and z; specifically, one has A_t(ρ̅→∞) = μ_q and z(ρ̅→∞) = m. After this identification, one can show that the conserved quantities c and d are related to the physical quantities μ_q and m <cit.>. At zero temperature, the thermodynamic potential in the grand canonical ensemble can be obtained from the regulated on-shell action <cit.>. When the chemical potential is greater than the constituent quark mass, the free energy density can be written as <cit.>

ℱ = ℱ_𝒩=4 + ℱ_flavor.

The first part of the r.h.s. in Eq. (<ref>) is associated with the color charge and vanishes in the zero temperature limit <cit.>. In this case, the flavor contribution reads <cit.>

ℱ_flavor = - (3/(4π^2)) (μ_q^2 - m^2)^2,

where the number of colors and flavors are both three and the 't Hooft coupling constant λ was chosen to reproduce the Stefan-Boltzmann expression at large density.

§ HOLOGRAPHIC COMPACT STARS

Considering the thermodynamic relation between the pressure and the free energy, p = - ℱ_flavor, together with the expression ε = μ_q ∂p/∂μ_q - p, where ε is the energy density and the label q refers to the quark, one obtains the EOS of the holographic model as <cit.>

ε = 3p + (2√(3) m^2/π) √p,

where p is the pressure. To verify that causality is respected in the model, it is useful to write the explicit expression of the sound velocity v_s, which is given by

v_s = √(∂p/∂ε) = √( π√p / (√(3) m^2 + 3π√p) ).

To ensure hydrostatic equilibrium for a spherically symmetric distribution of mass, one has to solve the TOV equations, written in natural units (G=c=1) as

dp(r)/dr = - [M(r)ρ(r)/r^2] [1 + 4πr^3 p(r)/M(r)] [1 + p(r)/ε(r)] [1 - 2M(r)/r]^-1,
dM(r)/dr = 4πr^2 ρ(r),

where M(r) is the Misner-Sharp mass inside the radius r and ρ(r) is the mass density.

§ TIDAL DEFORMABILITY

The LIGO-Virgo collaboration detected GW <cit.> and GRB from a binary NS merger <cit.>, the GW170817 event. This system provides valuable information concerning the deformations due to the gravitational interaction between the two involved neutron stars <cit.>, which can be given, to linear order, in terms of the dimensionless tidal deformability parameter Λ <cit.>, reading

Λ = Q_ij/ε_ij,

where Q_ij is the quadrupole moment and ε_ij is the tidal field.
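Before proceeding with the tidal response, it may help to see how a single stellar model follows from the EOS and TOV system above. The sketch below integrates the TOV equations with ε(p) = 3p + (2√(3) m^2/π)√p using a simple fourth-order Runge-Kutta march in geometric units; the unit-conversion constants, the step size, and the sample central pressure are our own illustrative assumptions rather than values quoted in the text, and sweeping the central pressure p_c traces out an M(R) sequence like the ones discussed below.

```python
import math

MEV4_TO_KM2 = 1.7229e-13   # assumed conversion: 1 MeV^4 of p or eps in km^-2 (G = c = 1)
MSUN_KM = 1.4766           # GM_sun/c^2 in km

def eps_of_p(p_km2, m_mev):
    """Holographic EOS eps = 3p + (2*sqrt(3)*m^2/pi)*sqrt(p), in geometric units."""
    p_mev4 = max(p_km2, 0.0) / MEV4_TO_KM2
    eps_mev4 = 3.0 * p_mev4 + (2.0 * math.sqrt(3.0) * m_mev**2 / math.pi) * math.sqrt(p_mev4)
    return eps_mev4 * MEV4_TO_KM2

def tov_rhs(r, p, M, m_mev):
    eps = eps_of_p(p, m_mev)
    dMdr = 4.0 * math.pi * r**2 * eps
    dpdr = -(eps + p) * (M + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * M))
    return dpdr, dMdr

def solve_star(p_c_mev4, m_mev, dr=1.0e-3):
    """Integrate the TOV equations outward until p -> 0; return (R [km], M [M_sun])."""
    p_c = p_c_mev4 * MEV4_TO_KM2
    r, p, M = dr, p_c, 0.0
    while p > 1.0e-10 * p_c:
        k1p, k1m = tov_rhs(r, p, M, m_mev)
        k2p, k2m = tov_rhs(r + dr/2, p + dr*k1p/2, M + dr*k1m/2, m_mev)
        k3p, k3m = tov_rhs(r + dr/2, p + dr*k2p/2, M + dr*k2m/2, m_mev)
        k4p, k4m = tov_rhs(r + dr, p + dr*k3p, M + dr*k3m, m_mev)
        p += dr * (k1p + 2*k2p + 2*k3p + k4p) / 6.0
        M += dr * (k1m + 2*k2m + 2*k3m + k4m) / 6.0
        r += dr
        if p <= 0.0:
            break
    return r, M / MSUN_KM

# One illustrative star of the m = 300 MeV sequence (p_c ~ 260 MeV/fm^3, our own choice).
print(solve_star(p_c_mev4=2.0e9, m_mev=300.0))
```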
The induced quadrupole moment is associated with the deformation of a spherically symmetric object, such as the flattening at the poles. In terms of the second Love number k_2, we have

Λ = (2/3) k_2 C^-5,

where C = M/R is the compactness. In the quasi-static regime, the second Love number is given by <cit.>

k_2 = (8C^5/5) (1 - 2C)^2 [2 + 2C(y_R - 1) - y_R] × { 2C [6 - 3y_R + 3C(5y_R - 8)] + 4C^3 [13 - 11y_R + C(3y_R - 2) + 2C^2(1 + y_R)] + 3(1 - 2C)^2 [2 - y_R + 2C(y_R - 1)] ln(1 - 2C) }^-1,

where y_R = y(R). The function y(r) is a solution of the differential equation r(dy/dr) + y^2 + y F(r) + r^2 Q(r) = 0, with

F(r) = [1 - 4πr^2 (ε(r) - p(r))]/g(r),
Q(r) = (4π/g(r)) [5ε(r) + 9p(r) + (ε(r) + p(r))/v_s^2(r) - 6/(4πr^2)] - 4 [ (M(r) + 4πr^3 p(r)) / (r^2 g(r)) ]^2,
g(r) = 1 - 2M(r)/r.

In addition, we define the chirp mass parameter ℳ as

ℳ ≡ [m_1^3 m_2^3/(m_1 + m_2)]^1/5,

which is a function of the masses of the two NS companions, m_1 and m_2. This parameter is relevant to describe the rate at which energy is carried away by the gravitational waves. Indeed, the tidal deformability analysis of the GW170817 observational data from LIGO-Virgo is made for a specific value of the system chirp mass <cit.>.

§ RESULTS

An important parameter to be analyzed is the speed of sound of the model. With this information, it is possible to check that the model does not violate the causality principle (∂p/∂ε < 1). Fig. <ref> presents the speed of sound curves, v_s^2, as a function of the energy density ε. As can be seen, none of the parametrizations violates the causality principle. The solutions of the system of differential equations given by Eqs. (<ref>), (<ref>) and (<ref>) have been obtained for constituent quark masses ranging from m = 300 MeV to m = 360 MeV. The initial conditions used are p(0) = p_c and M(0) = 0, where p_c is the central pressure. The radius R of the star is defined by p(R) = 0. The outcomes are the M(R) sequences of compact stars compatible with the adopted model. The rationale behind the choice of the range of values for m is the following: since m is interpreted as the constituent quark mass, a typical value can be obtained from the infrared value of the quark mass function <cit.>, whose value of 345 MeV was obtained from lattice QCD calculations of the quark propagator <cit.>. The proposal of this work is to explore a range of values around this number in order to see whether the model is able to describe observational data on static and dynamical properties of NS. It will be shown that for m = 300 MeV the maximum mass reaches 2 M_⊙, whereas for m = 360 MeV the model can describe the deformability parameter of the binary star system for the GW170817 event. Figs. <ref> and <ref> present the radial profiles for the maximum-mass star of each parametrization. Fig. <ref> shows that the maximum central pressure is obtained for m = 360 MeV, while the minimum is attained for m = 300 MeV. Fig. <ref> illustrates that the radius of the maximum-mass star decreases monotonically with the constituent quark mass. Fig. <ref> shows the mass-radius sequences of QS using the D_3-D_7 holographic EOS. Each sequence of stars was obtained with a particular value of the constituent quark mass, ranging from m = 300 MeV to m = 360 MeV. In this figure, it is clear that increasing the constituent quark mass makes the maximum stellar mass decrease. Note that within this framework it is even possible to achieve masses higher than 2 Solar masses (for m ≤ 300 MeV), which is in agreement with data reported in Refs. <cit.>.
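The tidal quantities used in the comparisons that follow come from the Love-number expression given above; the sketch below implements k_2(C, y_R) and Λ = (2/3) k_2 C^-5. The sample values of C and y_R are arbitrary illustrative inputs only (in an actual calculation y_R is obtained by integrating the y(r) equation together with the TOV equations).

```python
import math

def love_number_k2(C, yR):
    """Second Love number k_2 for compactness C = M/R and y_R = y(R)."""
    num = (8.0 * C**5 / 5.0) * (1.0 - 2.0 * C)**2 * (2.0 + 2.0 * C * (yR - 1.0) - yR)
    den = (2.0 * C * (6.0 - 3.0 * yR + 3.0 * C * (5.0 * yR - 8.0))
           + 4.0 * C**3 * (13.0 - 11.0 * yR + C * (3.0 * yR - 2.0) + 2.0 * C**2 * (1.0 + yR))
           + 3.0 * (1.0 - 2.0 * C)**2 * (2.0 - yR + 2.0 * C * (yR - 1.0)) * math.log(1.0 - 2.0 * C))
    return num / den

def tidal_deformability(C, yR):
    """Dimensionless tidal deformability Lambda = (2/3) k_2 C^-5."""
    return 2.0 * love_number_k2(C, yR) / (3.0 * C**5)

# Purely illustrative inputs (not values taken from the text):
C, yR = 0.15, 1.0
print(love_number_k2(C, yR), tidal_deformability(C, yR))
```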
In addition, our computations have been compared with recent observational data analyses from NICER. The millisecond pulsars considered are PSR J0030+0451 <cit.> and PSR J0740+6620 <cit.>. Independent analyses for PSR J0030+0451 give inferred masses of 1.34^+0.15_-0.16 M_⊙ <cit.> and 1.44^+0.15_-0.14 M_⊙ <cit.>, while the radius estimates are 12.71^+1.14_-1.19 km <cit.> and 13.02^+1.24_-1.06 km <cit.>. For PSR J0740+6620, NICER reported the value of 2.072^+0.067_-0.066 M_⊙ <cit.> for the mass, while the radius estimates are 13.7^+2.6_-1.5 km <cit.> and 12.39^+1.30_-0.98 km <cit.>. Those ranges of values are represented by the blue (PSR J0030+0451) and red (PSR J0740+6620) regions of Fig. <ref>. One can see that the model is compatible with the observational data. The region of stability of the compact star sequence can be obtained from Fig. <ref>. The maximum mass for each parametrization is shown by a circle. All the stars to the left of this point are stable, since ∂M/∂ε_c > 0 <cit.>. Here the static stability criterion is employed, as long as the compact stars under consideration have only one phase. For each parametrization, one can solve the TOV equations taking into account the holographic EOS. We use those solutions for ε(r) and p(r) to calculate the relativistic tidal deformability. To this end, we use Eqs. (<ref>) and (<ref>), performing the integration from the center (r = 0) to the star's surface (r = R). The outcomes are represented in Fig. <ref>. For the constituent quark mass of 360 MeV, the tidal deformability obtained is consistent with the GW170817 event. Fig. <ref> presents the dimensionless tidal deformability parameters, Λ_1 - Λ_2, for the components of the binary compact star merger, obtained with the chirp mass of the GW170817 event, ℳ = 1.188^+0.004_-0.002 M_⊙. The outcomes are compared against the LIGO-Virgo confidence curves at the 50% and 90% levels in the low-spin prior scenario <cit.>. For the constituent quark mass of 360 MeV, the model reproduces the observational data of the GW170817 event regarding tidal deformability.

§ SUMMARY AND CONCLUDING REMARKS

In this work, we analyzed both static and dynamical QS properties within a holographic description. The mass-radius relation and the tidal deformability parameter were compared against recent observational data. We solved the TOV equations using the EOS of the D_3-D_7 holographic model to describe the quark matter. In this framework, one has AdS_5 × S^5 with the N_f D_7-branes wrapping AdS_5 × S^3 <cit.>, and the constituent quark mass is the only adjusted parameter of the EOS. We studied the properties of the system for a range of values from m = 300 MeV to m = 360 MeV. We obtained the M(R) sequence of compact stars, highlighting the regions of stability, see Fig. <ref>. It is shown that the holographic description is compatible with NICER observations for the pulsars PSR J0030+0451 and PSR J0740+6620. Decreasing the constituent quark mass gives a higher maximum stellar mass for the last stable compact star. In particular, for m = 300 MeV, the holographic model can achieve the observed value of two Solar masses <cit.>. In addition, we showed that the tidal deformability parameter for the constituent quark mass of m = 360 MeV is compatible with the values associated with the GW170817 event observed by the LIGO-Virgo collaboration (see Figs. <ref> and <ref>). The maximum mass for this parametrization is 1.4 M_⊙ and belongs to a region of NICER data (blue region of Fig. <ref>).
On the other hand, our exploratory study suggests that this holographic model is not able to reproduce simultaneously the tidal deformability of GW170817 event and a stellar mass of 2 M_⊙. It indicates that further improvements should be implemented as, for example, considering a possible contribution of strange quarks for the equation of state <cit.>.QS can describe realistic astrophysical objects, whose quarkyonic matter in the core may carry effects of quantum gravity in AdS/CFT, as reported in Ref. <cit.>. The conformal traceless tensor fields, the decay rate of sound waves, the bulk viscosity, the pressure, and the energy density of the QGP were shown to support meaningful quantum corrections due to a functional measure, also encoding the instability of the QGP. Within this framework, the results in Secs. <ref>–<ref>may beslightly refined when very high-energy processes set in, making the thermodynamic variables acquire these quantum gravity effects. For instance,quantum gravity effects account for Eq. (<ref>) in Sec. <ref> and the functions F(r) and G(r) in Sec. <ref> to be corrected up to ∼0.86%, when compared to the standard QS without quantum gravity corrections in AdS/CFT. These effects will not significantly change the results obtained in our work, on the scale of energy here studied. Finally,the stability ofQS, in particular displayed in Fig. <ref>, can be alternatively probed by information entropy methods, including the configurational entropy <cit.> and the holographic entanglement entropy in QCD <cit.>. The authors thank Niko Jokela and Carlos Hoyos for fruitful discussions. M.A. acknowledges the partial support of the National Council for Scientific and Technological Development CNPq (Grant No. 400879/2019-0). C. H. Lenzi is thankful to the São Paulo Research Foundation FAPESP (Grant No. 2020/05238-9). W.d.P. acknowledges the partial support of CNPq (Grant No.313030/2021-9) and the Coordination for the Improvement of Higher Education Personnel CAPES (Grant No. 88881.309870/2018-01). R.d.R. is grateful to FAPESP (Grant No. 2021/01089-1 and No. 2022/01734-7), CNPq (Grant No. 303390/2019-0), and CAPES-PrInt (Grant No. 88887.897177/2023-00), for partial financial support; and to Prof. Jorge Noronha and the Illinois Center for Advanced Studies of the Universe, University of Illinois at Urbana-Champaign, for the hospitality. 99 LIGOScientific:2017ync B. P. Abbott et al. “Multi-messenger Observations of a Binary Neutron Star Merger,” Astrophys. J. Lett. 848 (2017) no.2, L12LIGOScientific:2017zic B. P. Abbott et al. [LIGO Scientific, Virgo, Fermi-GBM and INTEGRAL], “Gravitational Waves and Gamma-rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A,” Astrophys. J. Lett. 848 (2017) no.2, L13LIGOScientific:2017vwq B. P. Abbott et al. [LIGO Scientific and Virgo], “GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral,” Phys. Rev. Lett. 119 (2017) no.16, 161101LIGOScientific:2018cki B. P. Abbott et al. [LIGO Scientific and Virgo], “GW170817: Measurements of neutron star radii and equation of state,” Phys. Rev. Lett. 121 (2018) no.16, 161101LIGOScientific:2018mvr B. P. Abbott et al. [LIGO Scientific and Virgo], “GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs,” Phys. Rev. X 9 (2019) no.3, 031040Lattimer:2015nhk J. M. Lattimer, M. Prakash, “The Equation of State of Hot, Dense Matter and Neutron Stars,” Phys. Rept. 621 (2016), 127-164Glendenning:1991es N. 
K. Glendenning and S. A. Moszkowski, “Reconciliation of neutron star masses and binding of the lambda in hypernuclei,” Phys. Rev. Lett. 67 (1991), 2414-2417 Bombaci:2008wg I. Bombaci, P. K. Panda, C. Providencia and I. Vidana, “Metastability of hadronic compact stars,” Phys. Rev. D 77 (2008), 083002Dexheimer:2008ax V. Dexheimer and S. Schramm, “Proto-Neutron and Neutron Stars in a Chiral SU(3) Model,” Astrophys. J. 683 (2008), 943-948Bodmer:1971we A. R. Bodmer, “Collapsed nuclei,” Phys. Rev. D 4 (1971), 1601-1606 Witten:1984rs E. Witten, “Cosmic Separation of Phases,” Phys. Rev. D 30 (1984), 272-285 Terazawa:1978ni H. Terazawa, K. Akama and Y. Chikashige, “How to Liberate Quarks From Chromodynamical Confinement,” Prog. Theor. Phys. 60 (1978), 1521 Lattimer:2006xb J. M. Lattimer, M. Prakash, “Neutron Star Observations: Prognosis for Equation of State Constraints,” Phys. Rept. 442 (2007), 109-165Bombaci:1997zz I. Bombaci, “Observational evidence for strange matter in compact objects from the x-ray burster U-4 1820-30,” Phys. Rev. C 55 (1997), 1587-1590 Cheng:1998qc K. S. Cheng, Z. G. Dai, D. M. Wei and T. Lu, “Is GRO J1744-28 a strange star?,” Science 280 (1998), 407Li:1999wt X. D. Li, I. Bombaci, M. Dey, J. Dey, E. P. van den Heuvel, “Is SAX J1808.4-3658 a strange star?,” Phys. Rev. Lett. 83 (1999) 3776Li:1999mk X. D. Li, S. Ray, J. Dey, M. Dey, I. Bombaci, “On the Nature of the compact star in 4u 1728-34,” Astrophys. J. Lett. 527 (1999) L51 Burgio:2011wt G. F. Burgio, H. J. Schulze and A. Li, “Hyperon stars at finite temperature in the Brueckner theory,” Phys. Rev. C 83 (2011), 025804Alford:2004pf M. Alford, M. Braby, M. W. Paris and S. Reddy, “Hybrid stars that masquerade as neutron stars,” Astrophys. J. 629 (2005), 969-978Pereira:2017rmp J. P. Pereira, C. V. Flores and G. Lugones, “Phase transition effects on the dynamical stability of hybrid neutron stars,” Astrophys. J. 860 (2018) no.1, 12Blaschke:2022egm D. Blaschke, U. Shukla, O. Ivanytskyi and S. Liebing, “Effect of color superconductivity on the mass of hybrid neutron stars in an effective model with perturbative QCD asymptotics,” Phys. Rev. D 107 (2023) no.6, 063034Lobato:2020fxt R. Lobato, O. Lourenço, P. H. R. S. Moraes, C. H. Lenzi, M. de Avellar, W. de Paula, M. Dutra and M. Malheiro, “Neutron stars in f(ℛ,𝒯)) gravity using realistic equations of state in the light of massive pulsars and GW170817,” JCAP 12 (2020), 039Lenzi:2022ypb C. H. Lenzi, M. Dutra, O. Lourenço, L. L. Lopes and D. P. Menezes, “Dark matter effects on hybrid star properties,” Eur. Phys. J. C 83 (2023) no.3, 266Haensel:1986qb P. Haensel, J. L. Zdunik and R. Schaeffer, “Strange quark stars,” Astron. Astrophys. 160 (1986), 121-128Xu:2003xe R. X. Xu, “Solid quark matter?,” Astrophys. J. Lett. 596 (2003), L59-L62Lugones:2015bya G. Lugones, “From quark drops to quark stars: some aspects of the role of quark matter in compact stars,” Eur. Phys. J. A 52 (2016) no.3, 53Lourenco:2021lpn O. Lourenço, C. H. Lenzi, M. Dutra, E. J. Ferrer, V. de la Incera, L. Paulucci and J. E. Horvath, “Tidal deformability of strange stars and the GW170817 event,” Phys. Rev. D 103 (2021) no.10, 103010Chu:2023rty P. C. Chu, X. H. Li, H. Liu, M. Ju and Y. Zhou, “Properties of isospin asymmetric quark matter in quark stars,” Phys. Rev. C 108 (2023) no.2, 025808 Maldacena:1997re J. M. Maldacena, “The Large N limit of superconformal field theories and supergravity,” Adv. Theor. Math. Phys. 2 (1998) 231Sachdev:2010ch S. Sachdev, “Condensed Matter and AdS/CFT,” Lect. Notes Phys. 
828 (2011) 273 Policastro:2001yc G. Policastro, D. T. Son and A. O. Starinets, “The Shear viscosity of strongly coupled N=4 supersymmetric Yang-Mills plasma,” Phys. Rev. Lett. 87 (2001) 081601Brambilla:2014jmp N. Brambilla, S. Eidelman, P. Foka, S. Gardner, A. S. Kronfeld, M. G. Alford, R. Alkofer, M. Butenschoen, T. D. Cohen and J. Erdmenger, et al.“QCD and Strongly Coupled Gauge Theories: Challenges and Perspectives,” Eur. Phys. J. C 74 (2014) no.10, 2981Bernhard:2019bmu J. E. Bernhard, J. S. Moreland and S. A. Bass, “Bayesian estimation of the specific shear and bulk viscosity of quark–gluon plasma,” Nature Phys. 15 (2019) 1113 Kovtun:2004de P. Kovtun, D. T. Son and A. O. Starinets, “Viscosity in strongly interacting quantum field theories from black hole physics,” Phys. Rev. Lett. 94 (2005) 111601Witten:1998qj E. Witten, “Anti-de Sitter space and holography,” Adv. Theor. Math. Phys. 2 (1998), 253-291Klebanov:2000hb I. R. Klebanov and M. J. Strassler, “Supergravity and a confining gauge theory: Duality cascades and chi SB resolution of naked singularities,” JHEP 08 (2000), 052Klebanov:2000nc I. R. Klebanov, A. A. Tseytlin, “Gravity duals of supersymmetric SU(N) x SU(N+M) gauge theories,” Nucl. Phys. B 578 (2000) 123 Maldacena:2000yy J. M. Maldacena and C. Nunez, “Towards the large N limit of pure N=1 superYang-Mills,” Phys. Rev. Lett. 86 (2001), 588-591Karch:2006pv A. Karch, E. Katz, D. T. Son and M. A. Stephanov, “Linear confinement and AdS/QCD,” Phys. Rev. D 74 (2006), 015005dePaula:2008fp W. de Paula, T. Frederico, H. Forkel and M. Beyer, “Dynamical AdS/QCD with area-law confinement and linear Regge trajectories,” Phys. Rev. D 79 (2009), 075019Bianchi:2010cy M. Bianchi and W. de Paula, “On Exact Symmetries and Massless Vectors in Holographic Flows and other Flux Vacua,” JHEP 04 (2010), 113dePaula:2009za W. de Paula and T. Frederico, “Scalar mesons within a dynamical holographic QCD model,” Phys. Lett. B 693 (2010), 287-291Ballon-Bayona:2023zal A. Ballon-Bayona, T. Frederico, L. A. H. Mamani and W. de Paula, “Dynamical holographic QCD model for spontaneous chiral symmetry breaking and confinement,” Phys. Rev. D 108 (2023) no.10, 106016Karch:2002sh A. Karch and E. Katz, “Adding flavor to AdS / CFT,” JHEP 06 (2002), 043Karch:2007br A. Karch and A. O'Bannon, “Holographic thermodynamics at finite baryon density: Some exact results,” JHEP 11 (2007), 074Hoyos:2021uff C. Hoyos, N. Jokela and A. Vuorinen, “Holographic approach to compact stars and their binary mergers,” Prog. Part. Nucl. Phys. 126 (2022) 103972Mateos:2006nu D. Mateos, R. C. Myers, R. M. Thomson, “Holographic phase transitions with fundamental matter,” Phys. Rev. Lett. 97 (2006) 091601Kobayashi:2006sb S. Kobayashi, D. Mateos, S. Matsuura, R. C. Myers and R. M. Thomson, “Holographic phase transitions at finite baryon density,” JHEP 02 (2007), 016Mateos:2007vn D. Mateos, R. C. Myers and R. M. Thomson, “Thermodynamics of the brane,” JHEP 05 (2007), 067Karch:2008fa A. Karch, D. T. Son, A. Starinets, “Zero Sound from Holography,”Nakamura:2007nx S. Nakamura, Y. Seo, S. J. Sin and K. P. Yogendran, “Baryon-charge Chemical Potential in AdS/CFT,” Prog. Theor. Phys. 120 (2008) 51 Erdmenger:2008yj J. Erdmenger, M. Kaminski, P. Kerner and F. Rust, “Finite baryon and isospin chemical potential in AdS/CFT with flavor,” JHEP 11 (2008) 031Ammon:2008fc M. Ammon, J. Erdmenger, M. Kaminski and P. Kerner, “Superconductivity from gauge/gravity duality with flavor,” Phys. Lett. B 680 (2009) 516 Basu:2008bh P. Basu, J. He, A. Mukherjee and H. H. 
Shieh, “Superconductivity from D_3/D_7: Holographic Pion Superfluid,” JHEP 11 (2009), 070Hoyos:2016zke C. Hoyos, D. Rodríguez Fernández, N. Jokela and A. Vuorinen, “Holographic quark matter and neutron stars,” Phys. Rev. Lett. 117 (2016) 032501Annala:2017tqz E. Annala, C. Ecker, C. Hoyos, N. Jokela, D. Rodríguez Fernández and A. Vuorinen, “Holographic compact stars meet gravitational wave constraints,” JHEP 12 (2018) 078BitaghsirFadafan:2019ofb K. Bitaghsir Fadafan, J. Cruz Rojas and N. Evans, “Deconfined, Massive Quark Phase at High Density and Compact Stars: A Holographic Study,” Phys. Rev. D 101 (2020) no.12, 126005BitaghsirFadafan:2020otb K. Bitaghsir Fadafan, J. Cruz Rojas and N. Evans, “Holographic quark matter with colour superconductivity and a stiff equation of state for compact stars,” Phys. Rev. D 103 (2021) no.2, 026012Mamani:2020pks L. A. H. Mamani, C. V. Flores and V. T. Zanchin, “Phase diagram and compact stars in a holographic QCD model,” Phys. Rev. D 102 (2020) no.6, 066006daRocha:2017cxu R. da Rocha, “Dark SU(N) glueball stars on fluid branes,” Phys. Rev. D 95 (2017)124017 Meert:2020sqv P. Meert and R. da Rocha, “Probing the minimal geometric deformation with trace and Weyl anomalies,” Nucl. Phys. B 967 (2021) 115420 daRocha:2021aww R. da Rocha, “Gravitational decoupling and superfluid stars,” Eur. Phys. J. C 81 (2021) 845Most:2021zvc E. R. Most, S. P. Harris, C. Plumberg, M. G. Alford, J. Noronha, J. Noronha-Hostler, F. Pretorius, H. Witek and N. Yunes, “Projecting the likely importance of weak-interaction-driven bulk viscosity in neutron star mergers,” Mon. Not. Roy. Astron. Soc. 509 (2021) 1096Kovensky:2021kzl N. Kovensky, A. Poole and A. Schmitt, “Building a realistic neutron star from holography,” Phys. Rev. D 105 (2022) no.3, 034022Demircik:2021zll T. Demircik, C. Ecker and M. Järvinen, “Dense and Hot QCD at Strong Coupling,” Phys. Rev. X 12 (2022) no.4, 041012Riley:2019yda T. E. Riley, A. L. Watts, S. Bogdanov, P. S. Ray, R. M. Ludlam, S. Guillot, Z. Arzoumanian, C. L. Baker, A. V. Bilous and D. Chakrabarty, et al.“A NICER View of PSR J0030+0451: Millisecond Pulsar Parameter Estimation,” Astrophys. J. Lett. 887 (2019) no.1, L21Miller:2019cac M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, A. K. Harding, W. C. G. Ho and J. M. Lattimer, et al.“PSR J0030+0451 Mass and Radius from NICER Data and Implications for the Properties of Neutron Star Matter,” Astrophys. J. Lett. 887 (2019) no.1, L24Miller:2021qha M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, W. C. G. Ho, J. M. Lattimer, et al.“The Radius of PSR J0740+6620 from NICER and XMM-Newton Data,” Astrophys. J. Lett. 918 (2021) no.2, L28Riley:2021pdl T. E. Riley, A. L. Watts, P. S. Ray, S. Bogdanov, S. Guillot, S. M. Morsink, A. V. Bilous, Z. Arzoumanian, D. Choudhury and J. S. Deneva, et al.“A NICER View of the Massive Pulsar PSR J0740+6620 Informed by Radio Timing and XMM-Newton Spectroscopy,” Astrophys. J. Lett. 918 (2021) no.2, L27Damour:2009vw T. Damour and A. Nagar, “Relativistic tidal properties of neutron stars,” Phys. Rev. D 80 (2009), 084035Hinderer:2007mb T. Hinderer, “Tidal Love numbers of neutron stars,” Astrophys. J. 677 (2008) 1216Castro:2023bij A. Castro, W. de Paula, T. Frederico and G. Salmè, “Exploring the 0 bound state with dressed quarks in Minkowski space,” Phys. Lett. B 845 (2023), 138159Duarte:2022yur D. C. Duarte, T. Frederico, W. de Paula and E. 
Ydrefors, “Dynamical mass generation in Minkowski space at QCD scale,” Phys. Rev. D 105 (2022) no.11, 114055Oliveira:2018lln O. Oliveira, P. J. Silva, J. I. Skullerud and A. Sternbeck, “Quark propagator with two flavors of O(a)-improved Wilson fermions,” Phys. Rev. D 99 (2019) no.9, 094506Demorest:2010bx P. Demorest, T. Pennucci, S. Ransom, M. Roberts and J. Hessels, “Shapiro Delay Measurement of A Two Solar Mass Neutron Star,” Nature 467 (2010) 1081Antoniadis:2013pzd J. Antoniadis, P. C. C. Freire, N. Wex, T. M. Tauris, R. S. Lynch, M. H. van Kerkwijk, M. Kramer, C. Bassa, V. S. Dhillon and T. Driebe, et al.“A Massive Pulsar in a Compact Relativistic Binary,” Science 340 (2013) 6131NANOGrav:2019jur H. T. Cromartie et al. [NANOGrav], “Relativistic Shapiro delay measurements of an extremely massive millisecond pulsar,” Nature Astron. 4 (2019) no.1, 72 Shapiro S. L. Shapiro, S. A. Teukolsky, “Black holes, white dwarfs, and neutron stars: the physics of compact objects”, first edn. (Wiley, 1983) Kuntz:2022kcw I. Kuntz and R. da Rocha, “Transport coefficients in AdS/CFT and quantum gravity corrections due to a functional measure,” Nucl. Phys. B 993 (2023) 116258 daRocha:2021jzn R. da Rocha, “AdS graviton stars and differential configurational entropy,” Phys. Lett. B 823 (2021) 136729Casadio:2022pla R. Casadio, R. da Rocha, P. Meert, L. Tabarroni and W. Barreto, “Configurational entropy of black hole quantum cores,” Class. Quant. Grav. 40 (2023) 075014 daRocha:2021xwq R. da Rocha, “Holographic entanglement entropy, deformed black branes, and deconfinement in AdS/QCD,” Phys. Rev. D 105 (2022) no.2, 026014 | http://arxiv.org/abs/2310.17719v2 | {
"authors": [
"M. Aleixo",
"C. H. Lenzi",
"W. de Paula",
"R. da Rocha"
],
"categories": [
"hep-ph",
"astro-ph.HE",
"nucl-th"
],
"primary_category": "hep-ph",
"published": "20231026182604",
"title": "Quark Stars in $D_3$-$D_7$ Holographic Model"
} |
Soft Wrist Exosuit Actuated by Fabric Pneumatic Artificial Muscles
Katalin Schäffer, Yasemin Ozkan-Aydin, and Margaret M. Coad
This work was supported by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NKTA funding scheme (project no. TKP2021-NKTA-66). Katalin Schäffer is with the Department of Aerospace and Mechanical Engineering, University of Notre Dame, Notre Dame IN 46556, USA, and also with the Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, 1083 Budapest, Hungary (e-mail: [email protected]). Yasemin Ozkan-Aydin is with the Department of Electrical Engineering, University of Notre Dame, Notre Dame IN 46556, USA (e-mail: [email protected]). Margaret M. Coad is with the Department of Aerospace and Mechanical Engineering, University of Notre Dame, Notre Dame IN 46556, USA (e-mail: [email protected]).
January 14, 2024
====================================================================================================
Recently, soft actuator-based exosuits have gained interest due to their high strength-to-weight ratio, inherent safety, and low cost. We present a novel wrist exosuit actuated by fabric pneumatic artificial muscles that can move the wrist in flexion/extension and ulnar/radial deviation. We derive a model representing the torque exerted by the exosuit and introduce a model-based optimization methodology for the selection of placement parameters of the exosuit muscles. We evaluate the accuracy of the model by measuring the exosuit torques throughout the full range of wrist flexion/extension. When accounting for the displacement of the mounting points, the model predicts the exosuit torque with a mean absolute error of 0.279 Nm, which is 26.1% of the average measured torque. To explore the capabilities of the exosuit to move the human body, we measure its range of motion on a passive human wrist; the exosuit is able to achieve 55.0% of the active biological range in flexion, 69.1% in extension, 68.6% in ulnar deviation, and 68.4% in radial deviation. Finally, we demonstrate the device controlling the passive human wrist to move to a desired orientation in the flexion/extension plane and along a two-degree-of-freedom trajectory.
Wrist exosuit, pneumatic artificial muscle, fabric actuators, exosuit torque, model-based design optimization.
§ INTRODUCTION
Upper limb wearable assistive devices can be useful in a variety of scenarios.
Healthy people can benefit from physical assistance to avoid fatigue and muscle strain during repetitive tasks or to augment their natural physical abilities <cit.>, and the physically impaired can benefit from assistance for activities of daily living <cit.>, or assistance and resistance during rehabilitation exercises <cit.>. Especially over the last decade, soft wearable assistive devices, also known as soft exosuits, have attracted much interest from the research community <cit.>. As compared to their rigid counterparts, soft exosuits have the potential to solve several design and control challenges, including achieving a high strength-to-weight ratio, inherent safety, comfort, and low cost, as well as avoiding joint misalignment between the human body and the wearable device <cit.>.The wrist joint serves as an advantageous foundation for upper limb exosuit design. The human wrist can move in two degrees of freedom (flexion/extension, and ulnar/radial deviation), with a potential third degree of freedom of forearm pronation/supination depending on how close to the elbow an exosuit may be attached <cit.>. Also, the range of motion is different along each direction. Devices developed for the wrist could potentially be adapted for the elbow, shoulder, or hand to make a complete upper limb exosuit as needed by the user. Various soft wrist exosuits have been developed in recent years, targeting various degrees of freedom and using various actuation technologies. While not strictly soft, the two-degree-of-freedom wrist exoskeleton presented in <cit.> contains some embedded compliance that reduces joint misalignment compared to completely rigid devices, but it lacks other benefits of a fully soft design, such as garment-based anchoring and low profile wearable components. The cable-driven wrist exosuits presented in <cit.> are low profile, but the friction buildup in the cables makes control challenging. The shape memory alloy-actuated wrist exosuit presented in <cit.> is also low profile, but it is limited in its torque and range of motion. The McKibben pneumatic artificial muscle-based exosuits presented in <cit.> and the elastomeric pneumatic actuator-based exosuits presented in <cit.> are lightweight but require somewhat bulky actuators. The textile pneumatic actuator-based exosuits presented in <cit.> only consider the pronation-supination degree of freedom.Fabric pneumatic artificial muscles (fPAMs) <cit.> are a recently developed class of linear contractile actuator based on textile technology <cit.> with significant potential for use in soft exosuit applications. These actuators are made of a single layer of woven, bias-cut airtight fabric formed into a tube shape. The bias refers to the tilted orientation of the non-stretchable fibers along the tube. When these actuators are inflated, they expand radially and contract in length due to the stretchability of the fabric along the bias. Due to its simple structure, the fPAM is low-cost and easy to fabricate. Compared to McKibben actuators, fPAMs are lightweight and fully foldable, and they have a near-linear force-contraction relationship, an absence of hysteresis, a high fatigue life, and a quick response to dynamic inputs <cit.>. Exploring the advantages and limitations of the use of this actuator for exosuits allows us to determine the optimal scenarios and use cases for its implementation. In this paper, we present a novel soft wrist exosuit actuated by fabric pneumatic artificial muscles (Fig. <ref>). 
Our research focuses on assessing the capability of this lightweight, low-cost, and soft actuator within the context of an exosuit prototype. We aim to determine its effectiveness in generating sufficient torque similar to biological joints and in moving the wrist along a large range of motion in the two degrees of freedom of flexion/extension and ulnar/radial deviation. Also, the device is designed with a mechanical stop that prevents the wrist from going beyond the natural joint limits, making the device inherently safe. This stop is based on the fact that the fPAM actuator cannot exert force beyond its maximum contraction ratio and has an inherent limit in stretchability. These two properties, however, impose limitations on the exosuit-assisted range of motion and on the applied torque at wrist angles where the actuator is close to its fully contracted state.We present a model-based methodology for the selection of parameters governing the design of fPAMs and their positioning on the body aiming to attain a prescribed torque profile that spans the entire range of motion of the wrist. Our optimization process selects placement parameters with the objective of closely approximating the peak biological flexion torques exhibited by the human wrist. We also provide experimental validation of the modeled torques throughout the range of wrist flexion/extension, and we compare the biological and exosuit-assisted range of motion in both flexion/extension and ulnar/radial deviation. Finally, we present a demonstration of the device actively controlled to move the wrist to a desired flexion/extension angle with a pair of antagonistic artificial muscles and to follow a desired two-degree-of-freedom trajectory by coordinating the movement of both pairs of actuators.§ DESIGN AND FABRICATION This section discusses the considerations which led to the current design of the exosuit. After an overview of the design concept, we introduce the exosuit prototype in detail. We describe the wearable elements, including the mounting methods to connect fPAMs, as well as the off-board components, including the electronics and the parts of the pneumatic system.§.§ Design Overview The main goals of our study are to design a lightweight, wearable, soft device to assist the mobility of the human wrist and to explore the capabilities of fPAMs for actuating soft exosuits. While there are several criteria for designing wearable devices, such as safety, ergonomics, autonomy, and cost <cit.>, here, we focus on creating the largest possible range of wrist motion and torque application. Because of the properties of the actuator, two other design criteria, safety and low cost of the wearable parts, are automatically satisfied.Our wrist exosuit design (Fig. <ref>) consists of four pneumatic artificial muscles arranged radially around the wrist at 90^∘ to each other. When each muscle is contracted upon applying positive pressure, it promotes one of the following wrist movements: flexion, extension, ulnar deviation, and radial deviation of the wrist. Due to the described placement of the fPAMs, there are antagonistic pairs of actuators in the sagittal and frontal planes. This actuator configuration allows movement limits to be an inherent part of the physical system. When one muscle inflates, the initially minimal resting force of the opposite side muscle increases while the muscle is stretched. The antagonist muscle acts as a hard stop when it reaches its maximum length change. 
Also, the limited contraction ratio of the agonist fPAM stops the movement at a given joint angle. §.§ Exosuit PrototypeThe exosuit prototype is shown in Figs. <ref> and <ref>. The main wearable components are a fabric elbow brace, a fabric glove, the fPAMs, and two inertial measurement units (IMUs) (BNO055, Adafruit) having a total weight of160 g. One IMU is placed on the dorsal side of the hand, and another is attached to the dorsal side of the forearm in the same orientation when the wrist is straight (Fig. <ref>). The relative orientation of the IMU sensors provides two-degree-of-freedom wrist angle information for the system. We made the fPAMs out of silicone-coated ripstop nylon fabric (30 Denier Double Wall Ripstop Nylon Silicone Coated Both Sides, Rockywoods) based on the fabrication steps described in <cit.> with the ends of most muscles sealed by tying a knot and using glue (Sil-Poxy, Smooth-On) to prevent its sliding. We added push-to-connect pneumatic fittings reinforced with glue on the sides of the muscles to attach the air tubes. One end of each fPAM is connected to a metal hook and attached to a ring sewed to the elbow brace (Fig. <ref>(a)), while the other end is sewed to the glove (Fig. <ref>(b)). This enables the muscles to be stretched tighter than their deflated resting length by attaching them to the elbow band after the glove and the band are put on. Each fPAM is routed through a restricting elastic band which is attached to the glove to ensure that the fPAMs can not slide off the surface of the wrist when they wrap around it. An alternative method to connect the end of the fPAM to the elbow band is shown in Fig. <ref>(c). In this case, a 3D-printed plastic piece is sewed to the band and serves as the structure to which the metal hook can be attached. Incorporating this rigid structure allows us to place the attachment point further away from the body but adds more non-fabric components. An alternative method for attaching the muscle to the glove is to seal the end of the fPAM by gluing the fabric together and then sewing it to the glove (Fig. <ref>(d)). This method increases the low profile nature of the design, but it was not used here, as it proved to be less durable, and the position of the attachment point was more difficult to define.Besides the wearable part, the exosuit consists of off-board components. The fPAMs are connected to closed-loop pressure regulators (QB3, Proportion-Air) through air tubes with a 6.35 mm outer diameter. The regulators can maintain the pressure level between 0 and 137 kPa depending on the input voltage. In the current configuration, a pressurized air source of 200 kPa connected to the regulators through a solenoid valve is used to supply air to the system. An Arduino Uno control board is used to set the desired pressure level based on the angle of the wrist obtained by the IMUs using the control scheme described in Section <ref>. The control signal from the microcontroller goes through a signal conditioning circuit consisting of a low-pass filter and a buffer to set the input voltage to the pressure regulators. The board is supplied by a 15 V DC power source, and it is equipped with a safety switch that controls the solenoid valve to cut off the air supplied to the system.§ MODELINGIn this section, we introduce a planar geometric model of our exosuit design that will be used to calculate the torque applied by an fPAM to the wrist. 
We propose two torque equations, one describing the torque when the fPAM is running in a straight line between the mounting points, and a second describing the torque when the fPAM partially wraps around the wrist (e.g., the flexor fPAM in the fully extended wrist position).
§.§ fPAM force modeling
The force that an fPAM can apply at a given level of contraction can be calculated from <cit.> and is described by Eqn. <ref>.
F = π P ( 1/sin(α_0)^2 - 3(ϵ-1)^2/tan(α_0)^2 ) r_0^2 + 2 π E t (ϵ_0 - ϵ) r_0, where ϵ = (L_0 - L)/L_0.
The produced force (F) of an fPAM with a given length (L) depends on the internal pressure (P), the contraction ratio (ϵ) defined as the change in length over the original, fully stretched length of the fPAM (L_0), the fully stretched fiber orientation of the fabric (α_0), the fully stretched radius of the fPAM (r_0), the fabric thickness (t), and the elastic modulus of the fabric (E). The equation is composed of two components added together. The first, pressure-dependent component is the ideal McKibben muscle model <cit.>, which represents the contraction force of a pressurized cylinder that reduces its length while its radius increases under the restriction of an unstretchable outer mesh. The second, pressure-independent component models the force that the stretched fabric applies as linear elasticity. The ϵ intercept point is denoted by ϵ_0, which corresponds to the contraction ratio at which the deflated fPAM starts to apply elastic force when it is stretched. We assign zero elastic force for contraction ratios larger than ϵ_0. The fPAM reaches its fully contracted length when the applied force becomes zero. Our tensile testing measurements (Section <ref>) show that the maximum contraction ratio (ϵ_max) varies with pressure, therefore it is important to incorporate this variable into the force equation. To achieve this, we calculate the initial fiber orientation for each pressure by Eqn. <ref> <cit.>, so that the force equation implicitly includes ϵ_max:
α_0 = -arcsin( √(ϵ_max^2 - 2ϵ_max + 2/3) / (ϵ_max - 1) ).
This formula is derived by substituting F=0 and ϵ=ϵ_max into Eqn. <ref>. At this value of ϵ, the elastic, pressure-independent component is zero, and the equation can be rearranged to solve for α_0.
§.§ Exosuit torque model
To approximate how much torque a single fPAM can apply to the wrist, we need to know the positions of its two endpoints relative to the center of rotation of the wrist. We used the two-dimensional geometric model (similar to the model used for the cable-driven exosuit in <cit.>) shown in Fig. <ref> to calculate the torque that an actuator producing a given force (F) can apply to the wrist at a given wrist angle θ. We use slightly different models for the torque computation at wrist angles where the actuator does not wrap around the wrist (Fig. <ref>(a)) and where it does wrap around the wrist (Fig. <ref>(b)). In both models, we make the following assumptions regarding the wrist kinematics and fPAM placement. The forearm and the hand are modeled as rigid links connected by a revolute joint. The two mounting points and the center lines of the two links are in the plane perpendicular to the axis of rotation. The axis intersects the plane at point O, which is defined as the center of rotation.
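Before continuing with the geometry, we note for concreteness how the force law of the previous subsection can be evaluated numerically. The snippet below is our own illustrative Python sketch, not code released with this work; the function and argument names are ours, and SI units are assumed (e.g., the fabric values quoted later, t = 0.08 mm and E = 9.06 MPa, would be passed as 8e-5 m and 9.06e6 Pa).

```python
import numpy as np

def fiber_angle(eps_max):
    """Fully stretched fiber angle alpha_0 implied by the maximum contraction
    ratio eps_max measured at a given pressure (Eqn. for alpha_0)."""
    return -np.arcsin(np.sqrt(eps_max**2 - 2.0 * eps_max + 2.0 / 3.0)
                      / (eps_max - 1.0))

def fpam_force(L, L0, r0, P, eps_max, eps0, E, t):
    """Axial fPAM force at current length L (SI units): the ideal McKibben,
    pressure-dependent term plus a linear-elastic fabric term that is active
    only when the muscle is stretched beyond the zero-elastic-force
    contraction ratio eps0."""
    eps = (L0 - L) / L0                       # contraction ratio
    a0 = fiber_angle(eps_max)
    ideal = np.pi * P * (1.0 / np.sin(a0)**2
                         - 3.0 * (eps - 1.0)**2 / np.tan(a0)**2) * r0**2
    elastic = 2.0 * np.pi * E * t * (eps0 - eps) * r0 if eps < eps0 else 0.0
    return ideal + elastic
```

The remaining ingredient is the geometry that maps this axial force to a joint torque, which is set by the placement parameters defined next.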
In this plane (perpendicular to the axis of rotation), d_1 represents the projected length of the vector from the mounting point on the forearm (P_1) to the center of rotation (O) onto the center line of the forearm, and w_1 represents the shortest distance between the center line of the forearm and the mounting site on the forearm. The d_2 and w_2 distances denote the analogous placement parameters for the mounting point on the hand (P_2). The joint angle θ describing the orientation of the hand relative to the forearm is drawn in the positive direction in Fig. <ref>(a), given that the reference coordinate frame is defined with the z axis aligned with the axis of rotation and the x axis parallel to the forearm, pointing in the direction of the hand. The figure illustrates flexion/extension movement; however, this model can be generalized to describe wrist rotation around the other rotational axis of the wrist.
§.§.§ Straight line model
In the case where the actuator does not wrap around the wrist (Fig. <ref>(a)), the modeled torque is given by Eqn. <ref>. This torque is calculated by the cross product between the vector from O to P_2 and the vector along the fPAM in the direction from P_2 to P_1 with magnitude F. L is the current length of the actuator, which is calculated as the Euclidean distance between the mounting points P_1 and P_2.
τ = F [ (d_1 d_2 - w_1 w_2) sin(θ) + (d_1 w_2 + d_2 w_1) cos(θ) ] / L, where L = √( (-d_1 - d_2 cos(θ) + w_2 sin(θ))^2 + (w_1 - d_2 sin(θ) - w_2 cos(θ))^2 ).
§.§.§ Partially wrapped model
In the case when the actuator partially wraps around the wrist (Fig. <ref>(b)), an extended geometric model with an approximated radius of curvature of the wrist can be used to calculate the torque. The wrist is assumed to have a circular surface with a constant radius (r_w), which defines the point where the fPAM first touches the wrist (Q_1) and the length of the segment where the fPAM is in contact with the wrist (from Q_1 to the last point of wrapping (Q_2)). The angle ψ is defined as the angle between the y axis and the vector from O to Q_1, and the angle ϕ corresponds to the arc segment over which the wrapping occurs (i.e., between OQ_1 and OQ_2). The ψ angle is calculated by finding the position of Q_1 using Thales's theorem to construct the tangent from P_1 to the wrist circle. Also, the R_1 and R_2 parameters are assigned to denote the distances between the mounting points and the center of rotation. This allows us to define a closed-form expression for ϕ as given in Eqn. <ref>:
ϕ = π/2 - θ - (ψ + arccos(r_w/R_2) + arcsin(w_2/R_2)), where ψ = -arctan2((ŷ × OQ_1) · ẑ, ŷ · OQ_1).
To calculate the torque, first, we need to consider the change in the length of the fPAM. We formulated a new equation (Eqn. <ref>), where the overall length of the muscle (L_w) during wrist wrapping is the sum of the distance between P_1 and Q_1, the arc length between the wrapping points Q_1 and Q_2, and the distance between Q_2 and P_2.
L_w = √(R_1^2 - r_w^2) + r_w ϕ + √(R_2^2 - r_w^2)
Once we know the length of the fPAM, the magnitude of the fPAM force (F) at a given pressure can be computed using Eqn. <ref>. Compared to calculating the Euclidean distance between the endpoints, as in the case when the actuator does not wrap around the wrist, the length is increased, and therefore the magnitude of the force will be larger.
The force will point in the direction from P_2 to Q_2. Similarly to the previous torque equation, the torque (τ_w) during wrist wrapping is computed as the cross product of the vector from O to P_2 and the force vector (Eqn. <ref>).
τ_w = F r_w / ||P_2 Q_2|| ( (d_2 cos(θ) - w_2 sin(θ)) cos(ϕ-ψ) - (d_2 sin(θ) + w_2 cos(θ)) sin(ϕ-ψ) ), where ||P_2 Q_2|| = √( (r_w sin(ϕ-ψ) - d_2 cos(θ) + w_2 sin(θ))^2 + (r_w cos(ϕ-ψ) - d_2 sin(θ) - w_2 cos(θ))^2 ).
The transition between the straight-line and wrapping models occurs when the fPAM touches the surface of the wrist while the joint extends. For the 2D geometric model, this occurs when the line along the two mounting points changes from being disjoint to becoming tangent to the circle representing the wrist. To decide which model to use for a given joint angle, we examine the number of intersection points between the line along P_1 and P_2 (assuming the general form y = ax + b) and the wrist circle (x^2 + y^2 = r_w^2) by solving the system of equations for the x and y coordinates of the shared points. This leads to a quadratic equation in either x or y, therefore the discriminant of the quadratic formula (D) (Eqn. <ref>) can be used to determine the number of solutions:
D = (2ab)^2 - 4(1+a^2)(b^2 - r_w^2).
The transition from the straight-line to the wrapping model happens when there is one intersection point (D=0).
§ DESIGN OPTIMIZATION
In this section, we examine how we can design the exosuit to have the strength to fully assist the movement of the human wrist. This corresponds to the challenge of finding the parameters of the exosuit such that it can apply a torque as high as the maximum biological torque in any wrist configuration. To solve this task, we conducted a parameter optimization to fit the modeled torque to biological reference data. First, we discuss how the reference biological torque data was selected, then we describe the decision process behind selecting the set of parameters to optimize, and finally, we present the parameter optimization results using our torque model.
§.§ Reference biological torque
Published peak torque data of the biological wrist can be found in the literature <cit.>. These biomechanical studies state that the measured torque depends on multiple factors including the wrist posture. The effect of wrist posture was investigated by measuring the peak wrist torques in various wrist configurations <cit.>. The maximum torque values for wrist flexion, extension, radial deviation, and ulnar deviation were reported, along with range of motion data in these directions. The results in <cit.> showed that the maximum torques in ulnar and radial deviation (7.14 Nm and 6.33 Nm, respectively) are higher than in flexion and extension (4.42 Nm and 3.41 Nm, respectively), but the range of motion is significantly smaller (48% of the flexion/extension range). Based on the force-contraction ratio relationship of the fPAM <cit.>, this actuator can provide high forces when the contraction ratio is close to zero, but the force approximately linearly decreases to zero as the muscle contracts. Therefore, we expect that it is more difficult for the exosuit to match the biological torques when the torque values follow a uniform distribution over a large joint angle range. We chose the flexion torque over the flexion/extension range to be the reference data, as this torque profile is the closest to the previously described criteria, and thus it is the most challenging for the exosuit to reach.
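Assessing a candidate design against such a reference requires evaluating the exosuit torque model of the previous section at every wrist angle. The routine below is a schematic illustration of how the two torque regimes and the regime test (D = 0) can be combined; it is our own Python sketch, not the authors' MATLAB implementation, and the choice of tangent-point side for Q_1 is an assumption based on the flexor-side geometry of the figure.

```python
import numpy as np

def exosuit_torque(theta, d1, w1, d2, w2, r_w, force_fn):
    """Torque of one fPAM about the wrist center O at joint angle theta (rad).

    P1 = (-d1, w1) is the forearm mounting point and P2 = R(theta) @ (d2, w2)
    the hand mounting point, both expressed in the plane of motion.
    force_fn(L) returns the fPAM force at actuator length L (for example the
    fpam_force sketch given earlier).  Vertical P1-P2 lines are not handled.
    """
    P1 = np.array([-d1, w1])
    P2 = np.array([d2 * np.cos(theta) - w2 * np.sin(theta),
                   d2 * np.sin(theta) + w2 * np.cos(theta)])

    # Regime test: discriminant of the line y = a x + b through P1, P2
    # intersected with the wrist circle x^2 + y^2 = r_w^2.
    a = (P2[1] - P1[1]) / (P2[0] - P1[0])
    b = P1[1] - a * P1[0]
    D = (2.0 * a * b) ** 2 - 4.0 * (1.0 + a ** 2) * (b ** 2 - r_w ** 2)

    if D <= 0.0:
        # Straight-line model: tau = F (P2 x (P1 - P2))_z / L.
        L = np.linalg.norm(P1 - P2)
        cross_z = P2[0] * (P1[1] - P2[1]) - P2[1] * (P1[0] - P2[0])
        return force_fn(L) * cross_z / L

    # Partially wrapped model.  The tangent point Q1 from P1 satisfies
    # OQ1 perpendicular to P1Q1, so OQ1 lies at arccos(r_w/R1) from OP1;
    # the chosen rotation side assumes the flexor geometry of the figure.
    R1, R2 = np.linalg.norm(P1), np.linalg.norm(P2)
    beta = np.arctan2(P1[1], P1[0]) - np.arccos(r_w / R1)   # direction of OQ1
    psi = np.pi / 2.0 - beta                                # angle from +y axis
    phi = np.pi / 2.0 - theta - (psi + np.arccos(r_w / R2) + np.arcsin(w2 / R2))
    L_w = np.sqrt(R1 ** 2 - r_w ** 2) + r_w * phi + np.sqrt(R2 ** 2 - r_w ** 2)
    F = force_fn(L_w)
    Q2 = r_w * np.array([np.sin(phi - psi), np.cos(phi - psi)])
    num = P2[0] * np.cos(phi - psi) - P2[1] * np.sin(phi - psi)
    return F * r_w * num / np.linalg.norm(Q2 - P2)
```

Sweeping theta over the joint range with such a routine yields the torque profiles discussed below, and the design optimization then amounts to choosing the placement parameters (d_1, w_1, d_2, w_2) so that the profile stays at or above the biological reference.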
Peak biological wrist force during flexion of the human wrist in the 90^∘ extension to 90^∘ flexion joint range (with an increment of 15^∘) was reported in <cit.>, but the data suggests that the range of motion is larger than the range reported in other sources <cit.>, and it shows that the torque increases as the wrist approaches the fully flexed state, which is not aligned with our observations. Instead of using data from the literature, we conducted a torque measurement on a single subject using the setup in Fig. <ref>, which is made of acrylic sheet and tube elements. The hand plate (component 1 in Fig. <ref>(a)) is connected to the torque sensor (Mini 45, ATI) (component 2), which has a circular plate on the top side and which is fixed to the base plate on the bottom side. The hand plate is directly attached to the circular plate such that the orientation of the hand plate can be changed using screws to fix the angle of the wrist in a desired position between 90^∘ extension and 90^∘ flexion with an increment of 22.5^∘. The forearm plate (component 3) reaches over the circular plate to provide support for the forearm, but it is only attached to the base plate. At the other end of the forearm plate, a tilted half cylinder (component 4) connects to it to secure the position of the upper arm. Additional support structures, i.e., brackets and hook-and-loop tapes (components 5, 6, and 7), prevent the horizontal displacement of the hand, forearm, and upper arm. Throughout the torque measurement, the wrist angle was monitored by recording the position of three motion capture markers (Impulse X2E, PhaseSpace) on the exosuit. The locations of the markers were defined based on the marker placement in <cit.> adding some changes due to the measurement setup limiting the available surfaces on the radial side of the arm. The marker at the hand is on the head of the second metacarpal, the marker at the wrist is on the styloid process of the radius, and the marker at the elbow is approximately halfway between the medial and lateral epicondyle of the humerus. Fig. <ref>(b) illustrates the markers placed on the hand (marker 1), the wrist (marker 2), and the forearm (marker 3). Because the vertical positions of the markers are not the same, the vectors between the markers were projected into the horizontal plane when the angle between them was calculated. The peak voluntary flexion torque was measured three times (while the fPAMs were not actuated) varying the angle of the hand plate from 67.5^∘ extension to 90^∘ flexion in steps of 22.5^∘. The average biological peak torque is shown in Fig. <ref>, along with the parameter optimization results described in the next subsection. It is important to note that the measured reference data is not generalized for multiple users, however, the results provide the torque profile of a healthy individual capable of performing activities of daily living. Moving forward, the findings of the paper discuss the results of this case study.§.§ Parameter optimizationThe goal of the parameter optimization is to find the parameters that allow the exosuit to achieve at least as high torque as the biological reference torque at all wrist angles. The collected average biological peak torque data described in the previous section was used for the optimization, with the number of data samples increased from 8 to 140 using “spline" interpolation in MATLAB (continuous grey line in Fig. <ref>). 
This made the optimization more independent of the specific angles chosen for the measurement points. We formulated an objective function as the sum of the positive differences between the biological reference torque and the modeled exosuit torque across the full range of joint angles. By minimizing this objective function, a mismatch between the model and reference data is penalized when the modeled exosuit torque is lower than the reference torque. The exosuit parameters can be divided into two groups. The first group includes the fPAM parameters, which describe the actuator itself as in the modeling equations: the fully stretched radius (r_0) and length (L_0), and the internal pressure (P). The second group includes the placement parameters, which describe where the fPAMs are attached to the glove and elbow band (d_1, w_1, d_2, and w_2 in Fig. <ref>). In the first group, the fully stretched radius and the initial length depend on the fabrication process of the fPAM, and the applied pressure can be changed through the operation of the exosuit. Because the torque is proportional to the magnitude of the fPAM force, the force should be maximized for the purpose of optimization. The pressure and the square of the radius are both proportional to the magnitude of the ideal, pressure-dependent force, and the radius is proportional to the elastic force (Eqn. <ref>), so these parameters should be fixed at their highest value. The highest achievable pressure based on our current physical system is 137 kPa, which corresponds to the maximum output pressure of the pressure regulators. For this optimization, we consider the maximal contraction ratio (ϵ_max) to be 0.28, as that corresponds to 137 kPa in <cit.>. We chose to set the fully stretched radius to 1.23 cm, which corresponds to the widest fPAM (with approximately 5 cm diameter when unstretched and uninflated) that we fabricated and subjectively considered to be low profile in the exosuit. For different design considerations and system properties (e.g., different pressure regulators), the choice of these parameter values is restricted only by the inverse proportional relationship between the maximum pressure and the fully stretched radius <cit.>. Similarly, the force magnitude is higher if the muscle is less contracted, therefore the initial, fully stretched length of the fPAM should be the shortest length that still allows the user to reach the full range of motion in the direction that stretches the muscle. We computed the initial length using Eqn. <ref> with the maximal extension angle to match these criteria. For this optimization, we defined the maximal extension angle as -65.4^∘, which corresponds to the largest extension angle where we measured reference peak biological torque. Because we used one specific type of silicone-coated ripstop nylon fabric, the fabric thickness (t) and the elastic modulus of the fabric (E) are constant parameters. The reported thickness of the fabric by the manufacturer is 0.08 mm, and the value that we used for elastic modulus is 9.06 MPa based on our discussion with the authors of <cit.>. The placement parameters determine the moment arm and the required actuator length change, as well as the direction of the actuator force, which changes throughout the range of motion. Therefore, it is not straightforward how they influence the exosuit torque. 
For example, minimizing d_2 seems to be a good choice to maximize torque, because moving the mounting point closer to the wrist center of rotation leads to a smaller required actuator length change, allowing the actuator to operate closer to its maximum force, but the moment arm will simultaneously be reduced, which decreases the torque. Therefore, we conducted a design optimization on the four placement parameters. We defined upper and lower bounds on each parameter based on the dimensions of the arm of the current user and our capability to make a firm attachment point on the glove and the elbow brace. For d_1 and d_2, the lower bounds ensure that we can still place a short fPAM across the wrist by keeping a small distance from the center of rotation both on the forearm and the hand. The upper bounds are based on the fact that the lengths of the hand and forearm limit how far away the fPAM can be placed from the wrist. For w_1 and w_2, the lower bounds are based on the smallest width measured on the hand and forearm, and the upper bounds are based on increasing these distances by adding an elevating structural component to the exosuit (Fig. <ref>(c)). Our choices of parameter bounds are shown in Table <ref>.The last parameter that appears in the torque equations is the radius of the wrist. This parameter is independent of the exosuit configuration. The radius was computed as half of the measured width of the human wrist in the sagittal plane, which is 2 cm for the user in this study. To find the optimal values of these parameters, first, we conducted an exhaustive search on the discretized parameter space with 100 points for d_1 and d_2 and 50 points for w_1 and w_2. Then, we used another method for the optimization to reduce the running time, which was based on using the modified version of the built-in function fminsearch in the MATLAB Optimization Toolbox that allows the definition of upper and lower bounds of the parameters. The running time of a single cycle of the algorithm is on average 0.9 seconds. Because it is not guaranteed to converge to the global minimum, we ran the optimization for a set of initial guesses and chose the resulting values that minimized the objective function. The set of initial guesses contained points of the discretized parameter space with the resolution presented in Table <ref>. The two algorithms gave the same set of parameters, but the running time was significantly shorter, 24 minutes on average for the second method compared to 4 hours and 44 minutes for the first method, therefore it proved to be more efficient for the parameter optimization. The torque with the optimized parameter values is plotted in Fig. <ref> across the flexion/extension wrist angles.The optimization results show that the optimized exosuit torque (black curve inFig. <ref>) is smaller than the biological peak torque (grey curve) for all wrist extension angles, with a value close to the reference torque at full extension but linearly decreasing until reaching -1^∘. However, the torque increases at the early stage of wrist flexion and becomes higher than the reference torque for a set of flexion angles starting from 9^∘. The exosuit torque becomes lower than the biological peak torque at 46^∘ and continues decreasing until it reaches zero at 72^∘. When the wrist moves from full extension to -1^∘ of extension (close to the neutral position), the model describing the fPAM wrapping around the wrist defines the torque (Eqn. <ref>). 
In this range, the moment arm is constant, but the force and thus the torque decreases as the muscle length shortens compared to the fully stretched length. When the wrist is rotated from -1^∘ of extension towards full flexion (with torque described by Eqn. <ref>), the muscle runs in a straight line between the mounting points, and its length monotonically decreases. Although initially the torque increases due to the sudden increase in the moment arm, the torque eventually approaches zero as the moment arm gradually stops rising and the fPAM continues to produce smaller forces. This result indicates that, given the set of constraints we have placed on our design, an fPAM-based exosuit for this specific user cannot reach the same peak torque over the whole range of movement as the human wrist. Note, however, that the typical functional range of human wrist motion (35^∘ of flexion/extension <cit.>) is smaller than the full joint range, and on this smaller range, the exosuit should reach close to the peak biological torque.Besides the parameter optimization for our current setup, we ran a second optimization to predict the exosuit torques for a system that can provide twice as high pressure (274 kPa) as the current system (137 kPa). This pressure is below the 448 kPa burst pressure of the fPAM with the implemented fully stretched radius of 1.23 cm, which was calculated as described in <cit.>. The optimal placement parameters remained the same compared to the previous optimization, except d_2 was decreased to 3.53 cm, and the predicted torque (dashed black curve in Fig. <ref>) is higher than or very close to the biological torque over the full flexion/extension range. §.§ Parameter perturbationIn addition to analyzing the modeled torque with the optimized parameter values, we added perturbations to the optimized parameter set for our current setup to examine how the change in individual parameters influences the modeled torque of the wrist exosuit. We considered parameters outside of the parameter bounds used for the optimization so as to understand the effect of considering new design approaches to change the parameter bounds. This exploration also helps us understand how the physical prototype might behave if its parameters are not exactly as expected or vary during operation.Fig. <ref>(a) presents the modeled torques when actuator design parameters (r_0 and P) are perturbed by ±0.5 cm and ±69 kPa, and the initial length (L_0) is perturbed by ±1 cm. Here the perturbation means that one parameter value is increased or decreased while the other parameters are unchanged. The fPAM force is proportional to the internal pressure and the square of the radius, so increasing these quantities provides the desired torque for higher joint angles, but the range of motion is not affected by the perturbation of these parameters. However, when the muscle's initial length gets smaller, the modeled torque reaches zero at a larger flexion angle. This means that the upper limit of the wrist flexion angles, where the biological peak torque is reached, can be increased, but simultaneously, the range of possible extension angles is decreased. Therefore, the range of motion in the flexion/extension plane remains limited, but it can be shifted in favor of the flexion or the extension direction. When L_0 is increased, the muscle will be slack at full extension, thus the overall torque magnitude and the maximum flexion angle will be reduced. 
This can occur on the exosuit when the fPAM is not stretched enough when it is put on.Fig. <ref>(b) shows the modeled torques when d_1, d_2, w_1, w_2, and r_w are changed by ±1 cm and the initial length of the muscle is recalculated accordingly. These perturbations demonstrate how the exosuit torque changes when the positions of the mounting points differ from the optimal placement. Some perturbations correspond to scenarios when the parameters exceed the defined bounds. This indicates how the optimized torque profile would change for users with different arm dimensions. When d_1 is increased, the lever arm does not change significantly, but the fPAM becomes longer, which results in a larger capability for absolute length change of the muscle and thus increased torque overall joint angles. For d_2, it depends on the joint angle whether the increase or the decrease of this parameter value results in a higher torque. Moving towards the fully flexed wrist position, increasing d_2 leads to first increased and then decreased torque, and decreasing d_2 leads to first decreased and then increased torque. This behavior is based on the combined effect of the change of the lever arm and the change of the fPAM force given the formulated muscle length. The perturbation of w_1 and w_2 follows a similar trend to the perturbation of d_2, but changing w_1 makes only a small torque magnitude change, while w_2 has a more significant influence on the torque, especially affecting the wrist angle where the torque reaches zero. Additionally, the perturbation of w_2 closely corresponds to the distance between the fPAM and the wrist, so it decreases delays, and its increase brings forward, the unwrapping towards wrist flexion. The radius of the wrist (r_w) is the only placement parameter that changes the moment arm during wrapping. The perturbation of r_w changes the slope of the torque when the wrapping model applies, the wrist position where the transition to the straight-line model occurs, and the range of motion and the torque magnitude over the wrist flexion angles.§ MODEL VALIDATION AND EXPERIMENTAL RESULTSAs the modeled exosuit torque is based on a two-dimensional geometric model with fixed parameters, it is not guaranteed that the behavior of the soft physical system can be accurately predicted by the described equations. In this section, we describe the procedure and results of the measurement of the torque of the physical exosuit, as well as the process of identifying model parameters. We propose adjustments to the model that increase its accuracy. We also present the results of a measurement to compare the biological and exosuit-actuated range of motion of the human wrist. §.§ Measurement of the fPAM forceBefore attaching the flexor fPAM on the exosuit, we conducted a tensile testing measurement (Fig. <ref>) to identify the parameters of the modeled force of the actuator. During the measurement process, the fPAM was stretched from its initial, fully contracted state (measured on the inflated fPAM at a given pressure when the force readings approached -10 N) to zero contraction (measured on the deflated fPAM when the applied load was 230 N as in <cit.>) and then it was returned to its initial state. The linear force of the fPAM was measured by a load cell (SM-1000-961, ADMET) for three cycles of length change. The measurement was repeated for five pressure levels (0 kPa, 34 kPa, 68 kPa, 103 kPa, and 137 kPa). The test setup is shown in Fig. <ref>(a), and the measurement data is shown in Fig. <ref>(b). 
For each nonzero pressure level, the smoothed and averaged force-contraction ratio plots were approximated by an 8th-order polynomial so that we could sample points at arbitrary contraction ratios. The same polynomial approximation does not work well for the zero-pressure elastic force, which converges to zero when the contraction ratio increases, therefore a piecewise function consisting of an exponential and a constant zero function was used for approximating the fPAM force at zero pressure.The initial length L_0 and radius r_0 of the fully stretched but uninflated fPAM were measured, and the maximum contraction ratio was derived from the zero crossing of each force curve for each different pressure level. Still, an adjustment of ϵ_max and r_0 was required to match the modeled force to the measured data. For the fPAM forces which correspond to nonzero pressure, this was achieved by running an exhaustive search over the discretized parameter space to minimize the difference between the measured and modeled ideal force in the neighborhood of the measured values of these parameters. For the zero pressure elastic force, we used the exhaustive search to get an optimal value for ϵ_0 assuming that r_0 equals the average of its previously calculated values. Table <ref> contains the derived fPAM parameters with ϵ_0 denoted as the ϵ_max value at zero pressure. The angle of the fiber orientation α_0 was calculated based on the derived maximum contraction ratio as described in the model of the fPAM (Eqn. <ref>). The modeled force based on Eqn. <ref> is plotted in Fig. <ref>(b) using the calculated parameter values.§.§ Torque applied by the exosuit We used a torque sensor (Mini 45, ATI) to measure the torques applied by the flexor muscle of the exosuit to the relaxed wrist over the range of wrist angles from -67.5^∘ to 90^∘ with an increment of 22.5^∘ on the same experimental setup which was used for the measurement of biological peak torques previously presented (Fig. <ref>). Similarly to that experiment, the measurement was repeated for five pressure levels (0 kPa, 34 kPa, 68 kPa, 103 kPa, and 137 kPa) three times at each wrist angle. The arm of the user was attached to the setup as illustrated in Fig. <ref>. The attachment point of the fPAM on the hand was placed close to the fingers to leave space for the brackets in the middle of the palm to brace the hand. The other mounting point was placed on the forearm as close to the elbow as possible.As the elbow band and glove that the fPAM connects to are both made of fabric (and the surface of the human arm is soft as well), the positions of the mounting points at the ends of the fPAM change when the muscle is inflated or stretched. For this reason, the positions of the attachment points need to be monitored to accurately model the torque. To track the positions of the endpoints relative to the human arm, motion capture markers were placed on the radial side of the exosuit to measure the current wrist angle (as before), and also on the palmar side of the exosuit right over the knot at the end of the fPAM as shown in Fig. <ref>. The placement parameters corresponding to the two-dimensional geometric wrist model were derived from the horizontal components of the marker coordinates. The parameters from Table <ref> were used to model the fPAM force, but the initial length of the fPAM was reduced to 32.0 cm to make the actuator more stretched at full wrist flexion given the actual placement. 
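The parameter adjustments in this section, such as the (ϵ_max, r_0) fit above and the wrist-radius fit described next, are plain exhaustive (grid) searches. A generic sketch of the pattern is given below; this is our own illustration, and the error function passed in (e.g., an RMS torque-error routine) is hypothetical.

```python
import itertools
import numpy as np

def grid_search(error_fn, grids):
    """Exhaustive search: return the parameter combination (as a dict) that
    minimizes error_fn over the Cartesian product of the per-parameter grids."""
    names = list(grids)
    best, best_err = None, np.inf
    for combo in itertools.product(*(grids[n] for n in names)):
        params = dict(zip(names, combo))
        err = error_fn(params)
        if err < best_err:
            best, best_err = params, err
    return best, best_err

# Example: wrist radius over 1-7 cm at 0.01 cm resolution, scoring the summed
# RMS error between measured and modeled torques (rms_error is hypothetical):
# best, _ = grid_search(rms_error, {"r_w": np.arange(0.01, 0.0701, 0.0001)})
```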
The radius of the wrist (r_w) was determined by running an exhaustive search over a range of 1 cm to 7 cm with a resolution of 0.01 cm to minimize the sum of the RMS error between the measured and modeled torques for the two models over all data points. Fig. <ref> shows the measured and modeled torque values based on actuating the flexor fPAM over the defined flexion/extension range. The dots correspond to the average measured exosuit torques at each wrist angle, where the colors indicate the applied pressure going from 0 kPa (blue) to 137 kPa (red). We used two approaches to derive the placement parameters for the exosuit torque model. The first method (dashed lines) used the actuator endpoint position data from the motion capture system at each measurement point. The second method (solid lines) modified the model to predict the exosuit torque without directly using real-time data from the motion capture system. For the second method, the placement parameters were fixed to be equal to the measured parameters with the actuator deflated and at zero wrist angle (Table <ref>). To model the displacement of the actuator endpoints relative to their initial position on the body, a model in the form of Eqn. <ref> was added, which includes the effect of the fabric stretching of the elbow band and the glove, and the translation of the soft tissue of the arm. F=K_iΔ x_i^2, i∈{1,2}. The indices differentiate the sites of the actuator endpoints, where 1 corresponds to the forearm and 2 corresponds to the hand. The force (F) equals the magnitude of the force applied by the fPAM at the endpoints. Δ x_i denotes the displacement of each endpoint in the direction of the force. Using the endpoint position data, we calculated the force and displacement at each measurement point. The stretching coefficients K_i were computed at each measurement point using Eqn. <ref>, and these values were averaged to derive a single value for each coefficient (Table <ref>). As shown in Fig. <ref>, the second model approximates the torque better when the joint angle approaches the fully flexed position. The mean absolute error (MAE) of the second model over all measurement points is 0.279 Nm, which is 26.1% of the average magnitude of the measured torques, while the MAE of the first model is 0.374 Nm, which is 34.9% of the average torque magnitude.§.§ Range of motion We also conducted a measurement to compare the active biological and the exosuit-actuated range of motion of the human wrist. Motion capture markers were placed on the exosuit similarly to the torque measurement (Fig. <ref>), but this time markers were placed on both the radial and the palmar side of the arm to collect angle data along two different planes. The endpoints of the actuated muscle were also tracked. The extensor muscle was attached to the hand just below the head of the third metacarpophalangeal joint. The radial deviation muscle was placed close to the base of the first metacarpal (the base of the thumb) and the ulnar deviation muscle was mounted symmetrically to the opposite side of the hand. The other mounting points for all fPAMs were placed on the forearm as close to the elbow as possible. The measurement process consisted of two parts. First, the exosuit-actuated wrist range was measured when the arm of the user was relaxed and the exosuit was actuated by applying 137 kPa pressure to the corresponding fPAM to passively move the human wrist to maximum flexion/extension (Fig. <ref>(a)) and ulnar/radial deviation (Fig.
<ref>(b)) while the forearm was placed on the table. Then, the same measurement was repeated when the user of the exosuit voluntarily moved the wrist and the fPAMs on the exosuit were not actuated (Fig. <ref>(c) and (d)). Table <ref> contains the average and the standard deviation of three range of motion measurements for both the exosuit-actuated and the voluntarily moved wrist. To derive the modeled range of motion, the same measurement was repeated once while tracking the endpoints of the actuated muscles at the joint limit. The torque was estimated from 0^∘ to 90^∘ along the four movement directions using our first torque model with placement parameters derived by the same method as for the torque measurement. We assumed that the fPAM parameters r_0 and ϵ_max are the same as for the measured flexor fPAM at 137 kPa for all the muscles, and we measured the fully stretched length for all fPAMs separately. The modeled joint limits were defined as the wrist angle where the modeled torque reached zero. The model error was computed by subtracting the measured exosuit angle limits from the modeled angle limits. The modeled limits were higher in all cases, and so the computed error values are positive (Table <ref>).§ CONTROL AND DEMONSTRATION To demonstrate how the prototype of the wrist exosuit functions to move the human wrist, we implemented a control algorithm to automatically position the wrist at a desired joint configuration (Fig. <ref>). We applied feedback control, as it is easy to implement and only requires input information about the wrist angle, which is measured by the two IMUs on the exosuit. Although this also limits the precision of the control, we successfully used the feedback algorithm for the two trajectory tracking tasks described in this section. §.§ Wrist angle measurement Our control system (Fig. <ref>) is based on the error between the desired and actual wrist angle. The wrist configuration consists of two physical angles that independently represent the flexion/extension angle and the ulnar/radial deviation angle, and these angles can be regulated by two independent controllers. First, we calculated these physical angles from the measurements of two IMUs. One IMU is placed on the dorsal side of the forearm and the second one is placed on the dorsal side of the hand. The wrist orientation is given by the relative orientation of the two IMUs. The quaternion representing this relative orientation (q) is computed based on the formula <cit.> expressing successive rotations in quaternions. This formula is presented in Eqn. <ref> with notation specific to our application.[ q_w; q_x; q_y; q_z ] = [ q_w^{f} q_x^{f} q_y^{f} q_z^{f}; -q_x^{f} q_w^{f} q_z^{f} -q_y^{f}; -q_y^{f} -q_z^{f} q_w^{f} q_x^{f}; -q_z^{f} q_y^{f} -q_x^{f} q_w^{f} ] [ q_w^{h}; q_x^{h}; q_y^{h}; q_z^{h} ] In the formula, q_w and [q_x q_y q_z]^T are the scalar and the vector components of q. The same notation is used for the components of the q^{f} and q^{h} quaternions, which represent the orientation of the IMUs on the forearm and the hand, respectively. After this conversion, the wrist orientation is expressed in the right-handed coordinate frame of the forearm IMU, which is positioned such that the z axis is aligned with the forearm and points towards the hand, and the x axis lies in the coronal plane and points in the ulnar direction. Using Rodrigues' formula, the relative orientation is converted into spherical coordinates using Eqn.
<ref> with θ defined as the polar angle measured from the polar axis y and ϕ as the azimuthal angle measured from the z axis. Then, from the angles of spherical coordinates, the physical angles are derived using Eqn. <ref>. θ=arccos(2q_y^2+2q_w^2-1) ϕ=arctan2(q_xq_y-q_zq_w,q_yq_z+q_xq_w) θ_fe=arctan2(sinθcosϕ,cosθ) θ_ur=arctan2(sinθsinϕ,cosθ). These physical angles correspond to the wrist angle in the flexion/extension (θ_fe) and ulnar/radial deviation (θ_ur) directions. §.§ Control system Based on the measured actual wrist angle and the desired wrist angle, the wrist angle error is used to change the pressure of the antagonistic muscles that control each physical angle. To produce antagonistic coordination, the error, scaled by a gain, is added to the current pressure of the agonist muscle that moves the wrist closer to the desired angle and subtracted from the current pressure of the antagonist muscle resisting the movement. We set the value of the gain to 0.0083 kPa/deg (manually tuned based on observing the performance of the controller); it converts the error in degrees to a small change of pressure in kPa at each time step of the control loop, which in our implementation had a timestep of 0.014 s. The pressure is then capped to restrict its value to the assigned operating range from 0 kPa to 68 kPa. The upper limit was set to approximately the smallest pressure value that was not exceeded in any of the fPAMs while completing the defined tasks. Co-contraction of the two muscles causes increased stiffness. The initial stiffness depends on the initial pressures, which were set to 13.8 kPa for all muscles, but the stiffness was not directly regulated during operation. The stiffness, however, can increase during operation when, for example, the wrist movement is restricted and the pressure builds up in one muscle. To be able to decrease this built-up stiffness, we included an additional condition that the pressure of the resisting fPAM should decrease twice as fast if its current pressure exceeds a given threshold. This threshold was set to 13.8 kPa based on observing the operating pressures with simple antagonistic feedback. The desired pressure levels were converted to a voltage signal and sent to the closed-loop pressure regulators connected to the corresponding fPAMs. §.§ Planar angle tracking As a first demonstration of exosuit control, we studied how our antagonistic feedback controller can position the relaxed human wrist along the flexion/extension direction in the horizontal plane by coordinating the operation of two artificial muscles. First, the wrist angle was set to zero when the wrist was in a neutral resting position. When the trajectory tracking started, the wrist was at an angle of 0^∘ and the pressure in the muscles was set to 13.8 kPa. The desired wrist position was gradually increased in steps of 10^∘ to move the wrist towards the extension direction. The desired wrist angle was held constant for 10 seconds before moving to the next position. From 30^∘ of wrist extension, the desired wrist angle was gradually decreased in the same manner to reach 40^∘ of wrist flexion. Fig. <ref> illustrates the exosuit completing the planar positioning task by showing the exosuit at different goal positions (Fig. <ref>(a)) and showing the desired trajectory and the measured trajectory during three trajectory tracking trials (Fig. <ref>(b)).
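For clarity, the wrist-angle computation described above can be summarized in a short sketch. It is a minimal illustration of the quaternion and spherical-coordinate relations given in the equations, not the exosuit's actual firmware; a (w, x, y, z) quaternion ordering is assumed, and the numerical clipping in arccos is only a safeguard against rounding.

import numpy as np

def wrist_angles(q_forearm, q_hand):
    """Return (theta_fe, theta_ur) in radians from two IMU quaternions (w, x, y, z)."""
    fw, fx, fy, fz = q_forearm
    hw, hx, hy, hz = q_hand
    # relative orientation q = conj(q_forearm) * q_hand, written out as in the matrix formula
    qw =  fw*hw + fx*hx + fy*hy + fz*hz
    qx = -fx*hw + fw*hx + fz*hy - fy*hz
    qy = -fy*hw - fz*hx + fw*hy + fx*hz
    qz = -fz*hw + fy*hx - fx*hy + fw*hz
    # spherical coordinates of the relative orientation
    theta = np.arccos(np.clip(2*qy**2 + 2*qw**2 - 1, -1.0, 1.0))
    phi = np.arctan2(qx*qy - qz*qw, qy*qz + qx*qw)
    # physical flexion/extension and ulnar/radial deviation angles
    theta_fe = np.arctan2(np.sin(theta)*np.cos(phi), np.cos(theta))
    theta_ur = np.arctan2(np.sin(theta)*np.sin(phi), np.cos(theta))
    return theta_fe, theta_ur

In the exosuit, these two angles then feed the two independent antagonistic feedback loops described in the control system above.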
The results confirm that the exosuit is able to reach the desired wrist angles; however, we can observe inaccuracies in the tracking when the desired position changes. The step responses of the three actual trajectories when increasing and decreasing the joint angle have an average of 0.84 s and 0.54 s rise time, and 19% and 44% overshoot, respectively. §.§ Spatial trajectory tracking As a second demonstration, we studied how our controller can control the wrist motion along its two degrees of freedom. We defined a sinusoidal desired trajectory in joint space for both flexion/extension (θ_fe) and ulnar/radial deviation (θ_ur) angles to imitate the tracing of a near-elliptic trajectory by the hand. The amplitudes of the sinusoidal trajectories were defined to be within the range of motion such that the flexion/extension angles are between -40^∘ and 30^∘ and the ulnar/radial deviation angles are between -10^∘ and 30^∘. Both trajectories have a period of 24 seconds, but there is a 90^∘ phase shift between them. Similarly to the other trajectory tracking task, the joint angles were set to zero at a neutral position of the wrist before starting the tracking. The results of the spatial trajectory tracking are shown in Fig. <ref>. Fig. <ref>(a) illustrates the realized motion of the wrist by showing the wrist position on four equally spaced points of the trajectory. Fig. <ref>(b) shows the desired and four actual trajectories for the duration of three 24-second periods. It takes approximately 3.6 seconds to reach the desired trajectory after the start from the initial, zero position. This results in small disruptions compared to the targeted smooth motion. The root mean square angle error of the tracking (not including the short initial settling phase) is 5.18^∘ for flexion/extension and 7.12^∘ for ulnar/radial deviation. § DISCUSSION In this section, we discuss the results of the presented work, starting with the modeling of the exosuit-actuated torque, and then focusing on the experimental results and demonstration. We then present proposed movement assistance applications based on the observed properties of the exosuit. §.§ Torque modeling and parameter optimization We used a two-dimensional geometric model to describe the torque applied by the exosuit to the wrist at a given joint angle. The model assumes that the fPAM force acts in the plane perpendicular to a fixed rotational axis. For the exosuit prototype, however, the fPAM is not completely aligned to be in this plane; therefore, the placement parameters describe the projection of the attachment point positions into this plane, which makes the parameter identification challenging. Additionally, the agonist and antagonist muscles, as well as the two pairs of antagonistic muscles, may interfere with each other when they are assigned to control flexion/extension and radial/ulnar deviation separately. Another assumption we made is to model the wrist as a circle, which simplifies the geometric model; however, it assigns the same value (the value of r_w) to two different physical quantities: the distance from the center of rotation to the point where wrapping occurs and the radius of curvature of the wrist surface. The model accuracy can be improved by modifying the model to assign separate parameters to these physical quantities. The results of the parameter optimization showed that, at the pressure allowed by the current setup, the optimized placement parameters (Table <ref>) were equal to the upper or lower bounds.
When the fPAM force was increased by increasing the value of the pressure (Fig. <ref>), only d_2 changed, which indicates that, for the given user, we reached the optimal torque profile when the mounting points were placed as close as possible to the skin (minimal w_1 and w_2) and as far as possible from the wrist on the forearm (maximal d_1). Although these optimization results can be different for other users and different fPAM parameters, the current results highlight the importance of finding the right distance between the wrist and the mounting point on the hand (d_2) given the constraints on the pressure. The optimization with doubled maximum pressure and the results of the perturbation of the exosuit parameters show that we can reach higher torque with increased fPAM parameters (Fig. <ref>(a)) and a larger range of motion when the placement parameters are adjusted (Fig. <ref>(b)). Therefore, it is advantageous to increase the internal pressure or the diameter of the fPAM. The increase of the diameter is only limited by the fact that the burst pressure decreases with the inverse of the diameter, as well as the practical consideration of the increasing bulkiness as a larger fPAM is inflated. The input pressure is limited by the chosen pressure regulators or pressure source. §.§ Results of the torque measurement To evaluate the exosuit, we conducted a case study, where we compared the measured and modeled torque for the flexor fPAM with a single set of parameters for a single user. While this study does not give generalized data, it highlights some design challenges and key characteristics of the measured quantities. The evaluation process first highlighted the importance of correctly identifying the fPAM parameters. The modeled force did not match the measured tensile testing data when we used the measured fully stretched radius (r_0) value; therefore, this parameter had to be adjusted. Also, the force did not scale proportionally with the pressure due to the different maximum contraction ratio (ϵ_max) values. For example, the magnitude of the force at 137 kPa pressure was similar to the force at 103 kPa because ϵ_max was smaller for 137 kPa. The tensile testing measurement proved to be a good method to calculate the parameters and, thus, to approximate the fPAM force with the model. However, it is important to conduct a more comprehensive analysis to understand the relationship between pressure and maximum contraction ratio. This relationship should be incorporated into the torque model to enhance its predictive capabilities, providing a more accurate assessment of how torque scales with pressure. Compared to the measured torque, the modeled torque showed a mismatch especially for high wrist flexion angles when we directly used the actuator endpoint position data from the motion capture system (Fig. <ref>). One potential source of the model error is the inaccuracy of the endpoint tracking, as it is challenging to attach the markers directly to the end of the fPAM body close to the end-sealing knot. Another potential source of error is the inaccuracy regarding our method of mapping the 2D model to the human arm, as this mapping (e.g., the identification of the wrist angle) is not straightforward. The second method of calculating the modeled torque allowed us to use the motion capture data only when the wrist is at the neutral angle by modeling the endpoint stretching.
This significantly reduced the modeling error, leading to a better torque estimation across the flexion/extension angles, especially for high wrist flexion angles (Fig. <ref>). To achieve this accuracy, however, the wrist radius (r_w) and the stretching coefficients (K_1 and K_2) need to be identified (e.g., through a calibration process). The results of the torque measurement (Fig. <ref>) additionally show that the torque of the exosuit prototype is smaller than the optimized torque (Fig. <ref>), which highlights the challenges of fabricating the fPAM with the desired parameters (L_0, r_0, ϵ_max) and interfacing the exosuit with the human upper limb. Firstly, the mounting points did not closely follow the optimal placement due to the restrictions from the measurement setup (e.g., the mounting point on the hand was moved further away from the wrist so as not to be covered by the brackets of the hand plate). Secondly, the position of the mounting points changed due to stretching, which overall reduces the torque magnitude. §.§ Results of the range of motion measurement When measuring the biological range of motion, the user did not notice mechanical resistance from the exosuit. In comparison with the biological range of motion, we expected to get a similar exosuit-actuated range in each movement direction, except for wrist flexion, as the measured torque profile showed that the flexor fPAM cannot apply torque to move the wrist over approximately 50^∘ of wrist flexion. The results (Table <ref>) confirmed the reduced range in wrist flexion, which was 44.5^∘ compared to the biological 80.9^∘. Similarly, in the case of wrist extension, the exosuit-actuated range was smaller than the biological (38.7^∘ compared to 56.0^∘), although proportionally it is closer to the biological range than the exosuit-actuated range for flexion. The range was smaller than the biological for ulnar/radial deviation as well (42.7^∘ compared to 62.3^∘), although not so significantly. This result was unexpected because the range reduction due to the increased w_2 distance (compared to the same distance in flexion/extension) was expected to be compensated by moving the mounting points closer to the wrist (decreased d_2). We also found a mismatch between the measured and the modeled joint limits in all four directions of motion, with the smallest error for the flexor fPAM (Table <ref>). This highlights the need to identify the parameters of the fPAM in use (the parameters used to model all the fPAMs were those of the flexor fPAM) and to conduct further work on refining the model and measurement process to be able to predict the range of motion. For the current exosuit design, the refinement of the placement on the body and the reinforcement of the material of the glove and elbow band at the mounting points can increase the range of motion in all directions. In the case of wrist flexion, to reach the full biological range of motion, we need to find ways to increase the maximum contraction ratio of the fPAM. §.§ Results of the control and demonstrations For completing the trajectory tracking tasks, we used an antagonistic feedback control algorithm. The primary aim of the control system was to effectively demonstrate the exosuit's capacity for achieving desired wrist configurations. Additionally, we analyzed the accuracy of the trajectory tracking to quantify the performance of the implemented control. In the case of the planar positioning task (Fig.
<ref>(b)), the actual joint angle reached the increased or decreased desired angles with a significant delay and overshoot. When the joint angle was increased, the desired position was reached with 0.84 s rise time and 19% overshoot. When the angle was decreased, the goal position was reached more quickly with 0.54 s rise time, but the overshoot increased to 44%. With pressure feedback, a unit change of pressure causes a different change in torque depending on the actual wrist angle, so the rate of change of the joint angle is not well regulated. This indicates that the applied feedback control with a constant gain could likely be improved to better adapt to such quick changes. The control algorithm is better suited for the spatial positioning task because the sinusoidal desired wrist angles change more gradually. In general, we observed that the exosuit with the implemented control was able to reach the desired wrist orientations, and it could resist small disturbances (e.g., when the biological hand resists the desired movement), but the accuracy of the trajectory tracking could likely be improved with different control methods. §.§ Proposed movement assistance applications Wearable exosuits have multiple application areas. Based on the evaluation of the proposed exosuit design, our primary proposed application is for conducting rehabilitation exercises at home (e.g., stretching exercises for reducing spasticity). The wearable part of the exosuit is easy to fabricate and personalize for each user with the proposed pipeline shown in Fig. <ref>. The cost of the wearable part of the exosuit is low (approximately $134). The off-board base components have a significantly higher cost (approximately $3572 including a compressed air source); however, the same base can be used sequentially by multiple users. Also, the cost of all the components can be reduced, especially the pressure regulation components, which are the most expensive part of the base (approximately $3229). The base is compact enough to bring the device home, along with a portable air compressor or pump, and it can remain stationary when performing the exercises. Due to the limited fPAM stretching and contraction, the exosuit can be designed to have hard-stop limits, which makes it safe to operate without the supervision of a professional. Also, with the introduced simple feedback control, the exosuit is able to perform exercises where the wrist is slowly moved along a pre-defined trajectory. Here, we only demonstrated the use of the exosuit for passively moving the wrist, but it could likely provide assistance and resistance as well. The observed limited torque and movement range in wrist flexion, however, restrict the usefulness of the exosuit for stretching across the full range of motion; therefore, further research should focus on overcoming this limitation (e.g., by increasing the maximum contraction ratio of the fPAM or by finding an alternative routing for the fPAMs). The application of the exosuit as a movement assistive device for daily activities is currently limited because the base of the exosuit is not portable, so the user must be tethered while wearing the exosuit. Also, the implementation of more advanced control and sensing is necessary for this application. Compared to the currently used algorithm, the control should be improved to better utilize the quick dynamic response of the fPAM <cit.> and produce a quicker response of the exosuit.
The physical capabilities of the device, however, seem to be satisfactory for providing assistance: people do not use the full range of motion for most activities <cit.>, and the required assistive torque for most activities is also smaller than the peak biological torque used as a reference in this work, so the discovered limitations in torque or range of motion might not limit the exosuit's use. § CONCLUSION We presented a novel soft wrist exosuit with a symmetric arrangement of four fabric pneumatic artificial muscles to move the wrist in flexion/extension and ulnar/radial deviation. We introduced a two-dimensional model of the fPAM placement to calculate the torque applied by the exosuit to the wrist, and we developed a parameter optimization method for choosing the placement parameters. We optimized the model parameters to reach the peak torque of the human wrist in flexion/extension for a given user. The results show that, within the defined parameter bounds, the modeled exosuit torques are close to or higher than the biological reference torques, except for high wrist flexion angles over 46^∘. To validate the model, we measured the torque that a flexor fPAM applies to the wrist. We derived the parameters of the fPAM by conducting a tensile testing measurement, and we determined the placement parameters by tracking the endpoint positions with a motion capture system. We modeled the torque both by using fixed position parameters and by including a model of fabric stretching at the mounting points. Compared to the former method, the latter method reduced the model error from 34.9% to 26.1%. We also measured the biological and exosuit-assisted range of motion along the two degrees of freedom of the wrist, which confirmed the limited range primarily in wrist flexion. Finally, we demonstrated the capability of the exosuit to move the wrist, first, to a desired position in flexion/extension, and then to follow a desired trajectory in two degrees of freedom, using an antagonistic feedback control algorithm. The exosuit with the given control was able to track the desired trajectory with RMSE of 5.18^∘ in flexion/extension and 7.12^∘ in ulnar/radial deviation. Our future work will explore how the exosuit can be used for at-home rehabilitation exercises (e.g., stretching) and movement assistance. Additionally, we aim to improve both the exosuit design and control based on the observations in Section <ref> to enhance the performance of the exosuit for the targeted applications. § ACKNOWLEDGEMENTS We thank Mark Plecnik for providing access to the laser cutting and tensile testing machines used for experiments. Also, we thank Nicholas Naclerio for useful discussions about fabric pneumatic artificial muscles.
"authors": [
"Katalin Schäffer",
"Yasemin Ozkan-Aydin",
"Margaret M. Coad"
],
"categories": [
"cs.RO"
],
"primary_category": "cs.RO",
"published": "20231027022636",
"title": "Soft Wrist Exosuit Actuated by Fabric Pneumatic Artificial Muscles"
} |
Shunji Nishimura (orcid 0000-0001-6600-5136, [email protected], url https://onct.oita-ct.ac.jp/seigyo/nishimura_hp/), National Institute of Technology, Oita College, 1666 Maki, Oita City, Oita Prefecture, Japan In the field of Boolean satisfiability problems (SAT), at-most-k constraints, which restrict the number of true target variables to at most k, are often used to describe objective problems. At-most-k constraints are used not only for absolutely necessary constraints (hard constraints) but also for challenging constraints (soft constraints) to search for better solutions. When encoding at-most-k constraints into Boolean expressions, the number of Boolean expressions essentially increases exponentially with the number of target variables, so at-most-k often becomes difficult to handle for a large number of variables. To solve this problem, this paper proposes a new encoding method for at-most-k constraints, called approximate-at-most-k, which, on the one hand, requires fewer Boolean expressions in total than conventional methods. On the other hand, it loses completeness, i.e., some Boolean value assignments that satisfy the original at-most-k are not allowed with approximate-at-most-k; hence, it is called approximate. Even without completeness, there are still potential benefits when such constraints are used only as soft constraints. For example, approximate-at-most-16 out of 32 variables requires only 15% of the literals of a conventional at-most-k encoding and covers 44% of the solution space. Thus, approximate-at-most-k can serve as an alternative encoding method for at-most-k, especially for soft constraints. SAT at-most-k encodings soft constraints Approximate-At-Most-k Encoding of SAT for Soft Constraints 2023-10-25 ============================================================§ INTRODUCTION SAT, or the Boolean satisfiability problem, demonstrates its applicability to real-world problems in many areas. To tackle a real-world problem, we have to describe it as a combination of constraints compatible with SAT, and a commonly used constraint is at-most-k, which, for a set of Boolean variables, is satisfied if at most k of them are true in total. One of the problems with at-most-k constraints is the combinatorial explosion; the number of encoded Boolean expressions for an at-most-k constraint explodes as the number of target variables increases. To alleviate this explosion problem, several encodings <cit.> have been proposed, such as binary encoding, sequential counter encoding, commander encoding, product encoding, etc. While all of these are genuinely at-most-k constraints, this paper provides a different approach to the problem that attempts to drastically reduce the number of Boolean expressions, in exchange for losing an accurate count of trues. The encoding method we propose, called approximate-at-most-k, is no longer a genuine at-most-k because some of the solutions of the original at-most-k may not be included in the solution space of our approximate-at-most-k. For example, the assignment (X_1,X_2,X_3,X_4,X_5)=(true,true,false,false,true) has three trues, so it satisfies at-most-3, but it may not satisfy approximate-at-most-3, depending on the model implemented at that time.
In terms of proof theory, where the judgment of a SAT solver is regarded as the existence of a proof, we can say that approximate-at-most loses completeness. Despite the lack of completeness, approximate-at-most can still be useful in some cases, such as when it is used as a soft constraint <cit.>, that is, a preference describing optional desires around the objective problem. For an example of university timetabling <cit.>, on the one hand, it is necessary that the same teacher not teach two different classes at the same time (this is called a hard constraint). On the other hand, a university policy is better suited to soft constraints; it may be preferable that only 5 teachers, rather than 10, have continuous classes. For soft constraints, we assume that it is not necessary to evaluate satisfiability exactly, and we can compare the benefit of the solution coverage with the cost of their Boolean expressions. The fundamental idea of approximate-at-most-k is shared with fractional encoding <cit.>. While fractional encoding has completeness, approximate-at-most-k does not, as mentioned above, and focuses only on reducing the number of Boolean expressions.§ APPROXIMATE AT-MOST-K ENCODING §.§ Fundamental idea An example is shown in Fig. <ref>, which illustrates the idea of approximate-at-most-k encoding. First, set A of four variables (depicted as circles) must be constrained by at-most-2. Next, the number of trues in A_1, the left half of A, constrains variables B_1, the left half of the set of variables B. Specifically, as follows: * when 0 trues in A_1, B_1 is constrained by at-most-0, * when 1 true in A_1, B_1 is constrained by at-most-2, * when 2 trues in A_1, B_1 is constrained by at-most-4 (which imposes no restriction). In general, B_1 is dynamically constrained by the number of trues in A_1. Since at most two in A can be true, at most four in B can be true. Thus, these constraints in total behave as an at-most-4 constraint on B. Note that this is not a proper at-most-4 constraint because some of the possible solutions are missing. For example, if B_1 has three trues and one true in the other variables of B, the right half, then that case satisfies an at-most-4 constraint on B but does not satisfy our idea given above. Actually, in that case, A_1 needs two trues and the right half of A needs (at least) one true, and that is not possible under the at-most-2 constraint on A. Because of this incompleteness, we call the idea approximate-at-most. We must be careful not to use approximate-at-most to determine satisfiability, but only to search for better solutions together with soft constraints. This approximate-at-most-4-of-8 constraint is composed of a few at-most-2-of-4 constraints. Since the number of Boolean expressions for at-most constraints grows exponentially with the number of target variables, roughly speaking, we may expect the number of Boolean expressions of approximate-at-most-k to be reduced in some cases, as it were "single large or several small." §.§ 2 × 2 models The idea can be applied recursively to a tree structure, as shown in Fig. <ref>. Two Boolean variables in the same column at a parent node constrain the corresponding four Boolean variables at the child node; when there are n (0 ≤ n ≤ 2) trues in the two parent variables, the four child variables are constrained by at-most-2n.
In Boolean expressions, ¬ v_1 ⇒ AtMost_0{u_11,u_12,u_21,u_22} ∧ ¬ v_2 ⇒ AtMost_2{u_11,u_12,u_21,u_22}, where v_i and u_ij denote variables in a parent node and child node respectively, and we assume the v_i are in order encoding <cit.>, i.e., v_2 ⇒ v_1 holds. By giving at-most-k (0 ≤ k ≤ 4) at the four variables of the top, these models generate approximate-at-most-(k/4 · 2^m+1) of 2^m+1 at the bottom, where m=1,2,⋯ denotes the height of the tree. §.§ Generalized h × w models More generalized models are shown in Fig. <ref>, in which each node except the bottom ones has a matrix of variables with height h and width w. On the same hierarchy level, the height and width of nodes are identical. Between a parent column of height h_i and its children of h_i+1× w_i+1, we need h_i+1· w_i+1 to be a multiple of h_i, i.e., h_i+1· w_i+1 mod h_i = 0; when h_i+1· w_i+1 = a · h_i for some a and n trues in the parent column, the child variables are constrained by at-most-(a · n). In Boolean expressions, ⋀_j=1,⋯,h (¬ v_j ⇒ AtMost_h' · w' · (j-1)/h{child variables of v}), where v_j denotes parent variables of height h and the child node has h' · w' variables. We also assume the v_j are in order encoding, i.e., v_j ⇒ v_j-1 (j=2,⋯,h) holds. Leaf nodes at the bottom are simply sets of h_n × m variables, where h_n is the parent's height and m is arbitrary. By giving at-most-k at the top node, these models generate approximate-at-most-(k/(h_1 · w_1) ·Π w_i · h_n · m)-of-(Π w_i · h_n · m). For the sake of ease, let us also use a fraction to denote the number of the constraints as approximate-at-most-a/b-of-n, when the top node has b variables and at-most-a is given, to generate approximate-at-most-(a/b · n)-of-n in the integer expression.§ EXPERIMENTAL RESULTS All software materials for these experiments are on the GitHub repository <cit.>. §.§ 2 × 2 models Here are the results for the number of Boolean expressions and the coverage of the solution space for 2 × 2 models. The CNF (Conjunctive Normal Form) of approximate-at-most-1/2-of-16 shows the following: * auxiliary variables: 12 * clauses: 58 * literals: 168, where every two variables in columns are encoded by order encoding <cit.> and each at-most-k constraint for small numbers employs binomial (pairwise) encoding. There are 39,203 possible solutions, and approximate-at-most covers 68.2% of them overall, which means assignments with fewer trues (7 down to 0) are also included. For the solutions of just 8 true variables, there are 12,870 possible solutions, and approximate-at-most covers 38.1% of them. While it depends on the objective of using SAT whether to focus on overall possible solutions or only on solutions with the maximal count, this paper mainly deals with the former, overall possible solutions, from this point on. Comparing approximate-at-most to conventional encoding methods, we focused on the literal number and chose counter encoding <cit.>, which demonstrates superior results on the literal number among the other encodings: binomial, binary, commander, and product <cit.>. Counter encoding of at-most-k is as follows. ⋀_i=1^n-1 (¬ x_i ∨ r_i,1), ⋀_j=2^k (¬ r_1,j), ⋀_i=2^n-1⋀_j=1^k (¬ r_i-1,j ∨ r_i,j), ⋀_i=2^n-1⋀_j=2^k (¬ x_i ∨ ¬ r_i-1,j-1 ∨ r_i,j), ⋀_i=2^n (¬ x_i ∨ ¬ r_i-1,k), where x_1,⋯,x_n are target variables and r_i,j denotes auxiliary variables. The black and gray lines in Fig. <ref> indicate the literal number of approximate and counter at-most-1/2 respectively.
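For reference, the counter encoding clauses listed above can be generated as in the following minimal Python sketch. It is an illustration, not the code in the cited repository; DIMACS-style integer literals are assumed, where a negative integer denotes a negated variable.

def at_most_k_counter(x, k, top):
    """Counter (sequential) encoding of at-most-k over variables x (positive ints).

    Assumes 1 <= k < len(x). Auxiliary registers r_{i,j} receive fresh variable
    ids starting at top + 1. Returns (clauses, new_top).
    """
    n = len(x)
    r = [[0] * (k + 1) for _ in range(n - 1)]   # r[i][j] for i = 1..n-1, j = 1..k
    nxt = top
    for i in range(n - 1):
        for j in range(1, k + 1):
            nxt += 1
            r[i][j] = nxt
    cls = []
    for i in range(n - 1):                       # (not x_i) or r_{i,1}
        cls.append([-x[i], r[i][1]])
    for j in range(2, k + 1):                    # the first variable alone cannot reach count 2
        cls.append([-r[0][j]])
    for i in range(1, n - 1):
        for j in range(1, k + 1):                # the running count never decreases
            cls.append([-r[i - 1][j], r[i][j]])
        for j in range(2, k + 1):                # a true x_i increments the running count
            cls.append([-x[i], -r[i - 1][j - 1], r[i][j]])
    for i in range(1, n):                        # forbid exceeding k
        cls.append([-x[i], -r[i - 1][k]])
    return cls, nxt

# e.g., at-most-2 over 8 variables numbered 1..8, with auxiliaries starting at 9
clauses, top = at_most_k_counter(list(range(1, 9)), 2, top=8)

Counting the literals emitted by such a generator for a given n and k reproduces the counter literal numbers used in the comparison below.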
Literal rate is defined as below: literal rate = (approximate literals) / (counter literals). As expected, approximate consumes fewer literals than the conventional encoding. The solution coverage, defined as coverage = (solutions by approximate) / (all solutions), is indicated by the black line in Fig. <ref>. Predictably, it becomes lower as the number of target variables increases. Compared with the literal rate of approximate/counter, the dashed line (same as in Fig. <ref>), the coverage is larger than the literal rate at 8∼64 variables, but it turns around at 128 variables. Since we naturally hope to gain maximal coverage with fewer literals, the value of coverage per literal rate, the gray line, can be regarded as efficiency from that point of view, i.e., efficiency = coverage / (literal rate). In other words, the efficiency indicates a kind of advantage over counter encoding, with consideration for solution coverage. In the range of Fig. <ref>, approximate-at-most-1/2 of 32 variables performs the most efficiently: 44.5% coverage at a 15.4% literal rate (approximate: 376 / counter: 2,449). §.§ h × w models For arbitrary k and n, where k < n, there are in general many h × w models that implement approximate-at-most-k of n. For the example of approximate-at-most-5 of 10, the following are available: * h_1=2, w_1=2, m=3, at-most-2 on the top, fix 1 false and 1 true on the bottom variables, * h_1=2, w_1=2, h_2=2, w_2=2, m=2, at-most-2 on the top, fix 3 falses and 3 trues on the bottom variables, where h_i, w_i, and m are as shown in Fig. <ref>, and there are 8 other models. Among such models implementing approximate-at-most-k, we are naturally interested in the most efficient model, and Fig. <ref> shows the best efficiencies (= coverage / literal rate vs counter) for each approximate-at-most-k of 10, 20, and 30. For instance, approximate-at-most-5 of 10 has the best efficiency on the model: h_1=2, w_1=3, m=2, at-most-3 on the top, fix 1 false and 1 true on the bottom variables, and shows the following. * approximate literals: 140 / counter literals: 216 = 64.8% * solution coverage: 64.9% * efficiency: 1.0 From the graph as a whole, we can expect high efficiency when the k value increases, but larger numbers of target variables depress it. Focusing on the region around approximate-at-most-25 of 30 (the gray line), the efficiency is high at 24 and 26 but relatively low at 25. The models at 24 and 26 are based on h_1=2, w_1=4, h_2=2, w_2=2, m=2, and 25 is based on h_1=2, w_1=3, h_2=2, w_2=3, m=2. The former model seems efficient, but approximate-at-most-25 cannot be implemented with that model. More specifically, the model converts at-most-6 on the top into approximate-at-most-24 of 32 on the bottom (the case of 24), and at-most-7 on the top into approximate-at-most-28 of 32 on the bottom (the case of 26). Since fixing the bottom variables to false decreases the number of target variables and fixing them to true decreases both the count number and the number of target variables, approximate-at-most-25 of 30 cannot be generated by the former, more efficient model with any adjustment of fixed target variables. This makes approximate-at-most-25 relatively inefficient.§ DISCUSSION We studied the coverage, which indicates how much of the possible solution space approximate-at-most-k covers. There are two types of coverage: overall coverage and maximum-count coverage; for example, for at-most-8 of 16 variables, these consider all counts of 0∼8 trues and only the count of exactly 8 trues, respectively.
In this paper, we have mainly focused on the former, because it aims at the entire solution space and seems to be a comprehensive notion. However, when using an at-most-k constraint as a soft constraint, we generally want to find the maximum-count case, such as exactly 8 trues for at-most-8. Thus, the maximum-count coverage will be investigated in future work. If an approximate-at-most-k covers 50% of the possible solutions, then every possible solution is included in the approximate-at-most-k solutions with a probability of 50%. In a real-life problem there is usually a huge space of possible solutions; assuming, say, 10 target solutions, we can find at least one of them with a probability of 99.9% (1-0.5^10). From this point of view, the practical utility can be higher than the raw coverage percentage suggests, although a quantitative evaluation is required.§ CONCLUSION This paper proposes a new method for efficiently encoding at-most-k constraints into SAT, called approximate-at-most-k, which consumes fewer Boolean expressions than conventional encodings. Approximate-at-most-k achieves this low consumption at the cost of completeness of the solution space, so it cannot be used to determine satisfiability. It remains useful, however, for searching for better solutions together with soft constraints, using fewer Boolean expressions. The experimental results support that approximate-at-most-k consumes considerably fewer literals than the conventional counter encoding. Considering the coverage of the solution space, we observed the relationship between the reduction rate of literals and the coverage; for example, approximate-at-most-16 of 32 consumes only 15% of the literals of counter encoding and covers 44% of the solution space. When solving a real-world problem, approximate-at-most-k should be considered when there are massive soft constraints and insufficient computational resources.
"authors": [
"Shunji Nishimura"
],
"categories": [
"cs.LO"
],
"primary_category": "cs.LO",
"published": "20231027051200",
"title": "Approximate-At-Most-k Encoding of SAT for Soft Constraints"
} |
Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States [email protected] Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States The ever-growing intersection of quantum electrodynamics (QED) and molecular processes has shown remarkable and unanticipated advancements in altering molecular properties and reactivity by exploiting light-matter couplings. In recent years, multiple ab initio methods have been developed to compute the eigenstates of molecular systems strongly coupled to cavities, ranging from the mean-field to quantum many-body methods. The quantum many-body methods, such as coupled-cluster theories, usually rely on the quality of mean-field reference wavefunctions. Hence, developing efficient and physically reliable mean-field approaches for molecular quantum electrodynamics problems is crucial. The current widely used methods, such as QED Hartree-Fock and the self-consistent counterpart, are limited to specific coupling regimes. In this work, we developed a variational transformation-based molecular quantum electrodynamics mean-field method, namely VT-QEDHF, for light-matter interaction at arbitrary coupling strength. The numerical benchmark demonstrates that the VT-QEDHF method naturally connects both QEDHF and self-consistent QEDHF methods at the two limits, showcasing the advantage of VT-QEDHF across all coupling strengths. First-principles molecular quantum electrodynamics theory at all coupling strengths Yu Zhang January 14, 2024 =================================================================================== § INTRODUCTION. The increasing overlap between quantum electrodynamics (QED) and molecular activities has led to breakthroughs in tailoring molecular properties and activities through light-matter interactions <cit.>. When strongly coupled, both photons and electrons (or other elementary excitations) within materials become essential and intermingle on an equally quantized footing. In such an environment, the concept of independent “free” particles ceases to exist. Instead, the elementary excitations in the strong light-matter interaction regime are polaritons, which represent a superposition between quantized light and material <cit.> and display characteristics of both light and matter. Research suggests that material properties can be modulated via these polaritons, engendering a diversity of photophysical and photochemical phenomena; that is, polariton chemistry <cit.>. Given that the energies of photons and the strength of light-matter interactions can be fine-tuned through cavity manipulations, the robust coupling between light and matter unveils a novel paradigm for modifying material characteristics, with a spectrum of possible applications including lasing <cit.>, long-distance energy transmission <cit.>, Bose-Einstein condensates <cit.>, and various chemical processes <cit.>. In the thriving field of polariton chemistry (or molecular quantum electrodynamics at large), investigating the influence of arbitrary light-matter coupling strengths on molecular properties and behaviors necessitates a robust and universally applicable theoretical approach <cit.>. However, the absence of a reliable theoretical framework that seamlessly traverses all coupling regimes hinders the full potential of QED-assisted modulation of molecular properties.
Despite significant progress in understanding the effects of confined fields on many molecular characteristics, a comprehensive and first-principles framework for exploring these phenomena across all coupling regimes is still lacking. To date, variational theories <cit.>, QED Hartree-Fock (QEDHF) <cit.>, semi-empirical method <cit.>, QED Density Functional Theory (QED-DFT) <cit.>, QED coupled cluster (QED-CC) <cit.>, QED Time-Dependent Density Functional Theory (QED-TDDFT) <cit.>, and Diffusion Quantum Monte Carlo <cit.> methods have been proposed to study the light-matter interactions. In particular, post-Hartree-Fock methods depend on an optimal mean-field theory (as the reference state) to achieve better accuracy. Although they are effective in addressing several aspects of molecular interactions within quantum fields, the existing QEDHF methods <cit.> and their self-consistent counterparts <cit.> are primarily limited to specific coupling strengths.To address this research gap, we introduce a variational transformation <cit.> based first-principles QED method, referred to as the VT-QEDHF. This universal approach is designed to function effectively across arbitrary coupling strengths, thereby providing an invaluable tool for exploring and understanding light-matter interactions in a more comprehensive and efficient manner. The VT-QEDHF method transcends the limitations of traditional perturbative and strong coupling approaches, offering a more universal perspective on molecular processes in QED environments. Within the VT-QEDHF framework, the photonic field contribution is accounted for in a nonperturbative manner, ensuring the attainment of the exact wave function in the limit of infinite coupling, thereby providing a consistent and reliable molecular orbital description across various coupling regimes. This first-principles approach not only captures the electron-photon correlation (at the mean-field level) effectively but also elucidates the cavity effects on the electronic ground state while maintaining a manageable computational cost. By bridging the theoretical gap across coupling strengths, the VT-QEDHF method is anticipated to open new avenues for the study and manipulation of molecular properties and behaviors within QED environments, offering enriched insights and enhancing the predictability and control over light-matter interactions. § THEORYThe total light-matter Hamiltonian of molecular quantum electrodynamics can be described as the widely used nonrelativistic Pauli-Fierz Hamiltonian in the dipole approximation <cit.>,Ĥ_PF= Ĥ_e + ∑_α[ ω_α (â^†_αâ_α+1/2) +√(ω_α/2)λ_α·D̂ (â^†_α + â_α) + 1/2 (λ_α·D̂)^2 ].This Hamiltonian is often referred to as the Pauli-Fierz (PF) Hamiltonian. Where Ĥ_e = T̂_e + V̂ is the bare molecular Hamiltonian (excluding the nuclear kinetic operator) which includes all Coulomb interactions V̂ between electrons and nuclei as well as the electronic kinetic energy operators T̂_e, which is given by the expression,Ĥ_e = ∑_μνh_μνĉ^†_μĉ_ν + 1/2∑_μνλσI_μνλσĉ^†_μĉ^†_λĉ_σĉ_ν.Where h and I are one-electron and two-electron integrals.D̂ in Eq. <ref> is the molecular dipole operator,D̂=∑_i^N_n z_iR̂_i -∑_i^N_e er̂_i ≡D̂_n + D̂_e,including electronic D̂_e and nuclear D̂_n components. λ_α =√(1/ϵ_0 V)e_α≡λ_αe_α characterizes the coupling between the molecule and cavity quantized field. ω_α and e_α represent the frequency and polarization of the electric field of cavity photon mode α. 
The last term describes the dipole self-energy (DSE), which is essential to ensure the Hamiltonian is bounded from below and displays the correct scaling with the system size <cit.>.The eigenstate of the molecular QED Hamiltonian can be readily obtained by solving the time-independent Schrödinger equation Ĥ_PF|Ψ⟩ = E|Ψ⟩,where |Ψ⟩ is the correlated electron-photon wavefunction, though the exact solution to the above quantum many-body equation is nontrivial.The mean-field approach is usually the first and fastest method to approximate the quantum many-body problems.At the mean-field level, the QED Hamiltonian can be approximated by |Ψ⟩≈|HF⟩⊗|0⟩ where |0⟩ denotes the photon vacuum state. Consequently, the total energy can be easily introduced via,E_tot = E_HF + 1/2∑_α⟨ (λ_α·D̂)^2⟩. Where the E_HF denotes the electronic HF energy. The DSE contribution to the total energy (second term on the right-hand side of the above equation) can be evaluated via the DSE-mediated one-electron and two-electron integrals (see more details in Supplementary Materials (SM)). Thus, the corresponding Fock matrix (for computing density matrix and molecular orbital properties) can be readily derived by taking the partial derivative of the total energy with respect to the density matrix <cit.>. The resulting QEDHF method (in the Fock state representation) provides an economical way to compute the polariton ground state and can serve as the reference for other post-HF methods. The key drawback of the QEDHF method in the Fock representation is the slow convergence with the Fock state in the strong coupling limit, which can lead to incorrect behavior, such as incorrect origin-dependency and frequency dependency <cit.>, making the QEDHF method in Fock state representation more suitable for weak coupling systems (as the Fock state is the eigenstate of the interaction Hamiltonian in the λ→ 0 limit). Such drawbacks can be mitigated with the coherent state (CS) representation <cit.>,|z_α⟩≡ e^z_αâ_α^† - z_α^*â_α|0⟩≡Û(z_α)|0⟩,where z_α =-⟨λ_α·D̂⟩_HF/√(2ω_α). It's clear from the above equation that CS is a linear combination of complete Fock states where the coefficients are determined by the displacement due to the light-matter coupling strength. The resulting QEDHF in CS representation <cit.> thus mitigates the origin-variance problem. However, the molecular orbitals and Fock matrix remain origin-dependent charged systems in <cit.>.Only recently, a fully origin-invariant formulation was developed within a self-consistent strong coupling QEDHF formalism (namely SC-QEDHF). The SC-QEDHF framework is stimulated by the fact that, in the infinite coupling limit (i.e., Ĥ_e ≪Ĥ_p + Ĥ_ep or λ_α→∞), the Hamiltonian is dominated by the photon and electron-photon interaction terms, and the corresponding wavefunction can be well approximated by a Gaussian state,|Ψ^∞⟩=e^-∑_αλ_α/√(2ω_α)e_α·D̂(â_α - â^†_α)|HF,0⟩≡Û_λ|HF,0⟩.This is widely recognized as the polaron transformation within the context of electron-phonon interaction scenarios <cit.>. This approach has recently been adapted for use in polariton chemistry <cit.>. Consequently, we can employ the Û_λ operator to transpose the Hamiltonian into a new framework, wherein the resultant transformed Hamiltonian effectively eliminates the explicit electron-photon coupling terms. 
In particular, after undergoing the transformation, the electronic and photonic operators becomeÛ^†_λĉ_νÛ_λ =∑_μĉ_μ X_μν,Û^†_λâ_αÛ_λ =â_α -λ_α/√(2ω_α)e_α·D̂,where X_μν =exp[-∑_αλ_α/√(2ω_α)e_α·D̂(â^†_α - â_α) ]|_μν.Consequently, under the polariton transformation, the resulting Hamiltonian becomes (denoted as Ĥ^p)Ĥ^p = Û^†_λĤ_PFÛ_λ = Û^†_λĤ_eÛ_λ + ∑_αω_αâ^†_αâ_α.The transformed electronic Hamiltonian H̃_e ≡Û^†_λĤ_eÛ_λ is formally the same as the original one with the electronic operators dressed by the X operator. Since the dipole coupling operator e_α·D̂ in the X operator is not diagonal, it's more convenient to transform the operator into the dipole basis (defined as the eigenstate of e_α·D̂ operator, denoted by the symbols p, q, r, s and the corresponding eigenvalues are denoted as η_p). Then, the corresponding QEDHF energies and Fock matrix can be derived. More details can be found in Ref. <cit.>.To bridge the treatment in weak and strong coupling limits, here we present a variational transformation-based QEDHF method for the arbitrary coupling regime. The central idea is that, instead of using Û_λ, we adopt variational parameters f_α to control the variational transformationÛ_f <cit.> (also called Lang-Firsov transformation <cit.>)Û_f = e^-∑_αf_α/√(2ω_α)e_α·D̂(â_α-â^†_α).which helps the seek for an optimal mean-field approximation to the cavity QED Hamiltonian. Such idea has been previously used in strong electron/exciton-phonon interactions, including exciton transport <cit.>, polaron formation <cit.>, and dissipative quantum transport <cit.>.With the variational transformation (VT), the resulting Hamiltonians become,Ĥ({f_α}) =H̃_e({f_α}) + ∑_αω_αâ^†_αâ_α + ∑_α√(ω_α/2)(Δλ_α) e_α·D̂(â^†_α+â_α )+ (Δλ_α)^2/2(e_α·D̂)^2.where Δλ_α=λ_α - f_α and the parameters f_α are to be variationally minimized. H̃_e({f_α}) is the VT dressed electronic Hamiltonian, where the original electronic operator becomes Û^†_f ĉ_νÛ_f = ∑_νĉ_ν X^f_μν and X^f_μν =exp[-∑_αf_α/√(2ω_α)e_α·D̂(â^†_α - â_α) ]|_μν. The detailed derivation can be found in Supplementary Materials (SM).Compared to the fully transformed polariton Hamiltonian in Eq. <ref>, the variationally transformed Hamiltonian in Eq. <ref> includes a partially dressed electronic Hamiltonian H̃_e({f_α}) and residues in the bilinear coupling and DSE terms (controlled by f_α). The last two terms in Eq. <ref> are referred to as the residual bilinear coupling and DSE terms, respectively. It's obvious that when f_α/λ_α=0 (or 1), Eq. <ref> reduces to the original PF Hamiltonian Ĥ_PF or fully transformed polariton Hamiltonian Ĥ^p. It should be noted that the transformed Hamiltonians in Equations <ref> and <ref> are both exact, as no approximation was made in the transformation. 
The exact diagonalization of the two Hamiltonians should give the same eigenstates.Applying the mean-field approximation to the wavefunction allows us to define the VT-QEDHF wave function as|Ψ⟩ = e^-f_α/√(2ω_α)e_α·D̂ (â_α - â^†_α)|HF, 0⟩≡Û_f |HF, 0⟩.In the dipole basis <cit.>, this becomes|Ψ⟩ = e^-f_α/√(2ω_α)∑_p η_p ĉ^†_p ĉ_p (a_α - a^†_α)|HF, 0⟩,and the transformed electronic operators in the dipole basis are given byÛ_f^†ĉ_p Û_f = ∑_νĉ_p X^f_p,where X^f_p = exp[-∑_αf_α/√(2ω_α) (e_α·D̂)_pp (â^†_α - â_α) ].Consequently, the VT-QEDHF energy in the dipole basis becomes E =∑_pqh̃_pqρ_pq G_pq + 1/2∑_pqrsĨ_pqrs( ρ_pqρ_rs - 1/2ρ_psρ_rq) G_pqrs + f^2_α/2∑_p ρ_pp[ (e_α·D)_pp - η_p ]^2+ f^2_α/2∑_pq( ρ_ppρ_qq - 1/2ρ_pqρ_qp) [ (e_α·D)_pp - η_p ] [ (e_α·D)_qq - η_q ] + (Δλ_α)^2/2⟨HF|⟨0| (e_α·D̂)^2 |HF⟩|0⟩. Here, h̃ and Ĩ represent one-electron and two-electron integrals in the dipole basis, respectively, ρ_pq is the density matrix, and G are the Franck-Condon factors derived by integrating out the photonic degrees of freedom from the VT-dressed one-/two-electron integrals (i.e., ⟨0| (X^f)^†_p X^f_q |0⟩ <cit.>). The first two terms in Eq. (<ref>) are formally the same as the HF energy of the pure electronic system, but with one-/two-electron integrals replaced by the VT-dressed ones. The third and fourth terms account for relaxation in the dipole basis set <cit.>. Finally, the last term in Eq. (<ref>) represents the residual DSE.The explicit form of G can be found in the Supplementary Material (SM). The corresponding Fock matrix can be derived from the energy derivatives with respect to the density matrix. Moreover, the optimal {f_α} can also be optimized during the SCF procedure via the energy derivatives with respect to f_α (i.e., ∂ E/∂ f_α). The detailed formulas for the Fock matrix and ∂ E/∂ f_α, which are used for updating the density matrix and variational parameters, can be found in the SM. Additionally, VT-QEDHF can be augmented with the CS basis set, defined by the residue bilinear coupling as z^f_α≡ -f_α⟨e_α·D̂⟩/√(2ω_α) = -f_α/λ_α z_α, leading to the effective ansatzΨ = e^-∑_α pf_α/√(2ω_α)η_p ĉ^†_p ĉ_p (a_α - a^†_α)Û(z^f_α) |HF⟩|0⟩.This resulting formalism is denoted as the VT-QEDHF-CS method.§ NUMERICAL EXAMPLES We demonstrate the validity and advantages of the VT-QEDHF method across various coupling strengths using a sample molecule (C_2N_2H_6 isomer, with the STO-3G basis set employed). Configurations of the isomer along the trans-cis pathway are detailed in the Supplementary Material (SM). Figure <ref> plots the ground state energy of the C_2N_2H_6 molecule using different methods. The VT-QEDHF method with a predefined variational parameter f (i.e., without optimizing f) is referred to as the VT-QEDHF(f) method. This method shows a natural progression to the QEDHF and SC-QEDHF methods at the limits of f=0 and f=λ, respectively. The red star in Figure <ref> indicates the optimized VT-QEDHF energy, which is the lowest among the VT-QEDHF(f) energies as shown. The optimized f values for the weaker (Figure <ref>a) and stronger (Figure <ref>b) coupling cases are 0.53 and 0.73, respectively. These values suggest that stronger couplings necessitate greater transformation in the Hamiltonian, with the corresponding results more closely aligned with the SC-QEDHF method.Furthermore, the additional optimization of f does not notably amplify the SCF optimization workload. 
For the calculations in Fig <ref>, the SC-QEDHF method reaches convergence after 26 iterations, while the VT-QEDHF method meets the same criteria after 36 iterations, indicating a marginal increase in computational duration. Although the VT-QEDHF method incorporates both VT-dressed and DSE-mediated one-/two-electron integrals, the computation of the VT-dressed one-electron and two-electron integrals predominantly contributes to the bottleneck. This computation must be undertaken in every iteration, which is the same in the SC-QEDHF method. Consequently, the computational expenses of the VT-QEDHF and scQEDHF methods are nearly equivalent.Subsequently, we determined the polariton ground state energies along the trans-cis reaction pathway using the HF and QEDHF methods. These results are depicted in Fig. <ref>, with the photon frequency and coupling parameter (λ) set at 0.1 and 0.5 au, respectively. Compared with the QEDHF and SC-QEDHF methods, VT-QEDHF captures a larger amount of electron-photon correlation. This leads to reduced ground state energies throughout the reaction pathway, underscoring its reliable performance along the reaction coordinate.We investigated the optimal variational transformation f across varying photon frequencies and electron-photon coupling strengths. The LiH molecule is used here to scan a wide parameter space efficiently. These results are illustrated in Fig. <ref>. As anticipated, varying electron-photon coupling strengths dictate distinct optimal values for f in the variational transformation. Moreover, f displays a consistent increase with the electron-photon coupling strength λ. As λ tends toward small values, the ratio f/λ gravitates towards zero or a finite value contingent on photon energies. Nevertheless, in the weak coupling scenario where λ→ 0, the f/λ ratio remains low, aligning with a minimal (or no) polariton transformation limit. Conversely, the f/λ ratio is near unity in the strong coupling domain, reflecting a comprehensive polariton transformation. Within the intermediate range, the variational transformation culminates with a finite value for f/λ. This highlights the imperative nature of the variational transformation across a broad parameter regime to obtain optimal mean-field ground states. § SUMMARYIn summary, this study introduces the variational transformation-based electronic structure theory (VT-QEDHF) for molecular QED applications encompassing all ranges of coupling strengths. This methodology adeptly captures the optimal mean-field part of both electron-photon and photon-mediated electron-electron correlations. Furthermore, this framework is universally applicable to any fermion-boson interaction, making it suitable for studying the coupling of electrons with other quantized bosonic entities such as plasmons and phonons. As an example, our approach can be extended to the investigation of polaron formation from the first principles.While VT-QEDHF is robust across all coupling strengths at the mean-field level, it inherently underestimates both intrinsic and photon-mediated electronic correlations. To address this limitation, our forthcoming research will focus on the integration of VT-QEDHF into QED-CCSD and EOM-CCSD frameworks. 
Given the superior performance of VT-QEDHF over the existing QEDHF and SC-QEDHF methods, we are optimistic that advanced QED-CC methods augmented with VT-QEDHF <cit.> will significantly improve correlation energy estimations in all coupling regimes.

Additional note: While drafting this manuscript, we became aware of a recent paper that employs similar concepts <cit.>. However, the variational transformation in Ref. <cit.> is limited to diagonal terms (of the dipole coupling operator) in the transformation. In contrast, our transformation is more general, and the corresponding elements are evaluated within the dipole basis.

We acknowledge support from the US DOE, Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Triad National Security, LLC ("Triad") contract Grant 89233218CNA000001 (FWP: LANLECF7). This research used computational resources provided by the Institutional Computing (IC) Program and the Darwin testbed at Los Alamos National Laboratory (LANL), which is funded by the Computational Systems and Software Environments subprogram of LANL's Advanced Simulation and Computing program. LANL is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001).

Data availability. The data supporting this study's findings are available from the corresponding author upon request.

Code availability. The developed code used for this study is available from the corresponding author upon request.

Supplementary Materials for "First-principles molecular quantum electrodynamics theory at all coupling strengths"

§ QEDHF METHOD

Like the Hartree-Fock (HF) method for purely electronic systems, the QEDHF reference wavefunction is a direct product of a single Slater determinant of electronic orbitals and a zero-photon state.

Fock State Representation. The QEDHF equations can be derived by starting with the reference state

|R⟩ = |Ψ_0⟩ ⊗ (∑_n C_n ∏_α |n_α⟩) ≡ |Ψ_0⟩ ⊗ |P⟩,

where n = (n_1, n_2, …) and |n_α⟩ = (â^†_α)^n_α/√(n_α!) |0⟩ are the normalized photon number states for mode α. |Ψ_0⟩ denotes the Slater determinant of electronic orbitals. For a given electronic state (such as an HF state), the total energy can then be minimized with respect to the photon coefficients C_n, which is achieved by diagonalizing the Hamiltonian after integrating out the electronic degrees of freedom (DOF), resulting in the dressed photonic Hamiltonian

Ĥ_P = ⟨Ψ_0|Ĥ|Ψ_0⟩ = E_M + ∑_α[ ω_α (â^†_α â_α + 1/2) + 1/2 ⟨(λ_α·D̂)^2⟩ + √(ω_α/2) λ_α·⟨D̂⟩ (â^†_α + â_α) ].

In the evaluation of the expectation value of the Dipole Self-Energy (DSE) operator, it should be noted that ⟨(λ_α·D̂)^2⟩ ≠ (λ_α·⟨D̂⟩)^2 because

(λ_α·D̂)^2 = λ_α·D̂ λ_α·D̂ = ∑_μνλσ d̅^α_μν d̅^α_λσ ĉ^†_μ ĉ^†_λ ĉ_σ ĉ_ν - ∑_μν q̅^α_μν ĉ^†_μ ĉ_ν,

where d̅^α_μν = λ_α·⟨μ|D̂|ν⟩ and q̅^α_μν = λ_α·⟨μ|q̂|ν⟩·λ_α are modified dipole and quadrupole integrals, respectively. The above derivation does not assume completeness of the one-particle basis set and is employed in Ref. <cit.>. Conversely, the second-quantized form of the square of the electric dipole operator is often approximated as the product of second-quantized electric dipole operators in many studies <cit.>,

(λ_α·D̂)^2 = ∑_μν d̅^α_μν ĉ^†_μ ĉ_ν ∑_λσ d̅^α_λσ ĉ^†_λ ĉ_σ = ∑_μνλσ d̅^α_μν d̅^α_λσ ĉ^†_μ ĉ^†_λ ĉ_σ ĉ_ν + ∑_μσ (∑_ν d̅^α_μν d̅^α_νσ) ĉ^†_μ ĉ_σ.

Nevertheless, this expression shows that the DSE can be evaluated via photon-mediated one- and two-electron integrals.
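As a concrete illustration of this point, the short sketch below folds the DSE into modified one- and two-electron integrals for a single mode, following the modification rules written out just below. The arrays d_bar and q_bar stand for the coupling-contracted dipole and quadrupole integrals; the random toy numbers only demonstrate the shapes involved and are not taken from any calculation in the paper.

```python
import numpy as np

def dse_dressed_integrals(h, I, d_bar, q_bar):
    """Absorb the DSE into the electronic integrals for one cavity mode:
    h -> h - (1/2) q_bar,   I -> I + d_bar (outer product) d_bar."""
    h_mod = h - 0.5 * q_bar
    I_mod = I + np.einsum("uv,ls->uvls", d_bar, d_bar)
    return h_mod, I_mod

# toy AO-basis arrays (symmetric, random) just to illustrate usage and shapes
nao = 4
rng = np.random.default_rng(0)
sym = lambda m: 0.5 * (m + m.T)
h = sym(rng.normal(size=(nao, nao)))          # one-electron integrals
I = rng.normal(size=(nao,) * 4)               # two-electron integrals
d_bar = sym(rng.normal(size=(nao, nao)))      # lambda . <mu|D|nu>
q_bar = sym(rng.normal(size=(nao, nao)))      # lambda . <mu|q|nu> . lambda
h_mod, I_mod = dse_dressed_integrals(h, I, d_bar, q_bar)
print(h_mod.shape, I_mod.shape)
```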
Consequently, the partial derivative of the QEDHF energy with respect to the electronic density matrix yields a new Fock matrix incorporating the DSE-mediated exchange and correlation matrix <cit.>. Alternatively, the total Fock matrix can be evaluated by modifying the one-electron and two-electron integrals,h_μν → h_μν -1/2∑_αq̃^α_μν, I_μνλσ → I_μνλσ+ ∑_αd̃^α_μνd̃^α_λσ CS Representation. In fact, Eq. <ref> can be diagonalized by the unitary transformation:Û(z) = ∏_αexp[z_αâ^†_α - z_α^* â_α]where z_α = -λ_α·⟨D̂⟩/√(2ω_α). The resulting PF Hamiltonian in the CS representation isĤ_CS = H_e + ∑_α{ω_αâ^†_αâ_α +1/2[λ_α· (D̂ - ⟨D̂⟩)]^2-√(ω_α/2)[λ_α· (D̂ - ⟨D̂⟩)](â^†_α + â_α)}.Note thatÛ^†â_αÛ = â_α + [z^*_αâ_α - z_αâ^†_α, â_α] = â_α + z_α. With the CS representation, the transformed Hamiltonian automatically ensures convergence with respect to the number of photon number states since a coherent state is a linear combination of many photon number states,|z_α⟩≡Û(z_α) |0⟩= e^-|z_α|^2/2∑_n=0^∞z_α^n/√(n!)|n_α⟩,where Û(z_α) = e^-|z_α|^2/2 e^z_αâ^†_α e^-z^*_αâ_α is used.After the unitary transformation, the Hamiltonian can be solved with the ansatz|R⟩ = |HF⟩⊗|0⟩.With this ansatz, the QEDHF energy isE_QEDHF = E_HF + 1/2∑_α⟨λ_α· [D̂ - ⟨D̂⟩)]^2 ⟩,i.e., the bilinear coupling term in Eq. <ref> does not contribute to the QEDHF total energy when the Hamiltonian is represented in the coherent-state basis. The electronic HF energy is E_HF = Tr[h + 1/2(J - K)]D, where D is the one-electron density matrix. h, J, and K are the one-electron integral, Coulomb, and exchange potentials, respectively.The DSE in CS representation becomes,Ĥ_DSE≡ 1/2∑_αλ_α· [D̂ - ⟨D̂⟩)]^2 =1/2∑_α[(λ_α·D̂)^2 + (λ_α·⟨D̂⟩)^2- 2(λ_α·D̂)(λ_α·⟨D̂⟩)] =1/2∑_αd̅^α_μνd̅^α_λσĉ^†_μĉ^†_λĉ_σĉ_ν- ∑_α[1/2q̅^α_μν + (λ_α·⟨D̂⟩) d̅^α_μν] ĉ^†_μĉ_ν+ 1/2∑_α (λ_α·⟨D̂⟩)^2where the first term is given by Eq. <ref>. Hence, substituting Eq. <ref> into the QEDHF energy expression in CS representation (Eq. <ref>) is equivalent to an electronic HF energy with modified two-electron and one-electron integrals, subject to a difference of 1/2∑_α (λ_α·⟨D̂⟩)^2,h_μν→h_μν - ∑_α[1/2q̅^α_μν + (λ_α·⟨D̂⟩) d̅^α_μν], I_μνλσ→I_μνλσ + ∑_αd̅^α_μνd̅^α_λσ.§ VARIATIONAL QED-HF THEORY FOR ARBITRARY COUPLING STRENGTH In this section, we describe the variational polaron transformation-based QED-HF method. The parameterized unitary transformation is defined asÛ_f= e^-∑_αf_α/√(2ω_α)e_α·D̂(â_α-â^†_α).where f={f_α} are the parameters to be optimized. The electronic and photonic operators, after the transformation, becomeÛ^†_f ĉ_νÛ_f= ∑_μ c_ν X_μν,Û^†_f â_αÛ_f= â_α - f_α/√(2ω_α)e_α·D̂.whereX_μν =exp[-∑_αf_α/√(2ω_α)e_α·D̂(â^†_α - â_α) ]|_μν.Note that the electronic operator is not present in X_μν in contrast to the unitary operator Û_f. To derive the above equations, we used the identity e^S A e^-S=∑_n 1/n!A^(n) where A^(n)=[S,A^(n-1)] and A^(0)=A. For the electronic operator ĉ_μ, we have (we define ζ^α_μν=f_α/√(2ω_α) (e_α·D̂)_μν(â_α-â^†_α)),ĉ^(1)_μ =[S, ĉ_μ]=∑_α,μ'νζ^α_μ'ν[Ê_μ'ν, ĉ_μ] = -∑_α,νζ^α_μνĉ_ν,ĉ^(2)_μ = [S, ĉ^(1)_μ]=∑_α,μ'ν'ζ^α_μ'ν'[Ê_μ'ν', -∑_νζ^α_μνĉ_ν] = ∑_ανν'ζ^α_μνζ^α_νν'ĉ_ν'=-∑_ν(ζ^α)^2_μνĉ_νand so on. After the transformation, the Hamiltonian becomesĤ({f_α}) =H̃_e + ∑_αω_α(â^†_α - f_α/√(2ω_α)e_α·D̂)(â_α - f_α/√(2ω_α)e_α·D̂)+ ∑_α√(ω_α/2)λ_αe_α·D̂(â^†_α+â_α- 2 f_α/√(2ω_α)e_α·D̂) + λ^2_α/2(e_α·D̂)^2 = H̃_e + ∑_αω_αâ^†_αâ_α + ∑_α√(ω_α/2)(Δλ_α) e_α·D̂(â^†_α+â_α ) +(Δλ_α)^2/2(e_α·D̂)^2.which is the Eq. <ref> in the main text and Δλ_α=λ_α-f_α. 
Hence, after the variational transformation, Eq. <ref> is formally the same as the original Hamiltonian with 1) λ_α replaced with Δλ_α and 2) photonic displacement operator dresses electronic integrals. The dressed electronic Hamiltonian readsH̃_e = h̃_μνĉ^†_μĉ_ν + Ĩ_μνλσĉ^†_μĉ^†_λĉ_σĉ_ν.whereh̃_μν =∑_μ'ν' h_μ'ν'X^†_μμ'X_νν',Ĩ_μνλσ = ∑_μ'ν'λ'σ'X^†_μμ'X^†_λλ'I_μ'ν'λ'σ' X_νν'X_σσ'are the dressed one- and two-electron integrals. §.§.§ QED energies and dipole basis The evaluation of displacement operator elements depends on the diagonalization of the e_α·D̂ matrix. Therefore, transforming the basis into a dipole basis set simplifies the process,X_μν =∏_αV^α_μ pexp[-f_α/√(2ω_α) (e_α·D̂)_p(^†_α - _α) ]V^α_pν. where V^α is the transformation matrix that diagonalizes the dipole coupling matrix (e_α·D̂)_μν. Thus, we can rewrite the original Hamiltonian in the dipole basis as introduced in Ref. Riso:2022uw.Consequently, the one-electron part of the QEDHF energy is E_T =∑_μνh̃_μνρ_μν, where the photon-dressed one-electron integral is⟨0_p|h̃_μν|0_p⟩= [∑_μ'ν'h̃_μ'ν' X^†_μμ'X_νν']= [∑_μ'ν'∑_pqh_μ'ν'∏_α U^α_μ p e^f_α/√(2ω_α) (e_α·D̂)_p(^†_α - _α) U^α_pμ'U^α_ν q e^-f_β/√(2ω_β)e_α·D̂_q(^†_β - _β)U^α_qν'] = [∑_μ'ν'∑_pqh̃_μ'ν'∏_α U^α_μ p U^α_ν qG^α_pq U^α_pμ' U^α_qν'].The two-electron integrals in the photonic vacuum state are⟨0_p|Ĩ_μνλσ|0_p⟩= ∑_μ'ν'λ'σ'X^†_μμ'X^†_νν'I_μ'ν'λ'σ' X_λλ'X_σσ'= ∑_μ'ν'λ'σ'∑_pqrs∏_α V^α_μ'pV^α_pμV^α_ν' qV_qνI_μ'ν'λ'σ' V^α_λ' r V^α_rλV^α_σ'sV^α_sσG^α_pqrs.Where the Gaussian factors areG^α_pq=exp[-∑_αf^2_α(η^α_p - η^α_q)^2/4ω_α], G^α_pqrs=exp[-∑_αf^2_α(η^α_p - η^α_q+η^α_r - η^α_s)^2/4ω_α]. Additionally, the DSE residue contributes to the total energy as well. Analogous to the QED-HF formalism, the residual DSE contribution can be computed via DSE-mediated one-electron and two-electron integrals h^p_μν=-∑_α(Δλ_α)^2/2q̃_μν, I^p_μνλσ= ∑_α(Δλ_α)^2/2d̃_μνd̃_λσ.This formulation allows the residual DSE-mediated Fock matrix and its associated energies to be expressed in a manner analogous to the electronic components.In summary, the total energy in the dipole basis is given byE =∑_pqh̃_pqρ_pqG_pq+ 1/2∑_pqrsĨ_pqrs(ρ_pqρ_rs - 1/2ρ_psρ_rq) G_pqrs+ ∑_α pf^2_α/2ρ_pp[(e_α·D̂)_pp - η_p]^2+ ∑_pqαf^2_α/2(ρ_ppρ_qq-1/2ρ_pqρ_qp)[(e_α·D̂)_pp - η_p][(e_α·D̂)_qq - η_q]+ ∑_α(Δλ_α)^2/2⟨|⟨0|(e_α·D̂)^2|⟩|0⟩. §.§.§ Gradients of total energy with respect to variational transformation parameterThe variational optimization of the {f_α} parameters is achieved via the variational minimization procedure along with the density matrix optimization. In particular, the optimal transformation parameters are obtained when the energy gradients with respect to the {f_α} are equal to zero. The gradient (in the dipole basis) is given by∂ E/∂ f_α=∂ E_e/∂ f_α + ∂ E_DSE/∂ f_α= ∑_pq-f_α (η_α,p-η_α,q)^2/2ω_αh̃_pqρ_pq G_pq + ∑_pqrs-f_α(η^α_p - η^α_q + η^α_r - η^α_s)^2/2ω_αĨ_pqrs(ρ_pqρ_rs - 1/2ρ_psρ_rq) G_pqrs + f_α∑_p ρ_pp[(e_α·D̂)_pp - η_p]^2+ f_α∑_pq(ρ_ppρ_qq-1/2ρ_pqρ_qp)[(e_α·D̂)_pp - η_p][(e_α·D̂)_qq - η_q]-Δλ_α⟨|⟨0|(e_α·D̂)^2|⟩|0⟩. | http://arxiv.org/abs/2310.18228v2 | {
"authors": [
"Xinyang Li",
"Yu Zhang"
],
"categories": [
"physics.chem-ph"
],
"primary_category": "physics.chem-ph",
"published": "20231027160709",
"title": "First-principles molecular quantum electrodynamics theory at all coupling strengths"
} |
Wolfgang Lang ([email protected]), University of Vienna, Faculty of Physics, Boltzmanngasse 5, 1090 Vienna, Austria

The relevant length scales for superconductivity are of the order of nanometers. By confining the superconducting condensate to such dimensions, many physical properties change substantially, and novel phenomena emerge, which are absent in the pristine material. We discuss various methods of creating artificial nanostructures by top-down approaches in metallic and copper-oxide superconductors and their applications. Such nanostructures can be used to control magnetic flux quanta in superconductors, anchoring them to engineered defects to avoid dissipation, guiding their motion, or building artificial flux-quanta arrangements. Nanopatterned superconductors are essential for creating model systems for basic research and enable building almost dissipationless and ultrafast electronic devices and highly sensitive sensors.

Keywords: copper-oxide superconductor; fluxonics; helium ion microscope; high-temperature superconductor; ion irradiation; Josephson effect; lithography; nano-constrictions; nanostructure; pinning lattice; superconductor; vortex; vortex commensurability; vortex ratchet

This author-created manuscript version is made available under the CC-BY-NC-ND 4.0 license (https://creativecommons.org/licenses/by-nc-nd/4.0). The version of record is available at https://doi.org/10.1016/B978-0-323-90800-9.00014-7. © 2024 Elsevier.

Cite as: W. Lang, "Nanostructured Superconductors," in T. Chakraborty (Ed.), Encyclopedia of Condensed Matter Physics (Second Edition). Academic Press, Oxford, 2024, pp. 368–380.

§ INTRODUCTION

Superconductivity originates from the long-range coherence of bosonic quantum particles that condense into a lower energy state and is thus called a macroscopic quantum phenomenon. One might wonder what the advantage of confining the superconducting condensate to the nanoscale could be. The answer lies in two essential characteristic lengths of superconductivity. The Ginzburg-Landau coherence length ξ determines the distance over which the density of superconducting carriers can change from its peak value to zero and vice versa. The size of ξ can vary from several tens of nm in metallic superconductors down to about 1 nm in the copper-oxide superconductors with high critical temperature T_c (HTS). The other length scale is set by the London penetration depth λ, which describes the decay of an external magnetic field from the edge of a superconductor toward its interior, from which it is ultimately expelled.

The ratio between these two lengths separates superconductors into two types. While so-called type-I superconductors expel a magnetic field completely (Meissner effect), the more abundant and more widely used type-II superconductors let the magnetic field enter in quantized portions of the magnetic flux Φ_0 = h/(2e), where Φ_0 is termed the flux quantum, and h and e are Planck's constant and the elementary charge, respectively. These flux quanta in a superconductor are called fluxons or vortices; the latter name results from the fact that circular supercurrents in the surrounding material stabilize them.

Coming back to the two characteristic lengths, it turns out that the so-called Ginzburg-Landau parameter, the ratio κ = λ/ξ, determines the type of a superconductor, with κ ≥ 1/√(2) leading to type-II superconductivity.
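As a quick numerical aside, the size of the flux quantum and the classification by κ can be evaluated directly; the λ and ξ values below are merely representative order-of-magnitude numbers, not measured data.

```python
import numpy as np
from scipy.constants import h, e

Phi0 = h / (2 * e)                     # magnetic flux quantum, ~2.07e-15 Wb
print(f"flux quantum Phi_0 = {Phi0:.3e} Wb")

def classify(lambda_nm, xi_nm):
    """Ginzburg-Landau parameter kappa = lambda/xi and the resulting type."""
    kappa = lambda_nm / xi_nm
    return kappa, ("type-II" if kappa >= 1 / np.sqrt(2) else "type-I")

# representative order-of-magnitude values only (nm)
for name, lam_nm, xi_nm in [("clean Al (metallic)", 16, 1600),
                            ("YBCO, in-plane (HTS)", 150, 1.5)]:
    kappa, kind = classify(lam_nm, xi_nm)
    print(f"{name}: kappa = {kappa:g}  ->  {kind}")
```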
Another important observation is the temperature dependence ξ(T) = ξ(0)(1 - T/T_c)^-1/2, which implies that ξ(T) grows from its minimal value ξ(0) at T = 0 and diverges as T_c is approached. Ginzburg-Landau theory predicts the same temperature dependence for λ(T) and, thus, κ is temperature-independent near T_c.

In what follows, we will mainly discuss thin films of superconductors. In many situations, these can be considered three-dimensional (3D) materials, as long as the coherence length perpendicular to the surface is smaller than the film's thickness t_z. However, from the temperature dependence of ξ(T) it is evident that close to T_c a transition to two-dimensional (2D) behavior might occur in very thin films when ξ(T) > t_z.

Another issue becomes important for film thicknesses t_z ≲ λ(T). Then λ(T) has to be replaced by an effective penetration depth (also called Pearl length) Λ(T) = 2λ(T)^2/t_z. Since λ(T) (or Λ(T), respectively) controls the range of interactions between vortices, the relevant length scales can become macroscopic in very thin films. In any case, vortices separated by distances smaller than the (effective) penetration depth will behave as collective ensembles, occasionally termed `vortex matter'. Note that in a magnetic field applied perpendicular to the surface of a thin superconductor, large demagnetization effects lead to a complete penetration of the magnetic flux and to suppression of the Meissner screening.

Vortex physics is a fascinating and complex field of research, and only an introductory glimpse can be presented here to underpin the key issues related to nanostructured superconductors. For deeper insights, see <cit.> in this Encyclopedia and the review by <cit.>.

Confining the superconducting condensate to the range of the length scales introduced above inevitably means that superconductors need to be structured to dimensions of a few μm and, primarily, to the nanoscale. The use of thin films and lateral nanostructuring allows one to substantially change many physical properties and to create novel phenomena that are absent in the pristine material. For example, the controlled manipulation of vortices by anchoring them to engineered defects, guiding their motion, or building artificial vortex matter plays a significant role. Engineered vortex systems are essential for creating model systems for basic research and enable the construction of almost dissipationless and ultrafast electronic devices based on vortex manipulation, the so-called fluxonics.

Taking advantage of Josephson effects in weakly-coupled superconductors requires barriers of only a few nm in width. The Josephson junction is the central building block for sensitive magnetic field sensors, rapid single flux quantum logic circuits, THz frequency generators, and, last but not least, superconducting quantum computing.

This chapter focuses on the nanostructuring of HTS, which offer easy operation using cryocoolers or liquid nitrogen. However, the brittle nature and complex crystallographic structures of HTS make the fabrication of nanopatterns a challenging endeavor and call for new techniques. Many of these concepts have been developed since the first edition of this Encyclopedia. For a detailed overview of nanostructured metallic superconductors, the reader is referred to the book of <cit.>.

§ NANOFABRICATION TECHNIQUES

§.§ Lithography and ion-beam milling

The standard techniques for nanopatterning of superconductor structures focus on thin-film processing.
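The length scales that any of the following patterning techniques must reach can be read off directly from the expressions above; the short sketch below evaluates ξ(T) and the Pearl length Λ(T) for an illustrative YBCO-like thin film. The input values (ξ(0), λ(0), T_c, t_z) are representative assumptions, not fitted parameters.

```python
import numpy as np

def gl_length(T, val0, Tc):
    """Ginzburg-Landau scaling  x(T) = x(0) (1 - T/Tc)^(-1/2)  for xi or lambda."""
    return val0 / np.sqrt(1.0 - T / Tc)

def pearl_length(T, lambda0, Tc, t_z):
    """Effective penetration depth Lambda(T) = 2 lambda(T)^2 / t_z of a thin film."""
    return 2.0 * gl_length(T, lambda0, Tc) ** 2 / t_z

# illustrative YBCO-like film: xi(0) = 1.5 nm, lambda(0) = 150 nm, Tc = 90 K, t_z = 10 nm
xi0, lambda0, Tc, t_z = 1.5, 150.0, 90.0, 10.0
for T in (77.0, 85.0, 89.0):
    xi_T = gl_length(T, xi0, Tc)
    regime = "2D" if xi_T > t_z else "3D"
    print(f"T = {T:4.1f} K:  xi = {xi_T:5.1f} nm ({regime} for t_z = {t_z:.0f} nm), "
          f"Pearl length = {pearl_length(T, lambda0, Tc, t_z) / 1e3:7.1f} um")
```

Close to T_c, both the 2D condition ξ(T) > t_z and a macroscopic Λ(T) are quickly reached, which is why vortex interactions in very thin films extend over many periods of an artificial pinning array.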
One of the most commonly used methods is photo- or electron-beam lithography, which is well-known for the fabrication of semiconductor devices. After growing thin films on a suitable substrate and depositing a photoresist layer on top, the planar structures are defined by illuminating the photoresist through a mask or directly processing it with an electron-beam (e-beam) writer. While the wavelength of the light limits the former method, e-beam lithography can provide resolutions at the order of 10 nm but is a slow sequential technique.Unfortunately, the subsequent etching processes of both photoresist and thesuperconductor result in significant degradation of resolution. Another issue is that etching artifacts, such as underetching below the photoresist layer, progressively limit resolution as the thickness of the superconducting film increases. Thus, patterns in metallic superconductors are generally reported with lateral structures on the order of 100 nm. For example, a vibrant application is the fabrication of single photon detectors <cit.> that commonly consist of few-nm thick NbN or NbTiN films patterned to long stripes, folded into meandering patterns to increase the active area <cit.>. The above limitations are of particular importance for the reproducible fabrication of constriction-type Josephson junctions (Dayem bridges), for which a resolution of a few nanometers would be required.For HTS, the situation is even more complicated because they are brittle and have a complex crystallographic structure. Lithographic techniques combined with etching steps have been successfully applied to create patterns with several hundred nanometers features sizes <cit.>. The combination of e-beam lithography followed by argon-ion milling allowed the production of patterns with a minimum line width of 25 nm <cit.>. Since this is still too large to fabricate Josephson junctions directly, alternative fabrication methods, such as growing the thin film over a step-edge in the substrate, must be employed <cit.>. §.§ Electromigration An elegant way to further narrow the cross-section of superconductor bridges fabricated by conventional lithography is based on electromigration. This usually undesirable process, combining a local temperature rise and a high electric field, leads to a displacement of atoms from their original crystal-lattice positions. Closed-loop controlled electromigration, however, can avoid adverse effects. With the gradual reduction of the cross-section of aluminum constrictions to ≲ 150 nm^2, a geometry-induced transition from thermally-assisted phase slips to quantum phase slips has been reported <cit.>. Other applications of electromigration include the tuning of Nb superconducting quantum interference devices (SQUIDs) <cit.> and the controlled migration of oxygen atoms in YBa_2Cu_3O_7 (YBCO) bridges <cit.>. §.§ Templating strategies In bottom-up fabrication methods, the nanostructures are already predefined during the growth process. Since patterning the substrate material is often less challenging, several methods have been developed that take advantage of the controlled imperfection in the substrate to introduce the desired structures into the superconductor. One example is the self-organized growth of highly ordered porous alumina, a membrane-like system consisting of triangular arrays of pores. 
A superconducting Nb thin film deposited directly on the alumina template with a pore spacing of 50 nm acts as a nanoscale array for vortex pinning <cit.>.Nanowires of metallic superconductors less than 10 nm in diameter and up to 1 μm in length can be fabricated by `molecular templating' <cit.>. After a freestanding carbon nanotube is placed over a narrow and deep slit in a Si substrate, a superconductor such as MoGe is sputtered onto the entire arrangement. The resulting layer consists of the electrodes connected by the nanowire, which are all made of the same material and thus have excellent contact resistances. Subsequently, the properties of the nanowire can be further fine-tuned in a transmission electron microscope or by applying voltage pulses.Another technique uses vicinally cut substrates with a periodic nanoscale step structure of the clean substrate surface. YBCO films grow on such substrates in a roof-tile-like arrangement of the copper-oxide layers by self-assembly. The morphology and microstructure of such vicinal films strongly depend on the miscut angle of the SrTiO_3 substrate and the thickness of the on-top grown YBCO <cit.>, Bi_2Sr_2CaCu_2O_8 <cit.>, and Hg_1-xRe_xBa_2CaCu_2O_6+δ (Re: rare earth element) <cit.> films. The linear arrangement of dislocations resulting from the step structure in YBCO leads to an exceptionally high critical current density. Moreover, symmetry breaking in such vicinal films also allows experimental access to out-of-plane properties of HTS, such asresistivity, Hall effect <cit.>, photoconductivity <cit.>, and channeling of vortex strings along the ab-plane <cit.>.Strategies based on substrate modification are very versatile. Nanostructures can be deposited on the substrate prior to the deposition of a superconductor film to modify its structural and superconducting properties. A variety of interactions can be tailored with thickness modulation of the superconductor by insulating dots, proximity effects caused by metallic dots, or magnetic interactions with ferromagnetic dot arrays that interact with the vortex lattice in the superconductor <cit.>. §.§ Ion-induced nanostructures The previously discussed patterning techniques have several disadvantages, especially when applied to HTS. First, all methods that remove material result in open side faces of the HTS, allowing the relatively mobile oxygen ions to escape from the crystallographic framework and thus lowering the oxygen content. In most cases, this leads to a degradation of the superconducting properties. On the other hand, the brittle nature of HTS limits the stability of the remaining material structures, making it challenging to achieve sub-μm resolution.Irradiation of HTS with electrons, protons, and light or heavy ions provides an alternative route to modify the properties of HTS while leaving the surface of the material nearly intact. Before going into further details, a brief account of the limiting parameters is in order. A first consideration concerns the penetration range of irradiation. Only neutrons and high-energy ions can penetrate deeply enough into bulk materials <cit.>. Irradiation with swift heavy ions produces columnar tracks of a few nm diameters in the HTS. The crystallographic structure is destroyed, and a non-superconducting amorphous channel remains <cit.>. Such a columnar defect with a diameter similar to the in-plane coherence length is ideal for blocking the motion of vortices. This enhanced pinning of vortices leads to a higher critical current. 
These disordered arrangements of line-shaped pinning centers are not only very important for applications but have also triggered the theoretical and experimental study of novel phases of vortex matter, such as the vortex <cit.> and Bose <cit.> glasses.A more subtle method of tailoring superconducting properties is to create point defects while leaving the crystallographic framework intact. They can be produced by irradiation with electrons, protons, or light ions. Point defects are imperfections at the atomic level that occur when the energy transfer from the incident particle to the crystal is on the order of the energy required to form vacancies. At energies up to a few MeV, the incident particle collides with and displaces a nucleus, eventually creating a collisional cascade that propagates through the material. The tradeoff in the choice of parameters stems from the fact that lighter particles, such as electrons, require extremely high fluence to achieve any appreciable change of the superconducting properties. In contrast, heavier particles have limited penetration depths and more significant scattering of the collisional cascades <cit.>.A suitable candidate for the controlled fabrication of point defects in thin films of the most commonly used HTS, YBCO, are He^+ ions of moderate energy. The superconducting properties are tailored by displacing mainly oxygen atoms, which are more loosely bound than the other atomic species, with binding energies of 1 … 2 eV for the chain atoms O1 and about 8 eV for the O2/O3 atoms in the CuO_2 planes, as shown in Fig. 1(a). Their displacement creates Frenkel defects at interstitial positions <cit.> outlined in Fig. 1(b), leading to a controllable decrease <cit.> or complete suppression of T_c <cit.>, depending on the ion fluence. Since the charge transfer from the displaced atoms is still operational, Frenkel defects do not lead to a significant change in the carrier density.Another process is responsible for reducing T_c: In classical superconductors with s-wave symmetry of the superconducting gap, the introduction of point defects has little effect on T_c and is even used technically to improve the critical current. In contrast, the anisotropic d-wave nature of the gap in HTS makes it susceptible to tiny defects that reduce not only the normal-state conductivity and carrier mobility but also T_c <cit.>.However, a uniform statistical distribution of point defects is not very helpful. By focusing the ion beam before it hits the surface of the HTS film, one can create nearly cylindrical domains populated with point defects. These columnar defects (CDs) form a landscape where superconductivity is locally suppressed. Lateral modulation of ion fluence can be achieved mainly by two different methods outlined in Fig. <ref>.§.§.§ Masked ion irradiation of thin films An extensive array of multiple ion beams can be created by masking a wide-field collinear ion beam, which is commonly available with ion implanters, as schematically shown in Fig. <ref>(a). An HTS film, thinner than the ion's penetration depth, is fabricated on a suitable substrate. The mask protects selected areas of the HTS film from irradiation and exposes the other sample parts to irradiation. While the former remain superconducting, the T_c of the latter is reduced or suppressed depending on the applied ion energy and fluence. Notably, the collision cascades widen due to the straggling of ion trajectories within the HTS so that the mask patterns become blurred with increasing depth. 
As a result, the lateral resolution is typically limited to about 10 nm as indicated by simulations with 75 keV He^+ irradiation <cit.>.Different techniques were used to create the mask. Either a photoresist layer was deposited on the HTS and processed with standard UV <cit.>, e-beam, <cit.>, or focused ion beam <cit.> lithography, then etched and used as the mask, or a metal layer was deposited directly on the YBCO film and then patterned by ion beam milling <cit.>. Alternatively, a Si stencil mask is fabricated and mounted at a small distance from the superconductor film <cit.>. This method allows the mask to be reused, does not require the multiple processing steps associated with photoresist, and avoids potential surface degradation.The main advantage of masking techniques is the parallel processing of many structures. Shortcomings are the resolution limitations resulting from the preparation of the mask, and in the case of freestanding masks, there are geometrical limitations, e.g., disconnected blocking elements are not possible.§.§.§ Focused ion beam modification of thin films In contrast to conventional focused ion beam (FIB) machines, which use Ga to ablate the material, devices for focused ion irradiation with light ions have only recently become available. The helium ion microscope (HIM) <cit.> combines scanning focused ion beam sources for He^+ and Ne^+ ions and imaging via secondary electron detection. The HIM consists of a gas-field ion source that emits ions from a tip containing only three atoms (the trimer), electrostatic ion optics to focus and trim the beam, and a deflection system that moves the beam across the sample stage with optional blanking (Fig. <ref>). Because of the high source brightness of the trimer and the short de Broglie wavelength of He^+ ions, an image resolution better than 0.5 nm and an unprecedented depth of focus can be achieved <cit.>.Using a He or Ne beam instead of the conventional Ga–FIB technique, contamination of the target material by Ga ions is avoided and achieves higher lateral resolution. For example, nanopores as small as 1.3 nm in diameter have been fabricated in 1 nm-thick carbon nanomembranes <cit.>. As initial applications in HTS, thin barriers of insulating material, written across prepatterned YBCO microbridges with the focused ion beam, form Josephson junctions <cit.>, and ultradense arrays of CDs build a complex pinning landscape for vortices <cit.>. At the time of writing, hexagonal arrays of CDs with spacings as small as 20 nm had been fabricated in YBCO thin films <cit.>.The use of He–FIB relies on the weak bonding of oxygen in HTS and cannot be employed in the same way for other superconductors. However, a focused Ne beam offers a compromise between Ga–FIB and He–FIB for direct milling of metallic superconductors. This has been demonstrated, for example, for patterning constrictions in NbN films to form nanowires <cit.>.Other techniques for growing superconductor nanostructures using focused electron and ion beams are discussed in this Encyclopedia by <cit.>, and the properties of superconducting microtubes and nanohelices are examined by <cit.>.§ PROPERTIES OF NANOPATTERNED SUPERCONDUCTORS Nanostructuring of superconductors enables selective tailoring of the superconducting condensate on length scales smaller than the London penetration depth, mainly by creating lateral structures. 
Such patterns allow controlled interaction of vortices, their manipulation, as well as tunneling effects between two superconducting condensates weakly coupled by a small non-superconducting interlayer. Some of these applications are described below. §.§ Vortex pinning arrays Lateral nanostructuring of superconducting films with regular arrays of CDs allows the creation of artificial pinning landscapes that lead to commensurability effects with the flux line lattice. This occurs at the so-called matching fields B_k = k Φ _0/A, where k is an integer number of pinning sites (or a rational number of vortices) in the unit cell of the pinning array of area A. For square lattices A = a^2 and for hexagonal patterns A = √(3) a^2/2 with a the nearest neighbor spacing of the CDs.The number of flux quanta that can be trapped inside a cylindrical hole or in a non-superconducting pinning site depends on its diameter r_p and the coherence length ξ(T) of the superconductor. For anisotropic layered materials such as HTS, the in-plane coherence length ξ_ab(T) is relevant. Once all pinning sites are filled by one flux quantum, the formation of multiquanta vortices becomes energetically favorable when r_p exceeds a certain critical radius <cit.>. The saturation number of trapped flux quanta can be estimated by n_S ≃ r_p/2ξ(T) <cit.>.A special situation arises for blind holes in a superconductor, i.e., holes that do not completely penetrate the material. The remaining superconducting bottom layer carries the screening currents of individual vortices trapped in the blind hole. Although several flux quanta are trapped in the blind hole, individual vortices can still be made visible <cit.>.Multiquanta vortices are a phenomenon that occurs only in nanostructured superconductors and does not exist in pure homogeneous superconductors. Conversely, when small pinning sites cannot accommodate the total flux, the excess flux is forced to enter the material via interstitial vortices. The various commensurability effects in a pinning landscape for different magnetic fields applied orthogonally to the film surface are illustrated in Fig. <ref>, which is based on experimental results by Lorentz microscopy of vortices in a Pb film patterned with a square array of tiny blind holes. For filling factors k < 1, the flux quanta are arranged in a superlattice with respect to the pinning array; for k = 1, each hole is filled with precisely one flux quantum, and for k = 2, the excess flux is taken up by interstitial vortices.The commensurability effects manifest themselves in various physical parameters, such as steps in the magnetization loops and peaks in the critical current as indicators of enhanced pinning forces. Minima in the resistance versus magnetic field curves point to commensurability effects of moving vortex ensembles.Fig. <ref> shows an example of the dramatic change of the magnetization loop M(H) after perforating a 60 nm-thick WGe film with a square array of holes 340 nm in diameter and 1 μm apart<cit.>. The area of the M(H) loop in the perforated film is massively increased due to the holes acting as artificial pinning centers. Distinct cusps in the loop appear at the matching fields B_k = k × 2.07 mT, indicating the trapping of multiquanta vortices. The staircase-like reduction of M(H) with increasing field results from the few excess vortices that appear after filling the holes with k vortices. These are initially repelled by the trapped vortices and can move in the interstitial region with higher mobility. 
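The matching fields and the saturation number introduced above are simple to evaluate numerically. The sketch below reproduces the first matching field of the 1-μm square hole array and shows why nanometric pinning-site spacings push B_1 into the tesla range; the values of r_p and ξ in the last line are purely illustrative.

```python
import numpy as np
from scipy.constants import h, e

Phi0 = h / (2 * e)                                   # flux quantum in Wb (T m^2)

def matching_field(a_m, lattice="square", k=1):
    """k-th matching field B_k = k Phi_0 / A for a pinning array with spacing a (m)."""
    A = a_m ** 2 if lattice == "square" else np.sqrt(3.0) / 2.0 * a_m ** 2
    return k * Phi0 / A

def saturation_number(r_p_nm, xi_nm):
    """Estimate n_S ~ r_p / (2 xi(T)) of flux quanta held by a pinning site of size r_p."""
    return r_p_nm / (2.0 * xi_nm)

print(f"B_1, square array, a = 1 um:      {matching_field(1e-6) * 1e3:6.2f} mT")
print(f"B_1, hexagonal array, a = 30 nm:  {matching_field(30e-9, 'hexagonal'):6.2f} T")
print(f"n_S for r_p = 100 nm, xi = 10 nm: {saturation_number(100, 10):4.1f}")
```

The first two lines match the 2.07 mT cusps of the perforated WGe film and the several-tesla matching fields attainable with dense columnar-defect lattices in HTS, as discussed below.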
As soon as the number of the excess vortices is large enough, they fill up the holes to k+1 multiquanta vortices.When the diameter of holes in a superconducting film is increased to a size close to their spacing, a gradual change from a pinning lattice to a multiply connected network of superconducting wires takes place. Such wire networks exhibit a modulation of the critical temperature with the magnetic flux, the Little-Parks effect <cit.>. A further enlargement of the holes leads to the formation of disconnected superconducting islands with intriguing properties <cit.>.In HTS, commensurability effects can be preferably detected by electric transport measurements, e.g., in YBCO thin films patterned with a square lattice of holes <cit.>. However, the situation is more complicated because YBCO thin films have strong inherent pinning by crystallographic microtwinning, screw dislocations, and other intrinsic defects. The competition between pinning on these immanent defects and trapping vortices on engineered pinning sites requires that the latter be relatively dense, with a spacing of ≲ 300 nm. Such a resolution is hard to achieve by lithographic methods but is within reach of masked or focused ion beam modification, which can raise the magnitude of the matching fields into the range of several tesla.A demonstration of vortex commensurability effects at high magnetic fields is shown in Fig <ref>(a) in a YBCO film with a dense hexagonal pinning lattice with a = 30 nm spacing. Since a is much smaller than the Pearl length Λ(0), peaks in the critical current caused by enhanced pinning of vortices can be observed at temperatures far below the superconducting transition. Matching effects of mobile vortices at higher temperatures can be traced as dips in the resistivity, as shown in Fig. <ref>(b). Moreover, the preferential trapping of vortices within the CDs can be confirmed by tilting the applied magnetic field away from the axes of the CDs by an angle α as illustrated in the inset of Fig. <ref>(b). Then, the position of the matching dips scales with the magnetic field component parallel to the axes of the CDs. These effects prevail up to high tilt angles of α≤ 70^∘ <cit.>. §.§ Complex pinning landscapes When moving from pinning lattices with hexagonal or square arrangements to more complex periodic or aperiodic tilings, numerous unusual phenomena occur. Some examples are shown in Fig. <ref>. Such patterns have been studied theoretically <cit.> and experimentally in metallic superconductors with holes <cit.> and magnetic dots <cit.>, like Penrose <cit.>, honeycomb <cit.> and Kagomé <cit.> lattices and artificial vortex ice arrangements in geometrically frustrated pinning lattices <cit.>.In HTS, such studies are more challenging and could be hampered by strong intrinsic pinning. However, complex pinning structures lead to competition between the pinning forces at the CDs and the elastic energy of the vortex lattice, attempting to restore the natural hexagonal vortex configuration of a clean superconductor. For example, pinning landscapes that force the magnetic flux quanta in an ice-like flux arrangement due to a geometrically frustrated energy landscape can transition to a periodic flux distribution at higher temperatures, thawing the vortex ice <cit.>.Another example of a complex pinning landscape, the quasi-Kagomé tiling, shows an unconventional commensurability effect. 
At elevated temperatures, all pins are occupied by vortices, and one interstitial vortex is magnetically caged in each void of the lattice. The balance between the pinning forces exerted by the CDs and the vortex caging potential can be tuned with the temperature <cit.>. Controlled manipulation of such magnetically confined vortices can pave the way toward fluxonic applications of HTS. §.§ Vortex ratchets and flux diodes By introducing spatial asymmetry into the pinning arrays, many exciting effects emerge. In general, the idea of a `Brownian motor' refers to Brownian motion in combination with an unbiased external input signal that can induce submicron directional motion of particles <cit.>. A well-known example is the directional propagation of vortices in an appropriately structured superconductor, usually referred to as a `vortex ratchet' or `fluxon pump' <cit.>. It provides a flexible and well-controlled model system for studyingstochastic transport processes and can operate up to THz frequencies <cit.>.Vortex ratchets can be realized by various concepts, such as 2D asymmetric channel walls. They can be further extended to design `fluxon optics' devices, concave/convex fluxon lenses that disperse/concentrate fluxons in nanodevices <cit.>. Also, asymmetric potential barriers, e.g., in the form of square arrays of triangular pinning centers <cit.>, double-well traps, <cit.> or an asymmetric arrangement of symmetric traps, <cit.> lead to vortex rectification effects.Interestingly, ratchet effects are proposed in binary particle mixtures without needing for an asymmetric substrate <cit.>. For example, in layered superconductors, such as HTS, an inclination of the magnetic field from the c axis leads to a mixture of pancake vortices and Josephson strings. This hybrid vortex system exhibits ratchet effects with time asymmetric drives <cit.>.Finally, understanding ratchet effects in different systems is an exciting topic. Studying these effects with fluxons provides a more direct and controllable experimental approach than most other systems.Ultimately, these investigations may pave the way to cellular automata as an alternative concept <cit.> for performing clocked logic operations on discrete particles. Moreover, vortex ratchets are proposed <cit.> as an effective method for evacuating magnetic flux from superconducting devices where inadvertently trapped flux might be detrimental for operation, such as in SQUID magnetic sensors.A related concept is the lossless superconducting diode. This is an electronic device that has zero resistance only for one direction of applied current and is a desirable device for building electronic circuits with ultra-low power consumption. It can be realized as a superconducting film patterned with a conformal array of nanoscale holes that breaks spatial inversion symmetry <cit.>. A conformal array is a structure resulting from the transformation of a uniform hexagonal pinning array that retains the sixfold order of the original lattice but exhibits a gradient in site density <cit.>. §.§ Guided vortex motion Controlled vortex motion along a predefined path can be achieved by patterning narrow channels or parallel rows of defects into a superconductor. This so-called `guided vortex motion' results from an easy track in which vortex pinning is reduced <cit.>, a row of holes <cit.>, or, in HTS, the suppression of superconductivity by heavy-ion irradiation <cit.>. 
Guided vortex motion can be detected by the deflection of vortices from their trajectory imposed by the Lorentz force.This leads to a pronounced transverse voltage <cit.> that can be distinguished from the conventional Hall effect by its even symmetry with respect to the reversal of the magnetic field. Guided vortex motion also plays an important role in experiments where vortices are accelerated to high velocities, as discussed by <cit.>. §.§ Josephson junctions A ubiquitous need for nanostructuring of superconductors arises from the fabrication of Josephson junctions (JJs), where two superconducting systems are separated by an insulating or metallic barrier of a few nm thickness or by another weak coupling link. It is crucial that the Josephson weak links are stable and can be reproducibly fabricated. While for metallic superconductors the industrial fabrication of circuits based on JJs has been established for many years <cit.>, the fabrication of JJs in HTS is much more challenging. It is still in the phase of cumulative progress.JJs consisting exclusively of HTS materials have been produced by introducing a crystallographic fault (break junctions), by using a grain boundary between different crystallites, by growing thin HTS films over a substrate step, by narrow constrictions (Dayem bridges), or by multilayers forcing the current along the c-axis <cit.>. Here we restrict ourselves to discussing JJs in HTS produced by masked or focused ion irradiation. A general account of this subject can be found in the chapter by <cit.>.Several attempts have been made to fabricate JJs by irradiation techniques. Initially, direct writing of narrow lines with an electron beam across a pre-patterned YBCO bridge in a scanning electron microscope resulted in somewhat unstable JJs and required high electron doses. Later, creating a weak link in YBCO bridges by implanting oxygen ions through a lithographically defined mask led to superconducting-normal-superconducting (SNS) JJs with resistively shunted junction (RSJ) properties <cit.>. However, the minimum width of the mask structures of about 20 nm, and the inevitable straggle of ion collision cascades in the YBCO film set resolution limits and prevent fabrication of superconductor-insulator-superconductor (SIS) junctions, which would be the ultimate goal.A similar technique, but using 200 keV Ne^+ ions, allows complete penetration of the ions through the YBCO film, avoiding implantation and limiting the intended damage to the creation of point defects <cit.>. This technique can be scaled up to integrating of many JJs in a 2D series-parallel array; 15,820 (28 × 565) JJs have been demonstrated <cit.>.Focused He^+ ion irradiation in a HIM took the fabrication of JJs one step further. Tunnel junctions can be directly written with the focused He-ion beam into YBCO films, as illustrated in Fig. <ref>. The properties of the barrier are controlled by varying the irradiation dose. With this technique, SIS junction can also be realized <cit.>. Scanning transmission electron analysis shows that the amorphous tracks created by 1500 He^+/nm have a lateral extension of 4 nm, while no destruction of the crystallographic structure is observed at lower fluence. Nevertheless, the devices produced with the lower doses show an explicit JJ behavior <cit.>.The He–FIB technique can, in principle, be applied to other HTS, as has been demonstrated for La_1.84Sr_0.16CuO_4 <cit.> and other superconducting materials such as MgB_2 <cit.>. 
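The transport characteristics of such weak links are commonly summarized by the overdamped resistively shunted junction (RSJ) model mentioned above. A minimal sketch of its time-averaged current-voltage curve is given below, with I_c and R_N chosen purely for illustration rather than taken from any of the cited devices.

```python
import numpy as np

def rsj_voltage(I, Ic, Rn):
    """Time-averaged voltage of an overdamped RSJ: V = 0 for |I| <= Ic,
    V = sign(I) * Rn * sqrt(I^2 - Ic^2) for |I| > Ic."""
    I = np.asarray(I, dtype=float)
    V = np.zeros_like(I)
    mask = np.abs(I) > Ic
    V[mask] = np.sign(I[mask]) * Rn * np.sqrt(I[mask] ** 2 - Ic ** 2)
    return V

# illustrative junction parameters (not measured values): Ic = 50 uA, Rn = 5 Ohm
Ic, Rn = 50e-6, 5.0
I_vals = np.linspace(-150e-6, 150e-6, 7)
for i, v in zip(I_vals, rsj_voltage(I_vals, Ic, Rn)):
    print(f"I = {i * 1e6:7.1f} uA  ->  V = {v * 1e6:8.1f} uV")
```

The zero-voltage branch below I_c and the smooth approach to the ohmic asymptote above it are the hallmarks by which RSJ-like behavior of ion-irradiated junctions is usually identified.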
The versatility of directly written structures and JJs provides a tremendous advantage in fabricating superconducting quantum interference devices <cit.>, JJ arrays <cit.>, and even more complex devices on the same substrate for various applications.

§ CONCLUSIONS AND OUTLOOK

In this chapter, we have outlined various methods for fabricating nanostructured superconductors. While established lithographic and ion-milling techniques enable the patterning of metallic superconductors, novel techniques are required for nanoscale structures in copper-oxide superconductors. For example, focused He^+-ion beam irradiation creates columnar channels of point defects and can be used to create vortex pinning landscapes and weak links for Josephson junctions. Superconducting nanostructures are already indispensable for many applications and will bring further significant advances in superconducting electronics, from fluxonics to ultra-high sensitivity magnetometers and many others. Beyond that, the dawning age of superconducting quantum computing depends heavily on precise and reproducible nanostructured circuits on a large scale.

§ ACKNOWLEDGMENTS

This work was supported by the Austrian Science Fund (FWF), grant I4865-N, and is based upon work from COST Actions CA16218 (NANOCOHYBRI), CA19108 (Hi-SCALE), and CA19140 (FIT4NANO), supported by COST (European Cooperation in Science and Technology).

§ REFERENCES

Aichner, B., Mletschnig, K.L., Müller, B., Karrer, M., Dosmailov, M., Pedarnig, J.D., Kleiner, R., Koelle, D., Lang, W., 2020. Angular magnetic-field dependence of vortex matching in pinning lattices fabricated by focused or masked helium ion beam irradiation of superconducting YBa_2Cu_3O_7-δ thin films. Low Temp. Phys. 46, 331–337. https://doi.org/10.1063/10.0000863 [Fiz. Nizk. Temp. 46, 402–409].
| http://arxiv.org/abs/2310.18232v1 | {
"authors": [
"Wolfgang Lang"
],
"categories": [
"cond-mat.supr-con",
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
],
"primary_category": "cond-mat.supr-con",
"published": "20231027161353",
"title": "Nanostructured Superconductors"
} |
Harvard University Department of Mathematics. E-mail: [email protected]

Harvard University School of Engineering and Applied Sciences. E-mail: [email protected]

Harvard University Center of Mathematical Sciences and Applications. E-mail: [email protected]

Harvard University Department of Mathematics. E-mail: [email protected]

We consider certain large random matrices, called random inner-product kernel matrices, which are essentially given by a nonlinear function f applied entrywise to a sample-covariance matrix, f(X^TX), where X ∈^d × N is random and normalized in such a way that f typically has order-one arguments. We work in the polynomial regime, where N ≍ d^ℓ for some ℓ > 0, not just the linear regime where ℓ = 1. Earlier work by various authors showed that, when the columns of X are either uniform on the sphere or standard Gaussian vectors, and when ℓ is an integer (the linear regime ℓ = 1 is particularly well-studied), the bulk eigenvalues of such matrices behave in a simple way: They are asymptotically given by the free convolution of the semicircular and Marčenko–Pastur distributions, with relative weights given by expanding f in the Hermite basis. In this paper, we show that this phenomenon is universal, holding as soon as X has i.i.d. entries with all finite moments. In the case of non-integer ℓ, the Marčenko–Pastur term disappears (its weight in the free convolution vanishes), and the spectrum is just semicircular.

Date: October 27, 2023

Keywords and phrases: random inner-product kernel matrices, nonlinear random matrices, free convolution, orthogonal polynomials, polynomial regime

2020 Mathematics Subject Classification: 60B20, 15B52

§ INTRODUCTION

§.§ Our results

In this paper, we give a common global law for the spectra of two related families of real-symmetric random matrices which are in some sense nonlinear. Our matrices A = A_N ∈^N × N and A = A_N∈^N × N, called random inner-product kernel matrices, have the entrywise form

A_ij = 1/√(N) f( X_i,X_j/√(d)) if i ≠ j, 0 if i = j,

A_ij = 1/√(N) f( √(d)X_i,X_j/X_iX_j) = 1/√(N) f( X_i,X_j/√(d)√(d)/X_i√(d)/X_j) if i ≠ j and X_i ≠ 0 ≠ X_j, 0 otherwise,

and “give a global law” means that we find a deterministic measure ρ that is the almost-sure weak limit of the empirical spectral measures ρ_N = 1/N∑_i=1^N δ_λ_i(A), ρ_N = 1/N∑_i=1^N δ_λ_i(A), where (λ_i(A))_i=1^N (resp., (λ_i(A))_i=1^N) are the eigenvalues of A (resp., of A). Here f : → is some fixed function, such as ReLU, representing the nonlinearity; d and N are parameters tending simultaneously to infinity in the so-called polynomial regime where d^ℓ≍ N for some ℓ > 0; and the i.i.d. vectors (X_i)_i=1^N have i.i.d. components drawn from some fixed μ, which is a centered probability measure on with unit variance. Informally speaking, this normalization implies that X_i,X_j/√(d) is order-one, and √(d)/X_i≈ 1 + (1), so that A and A are entrywise quite close. Indeed, we show that they have the same global law (i.e., ρ_N and ρ_N tend to the same ρ). Our main result extends that of <cit.>, which studies this model when μ is Gaussian measure, so that A deals with Gaussian vectors and A deals with vectors which are uniform on the sphere.
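These definitions are straightforward to simulate directly. The following minimal sketch is not part of the original analysis: the Rademacher choice of μ, the centered-ReLU nonlinearity, and the values d = 40, ℓ = 1.5 are placeholders chosen only for illustration. It builds both matrices from i.i.d. data and extracts their eigenvalues; a histogram of these eigenvalues approximates the empirical spectral measure discussed above.

```python
import numpy as np

def kernel_matrices(X, f):
    """Build the unnormalized and normalized kernel matrices from the columns of X.

    X has shape (d, N); the (i, j) entry of the unnormalized matrix is
    f(<X_i, X_j>/sqrt(d))/sqrt(N) for i != j, and 0 on the diagonal.
    The normalized variant rescales each column to length sqrt(d) first.
    """
    d, N = X.shape
    G = X.T @ X / np.sqrt(d)                        # inner products <X_i, X_j>/sqrt(d)
    cols = X / np.linalg.norm(X, axis=0)            # columns rescaled to unit norm
    Gn = np.sqrt(d) * cols.T @ cols                 # sqrt(d) <X_i, X_j>/(||X_i|| ||X_j||)
    A, An = f(G) / np.sqrt(N), f(Gn) / np.sqrt(N)
    np.fill_diagonal(A, 0.0)                        # diagonal entries are set to zero
    np.fill_diagonal(An, 0.0)
    return A, An

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, ell = 40, 1.5                                # polynomial regime: N of order d^ell
    N = int(d ** ell)
    X = rng.choice([-1.0, 1.0], size=(d, N))        # centered, unit-variance entries (Rademacher)
    # ReLU minus its Gaussian mean, so the degree-0 Hermite coefficient vanishes (illustrative choice)
    f = lambda x: np.maximum(x, 0.0) - 1.0 / np.sqrt(2.0 * np.pi)
    A, An = kernel_matrices(X, f)
    eigs, eigs_n = np.linalg.eigvalsh(A), np.linalg.eigvalsh(An)
    print("spectral range of A: ", eigs.min(), eigs.max())
    print("spectral range of the normalized variant:", eigs_n.min(), eigs_n.max())
```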
In the current work, we show that their result is in fact universal in μ, holding as soon as μ has all finite moments.If the function f happens to be linear, then A is just a sample covariance matrix with the diagonal set to zero. (Zeroing the diagonal keeps the spectrum of A from translating off to infinity; see Remark <ref>.) For general f, then, the matrix A is an entrywise nonlinear function of a sample covariance matrix, scaled so that the spectrum is order one, and so that the nonlinearity f typically has order-one arguments. Furthermore, since f is applied entrywise, any expansion f(x) = ∑_k c_k h_k(x) induces a corresponding expansion A = ∑_k c_k A_k.The fundamental observation of Cheng and Singer <cit.> is that one should take (h_k)_k=1^∞ to be an appropriate sequence of orthogonal polynomials, usually the Hermite polynomials. In this language, the main result of, say, <cit.> is essentially a rigorous version of the following heuristics (more precisely, the version just for integer ℓ): Once placed in this basis, * the matrices A_k in (<ref>) are approximately independent;* the low-degree matrices (A_k)_k=0^⌈ℓ⌉ - 1 are essentially low-rank, so do not affect the global law;* if ℓ is an integer, the matrix A_ℓ is essentially a sample-covariance matrix, so its global law is given by the Marčenko-Pastur distribution; but if ℓ is not an integer then there is no matrix A_ℓ;* each of the high-degree matrices (A_k)_k=ℓ_c^∞, where ℓ_c is the least integer strictly bigger than ℓ, has an asymptotically semicircular distribution, because they are essentially degenerate sample-covariance matrices, in the parameter limit in which the Marčenko-Pastur distribution degenerates to the semicircle law.As a consequence, the limiting measure ρ is the free (additive) convolution of the Marčenko-Pastur and semicircular distributions, with weights given by the coefficients of f in its Hermite expansion, when ℓ is an integer; and just a semicircular distribution without a Marčenko-Pastur part, still with Hermite weights, when ℓ is not an integer.In the work <cit.>, the authors intially consider vectors X_i which are uniform on the sphere, for which certain classical algebraic identities simplify the problem. Roughly speaking, the model is linearized by spherical harmonics, making it easier to see the structure of the four-step heuristic above. Then they extend from spherical to Gaussian vectors by comparison. For general μ, these algebraic identities are not available. Instead, our proof relies essentially on a “pre-processing” step, identifying by hand which parts of A (resp. A) are errors, replacing A (resp. A) by an error-free matrix B, then showing that the Stieltjes transform of B approximately satisfies a self-consistent equation. The exact solution to this self-consistent equation describes the free convolution mentioned above, so we can conclude with perturbation theory. §.§ Related workAn extensive history of this problem was given by the recent paper <cit.>, so we only give a brief overview. Kernel matrices were first introduced in the classical scaling (d fixed and N →∞) by Koltchinskii and Giné <cit.>. El Karoui <cit.> studied random kernel matrices in the high-dimensional linear scaling ℓ = 1 (i.e., d ≍ N), but in a different normalization where the arguments of the nonlinearity f are typically (1), so that the only surviving feature of f is its behavior at zero. 
Our scaling (where f typically has order-one arguments) was first studied by Cheng and Singer <cit.>, still in the linear regime ℓ = 1, and when μ is Gaussian. As previously mentioned, Cheng and Singer introduced the idea of writing the nonlinearity f in a good basis of orthogonal polynomials, which is fundamental in later works, including ours. Later, by comparing to <cit.> with the Lindeberg exchange method, Do and Vu <cit.> were able to allow for non-Gaussian data in the linear regime. More recently, Fan and Montanari <cit.> found sufficient conditions on f so that, in the linear Gaussian model, the top eigenvalue sticks to the edge of the limiting distribution. As previously mentioned, two of the present authors <cit.> considered the (integer) polynomial case ℓ = 1, 2, 3 … for Gaussian and spherical data; simultaneous and independent work by Misiakiewicz <cit.> considered spherical or Bernoulli data for special nonlinearities f. With the exception of spherical data, the previous works all focus on the unnormalized model (<ref>); we are not aware of previous results on the normalized model (<ref>) beyond the spherical case.Our proof shows that the matrices A and A are well-approximated by a generalized sample covariance matrix B, which we write (up to subtracting the diagonal) as U^∗ TU. In dealing with the B matrix, one technical complication is that the entries of T have different scales from one another; another is that the entries of U are uncorrelated but not independent; and a third is that, because of our polynomial scaling, U ∈^M × N but only with the weaker log M ≍log N rather than M ≍ N (roughly, M ≈ d^L, where L is the degree of the largest nonzero Hermite coefficient of f). Sample covariance matrices with various combinations of these technical difficulties have previously been studied in <cit.> and <cit.>. We also use several ideas from <cit.>: For example we embed our matrices of interest in a larger 2 × 2 block matrix, inspired by <cit.>, and ideas like their Lemma 4.6 appear here under the name of “partial Ward inequalities” in Section <ref>.We also mention several related models: Instead of taking a nonlinearity of a sample covariance matrix (informally f(X^TX)), one can take a sample covariance of a nonlinearity (informally f(X)^Tf(X)). This changes the limiting spectrum; for details we direct readers to works of Pennington and Worah <cit.>, Benigni and Péché <cit.>, and Piccolo and Schröder <cit.>. The non-Hermitian (possibly rectangular) version A_ij = δ_i ≠ j/√(N) f(X_i,Y_j/√(d)) is closely related to the random-feature model (see, e.g., <cit.> for this model in general, and <cit.> for corresponding random-matrix results, all in the linear regime d ≍ N). §.§ OrganizationThe structure of the paper is as follows: Our main results, Theorem <ref> (for polynomial nonlinearities f) and Theorem <ref> (for general nonlinearities f), are given in Section <ref>. One main step in the proof is to replace the matrices A and A with a simpler matrix B, thus separating the “main-term analysis” of the matrix B, which is the bulk of the paper and constitutes Sections <ref> through <ref>, and the “error analysis” of the matrices A-B and A - B, which is Section <ref>. In Section <ref> we give an overview of the main-term analysis, introducing several fundamental resolvent and resolvent-like quantities and stating that they approximately solve various self-consistent equations. 
In order to prove these claims, we first spend three sections establishing basic tools for our analysis: Various resolvent identities, in Section <ref>; variants of the Ward identity that we call “full Ward inequalities” and “partial Ward inequalities,” in Section <ref>; and a collection of preliminary bounds, in Section <ref>. We then use these tools to prove these approximate self-consistent equations in Sections <ref> and <ref>. These sections complete the proof when the nonlinearity f is a polynomial; in Appendix <ref> we explain how to prove the general case, by approximating general nonlinearities by polynomials. §.§ NotationStochastic domination: We will use the following notation of high-probability boundedness up to small polynomial factors, introduced in <cit.>. If X = X_N(u) and Y = Y_N(u) are two families of real random variables, indexed by N and by u in some set U, we write X ≺ Ywhen, for every ϵ , D > 0, there exist C_ϵ,D and N_0(ϵ,D) such thatsup_u ∈ U(X_N(u) ≥ N^ϵ Y_N(u)) ≤ C_ϵ,D N^-Dfor allN ≥ N_0(ϵ,D).If X is complex, then X ≺ Y is defined as X≺ Y. It will be convenient to write _≺, where for example X ≤ Y + _≺(Z) is defined as X - Y ≺ Z.We remark that our problem has two parameters tending to infinity simultaneously, namely N and d. The definition given here is in terms of N, but one could equally write a definition in terms of d (with d^ϵ, d ≥ d_0, and so on), and it is easy to check that this definition would produce the same result. We will sometimes switch between the two for convenience.Stieltjes transforms: If T ∈^N × N is symmetric, we write its Stieltjes transform with the sign conventions_T(z) = 1/N((T-z)^-1). Summation conventions: We frequently consider sums over multiple indices, but include only the terms where these indices are all distinct. We indicate this with an asterisk on top of the summation notation. For example, ∑_a,b^N,∗ f_a,b is defined as ∑_a,b=1 : a ≠ b^N f_a,b; the notation ∑_a,b,c^N,∗ f_a,b,c means that a,b,c should all be distinct (i.e., a=b≠ c is also excluded from the sum), and so on. Additionally, we use standard exclusion notation like ∑_ν^(μ) to indicate, in this case, the sum over all ν except for ν = μ. Floors and ceilings: If ℓ > 0, then ⌈ℓ⌉ is the smallest integer at least ℓ as usual, but we will also need ℓ_c ℓ + 1if ℓ∈,⌈ℓ⌉ otherwise,for the smallest integer strictly bigger than ℓ, and {ℓ} for the fractional part, i.e. {2.4} = 0.4 and {2} = 0. Other notation: We write ℍ for the complex upper half-plane ℍ = {z ∈ : (z) > 0}, and write a, b for the consecutive integers [a,b] ∩ = {a,a+1,…,b}. We sometimes abuse notation by dropping “” for constant matrices; for example, if A is a matrix and z is a constant, then we write A - z for A - z.§.§ AcknowledgementsThe work of H.-T. Y. is partially supported by the NSF grant DMS-2153335, and by a Simons Investigator award. B. M. is partially supported by NSF grant DMS-1760471. The work of Y. M. L. is partially supported by NSF grant CCF-1910410, and by the Harvard FAS Dean's Fund for Promising Scholarship.§ MAIN RESULTS §.§ Polynomial nonlinearitiesIn this first set of results, we take the nonlinearity f : → to be a fixed polynomial (not depending on d or N), expressible in the Hermite basis as f(x) = ∑_k=0^L c_k h_k(x)for some L and some constants (c_k)_k=0^L, where the h_k are the normalized (non-monic) Hermite polynomials given as_Z ∼N(0,1)[h_i(Z)h_j(Z)] = δ_ij,with the first several given byh_0(x) = 1,h_1(x) = x,h_2(x) = 1/√(2)(x^2-1),h_3(x) = 1/√(6)(x^3-3x). 
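For concreteness, these normalized polynomials can be generated by the three-term recurrence √(k+1) h_{k+1}(x) = x h_k(x) − √(k) h_{k−1}(x), and the orthonormality relation above is easy to verify numerically. The sketch below is only an illustration (the quadrature degree and the cutoff on k are arbitrary); it uses Gauss–Hermite quadrature, rescaled so that it computes expectations against the standard Gaussian.

```python
import numpy as np

def hermite_normalized(k_max, x):
    """Values h_0(x), ..., h_{k_max}(x) of the normalized (probabilists') Hermite
    polynomials, generated by sqrt(k+1) h_{k+1} = x h_k - sqrt(k) h_{k-1}."""
    x = np.asarray(x, dtype=float)
    H = np.zeros((k_max + 1,) + x.shape)
    H[0] = 1.0
    if k_max >= 1:
        H[1] = x
    for k in range(1, k_max):
        H[k + 1] = (x * H[k] - np.sqrt(k) * H[k - 1]) / np.sqrt(k + 1)
    return H

if __name__ == "__main__":
    k_max = 6
    # Gauss-Hermite nodes/weights for weight exp(-x^2); substitute z = sqrt(2) x to get N(0,1) expectations.
    nodes, weights = np.polynomial.hermite.hermgauss(60)
    z = np.sqrt(2.0) * nodes
    H = hermite_normalized(k_max, z)
    gram = (H * weights) @ H.T / np.sqrt(np.pi)   # approximates E[h_i(Z) h_j(Z)]
    print(np.round(gram, 6))                      # should be close to the identity matrix
```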
We will always work in the rectangular domain of the complex upper half plane defined by𝐃_τ{z = E + η : τ≤η≤τ^-1, E≤τ^-1}for arbitrary τ > 0. Given ℓ > 0, we will also need the parameterp_ℓ =1/2if ℓ∈,min({ℓ},1-{ℓ})/2otherwise,where we recall that {ℓ} is the fractional part of ℓ. We remark that p_ℓ∈ [0,1/4], unless ℓ is an integer, in which case p_ℓ = 1/2.(Main theorem, polynomial nonlinearities) Suppose that μ has all moments finite. Fix κ, τ, ℓ > 0, a positive integer L, and a polynomial f of the form (<ref>). Suppose that N/d^ℓ - κ = (1)if ℓ is not an integer, (d^-ϵ_0)for some fixed ϵ_0 > 0, if ℓ is an integer.Then, for each fixed z ∈𝐃_τ, we haves_A(z)d →∞→m(z) almost surely, s_A(z)d →∞→m(z) almost surely,with the effective boundss_A(z) - m(z) ≺1/d^p_ℓ if ℓ is not an integer, 1/d^p_ℓ + 1/d^ϵ_0 if ℓ is an integer, s_A(z) - m(z) ≺1/d^p_ℓ if ℓ is not an integer, 1/d^p_ℓ + 1/d^ϵ_0 if ℓ is an integer,,where m(z) is the unique solution in ℍ to the equationm(z) (z + γ_a m(z)/1+γ_b m(z) + γ_c m(z)) + 1 = 0with the f-dependent constantsγ_ac_ℓ^2if ℓ∈, 0otherwise, γ_bc_ℓ√(ℓ! κ) if ℓ∈, 0otherwise, γ_c ∑_k = ℓ_c^L c_k^2.As explained in the proof of <cit.>, it is easy to check that (<ref>) has a unique solution in the upper half plane. Indeed, as mentioned above and in the previous literature, if ℓ is an integer it is the Stieltjes transform of the free (additive) convolution of the semicircle law and the Marčenko-Pastur law, scaled according to γ_a, γ_b, and γ_c; otherwise it is the Stieltjes transform of the semicircle law, rescaled according to γ_c. The first major step of the proof is to show that, for the purposes of a global law, the matrices A and A are each well-approximated by the matrixB_ij = δ_i≠ j/√(N)∑_k = ⌈ℓ⌉^L c_k√(k!)/d^k/2∑_a_1, …, a_k=1 a_1 < a_2 < … < a_k^d X_a_1 i… X_a_k i X_a_1 j… X_a_k j= δ_i≠ j/√(N)∑_k = ⌈ℓ⌉^L c_k/d^k/2√(k!)∑_a_1, …, a_k=1^d,∗ X_a_1 i… X_a_k i X_a_1 j… X_a_k j,which we think of as storing the “main terms” present in A and A. (We recall that the notation ∑_a_1,…,a_k=1^d,∗ means that the a_1,…,a_k are all distinct, but not necessarily ordered. Each of the formulations ∑_a_1<⋯<a_k^d and ∑_a_1,…,a_k^d,∗ will be more convenient at some point of the proof.) We remind the reader of the usual convention that sums like ∑_k=ℓ^L are considered empty if, in this case, L < ℓ. For example, if L < ⌈ℓ⌉ + 1, then γ_c = 0; if L < ⌈ℓ⌉, then B = 0 as a matrix. In fact, if L < ℓ, then γ_a = γ_b = γ_c = 0, and Theorem <ref> says that A and A have bulk spectra tending to a delta mass at zero. We split the proof of Theorem <ref> into the following two propositions. In the statements, we need the parametersq_ℓ = min(ℓ,1,ℓ_c-ℓ)/2, r_ℓ = (1+ℓ-⌈ℓ⌉)/2,which satisfy 0 ≤ q_ℓ, r_ℓ≤ 1/2 for all ℓ > 0. Together these imply the result, since one can easily computep_ℓ = min(q_ℓ,r_ℓ). Under the assumptions above, we have s_B(z) →m(z), almost surely as d →∞, with s_B(z) - m(z)≺1/d^q_ℓ if ℓ is not an integer,1/d^q_ℓ + N/d^ℓ - κ if ℓ is an integer.Under the assumptions above, we have s_A(z) - s_B(z) → 0 and s_A(z) - s_B(z) → 0, almost surely as d →∞, with s_A(z) - s_B(z) ≺1/d^r_ℓ,s_A(z) - s_B(z) ≺1/d^r_ℓ,As mentioned before, the proof of Proposition <ref> takes up the bulk of the paper, namely Sections <ref> through <ref>; the proof of Proposition <ref> is much shorter, and is given in Section <ref>.We now explain why, in the definitions (<ref>) and (<ref>) of A and A, we set the diagonal entries to zero. 
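First, though, we note that the defining equation for m(z) above is convenient to solve numerically: for nonnegative γ_a, γ_b, γ_c and z in the upper half-plane, the map m ↦ −1/(z + γ_a m/(1+γ_b m) + γ_c m) preserves the upper half-plane, and a damped fixed-point iteration converges in practice. The sketch below is only a numerical illustration of the limiting law; the values of γ_a, γ_b, γ_c are arbitrary and not tied to any particular f. It recovers the corresponding density by Stieltjes inversion, i.e. Im m(E+iη)/π for small η.

```python
import numpy as np

def m_of_z(z, ga, gb, gc, tol=1e-12, max_iter=10000, damping=0.5):
    """Solve m (z + ga*m/(1+gb*m) + gc*m) + 1 = 0 for the branch with Im(m) > 0,
    by damped fixed-point iteration of m -> -1/(z + ga*m/(1+gb*m) + gc*m)."""
    m = -1.0 / z
    for _ in range(max_iter):
        m_new = -1.0 / (z + ga * m / (1.0 + gb * m) + gc * m)
        m_next = damping * m_new + (1.0 - damping) * m
        if abs(m_next - m) < tol:
            return m_next
        m = m_next
    return m

if __name__ == "__main__":
    ga, gb, gc = 1.0, 0.8, 0.5     # illustrative weights only
    eta = 1e-3                     # small imaginary part for Stieltjes inversion
    for E in np.linspace(-4.0, 4.0, 9):
        m = m_of_z(E + 1j * eta, ga, gb, gc)
        print(f"E = {E:+.2f}   density ~ {m.imag / np.pi:.4f}")
```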
Consider the diagonal matrices K, K∈^N × N given entrywise byK_ii = 1/√(N) f(X_i^2/√(d)), K_ii = 1/√(N) f(√(d))i.e., K contains the “missing diagonal” elements of A (resp. K of A), and at first glance the reader may find the matrices A + K and A + K with “restored diagonal” elements to be more natural.Since K is a deterministic constant times identity, its role is easy to understand: In this case, the matrix A has an order-one limiting spectral measure, and A+K simply translates this measure along the real line. According to the growth of f at infinity and the power ℓ in d^ℓ≍ N, this shift may be asymptotically negligible, asymptotically constant, or, in the worst case, asymptotically infinity. Proving theorems directly about A avoids the need to spell out these cases; the reader interested in A + K can simply add back the shift.Since K is genuinely random, its role is more nuanced. In this case we may decomposeK = K + K_1/√(N)f(√(d))+ (1/√(N)[ f(X_i^2/√(d)) - f(√(d)) ] )_i=1^N.Of course K plays the same translation role as before, but the role of K_ is new: Roughly speaking, from the CLT we expect X_i^2 ≈ d + Z_i√(d), with Z_i i.i.d. standard normal, so(K_)_ii≈f(√(d)+Z_i) - f(√(d))/√(N)≈f'(√(d))/√(N)Z_i.If f'(√(d)) ≪√(N), this suggests that K__ is asymptotically negligible, so that K is asymptotically just a simple translation as before. But if f'(√(d)) ≫√(N), then — even discarding the translation K — the bulk spectra of A and A+K_ may be substantially different. This is potentially interesting, but to keep the current work to a manageable length, we consider only the zero-diagonal matrices A and A.§.§ General nonlinearitiesIn these more general results, we allow nonlinearities f : →, which should still be fixed (not depending on d or N), as long as they are in some sense well-approximable by polynomials. Our conditions on f are the same as in <cit.>, and our (short) proof that Theorem <ref> on polynomial nonlinearities lifts to Theorem <ref> through this approximation scheme essentially mimics theirs.We assume that our nonlinear function f(x) is piecewise continuous with a polynomial growth rate. Precisely, there exists a positive integer K, a finite subdivision -∞ = α_0 < α_1 < α_2 < ⋯ < α_K < α_K+1 = ∞, and a finite positive constant C such that * For every i ∈{0, …, K}, the function f(x) is continuous and bounded on the open interval (α_i, α_i+1).* f(x)≤ Cx^C when x < α_1 or x > α_K.Under these assumptions, it is easy to show that the sequence (c_k)_k=0^∞ defined byc_k = _Z ∼N(0,1)[f(Z)h_k(Z)]exists and is square-summable:σ^2 ∑_k=0^∞ c_k^2 = _Z ∼N(0,1)[f^2(Z)] < ∞. (Main theorem, general nonlinearities) Suppose that μ has all moments finite. Fix κ, τ, ℓ > 0, and a function f satisfying Assumption <ref> with corresponding constants (c_k)_k=0^∞ and σ^2 from (<ref>) and (<ref>), respectively. Suppose thatN/d^ℓ = κ + (d^-1/2).Then, if m(z) is the unique solution in ℍ to the equationm(z) (z + γ_a m(z)/1+γ_b m(z) + γ_cm(z)) + 1 = 0with the f-dependent constantsγ_ac_ℓ^2if ℓ∈, 0otherwise, γ_bc_ℓ√(ℓ! κ) if ℓ∈, 0otherwise, γ_c∑_k=ℓ_c^∞ c_k^2,then for each fixed z ∈𝐃_τ we haves_A(z)d →∞→m(z) almost surely, s_A(z)d →∞→m(z) almost surely.We prove this result by approximating f by polynomials and using the result for polynomial nonlinearities, Theorem <ref>, as a black box. This is essentially the same approach as <cit.>, although they can allow slightly more general nonlinearities f because they have exact formulas for the distributions, which additionally have better tail decay. 
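For a concrete non-polynomial f satisfying the assumption above, the coefficients c_k = E[f(Z)h_k(Z)] and the tail weight γ_c can be approximated by Gauss–Hermite quadrature. The following sketch is only an illustration: the choice of ReLU, the values of ℓ and κ, and the truncation level are placeholders, not taken from the paper.

```python
import numpy as np
from math import factorial, ceil, sqrt, pi

def hermite_coeffs(f, k_max, n_nodes=200):
    """Approximate c_k = E[f(Z) h_k(Z)], Z ~ N(0,1), for the normalized Hermite
    polynomials, using Gauss-Hermite quadrature (weight exp(-x^2), rescaled)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    z = sqrt(2.0) * nodes
    # normalized Hermite values via the recurrence sqrt(k+1) h_{k+1} = x h_k - sqrt(k) h_{k-1}
    H = np.zeros((k_max + 1, z.size))
    H[0] = 1.0
    H[1] = z
    for k in range(1, k_max):
        H[k + 1] = (z * H[k] - sqrt(k) * H[k - 1]) / sqrt(k + 1)
    return (H * (weights * f(z))).sum(axis=1) / sqrt(pi)

if __name__ == "__main__":
    relu = lambda x: np.maximum(x, 0.0)
    ell, kappa = 1.0, 2.0                      # illustrative scaling parameters only
    c = hermite_coeffs(relu, k_max=12)
    sigma2 = 0.5                               # E[relu(Z)^2] = 1/2 for Z ~ N(0,1)
    ell_c = int(ell) + 1 if float(ell).is_integer() else ceil(ell)
    gamma_a = c[int(ell)] ** 2 if float(ell).is_integer() else 0.0
    gamma_b = c[int(ell)] * sqrt(factorial(int(ell)) * kappa) if float(ell).is_integer() else 0.0
    gamma_c = sigma2 - sum(ck ** 2 for ck in c[:ell_c])   # tail weight: sum over k >= ell_c of c_k^2
    print("c_0..c_4 ~", np.round(c[:5], 4))
    print("gamma_a, gamma_b, gamma_c ~", round(gamma_a, 4), round(gamma_b, 4), round(gamma_c, 4))
```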
In Appendix <ref>, we give the needed modifications for completeness.§ SKETCH PROOF OF PROPOSITION <REF>: MAIN TERMS AS SAMPLE COVARIANCE MATRICESThe main idea of the proof is to rewrite B in the form of a sample-covariance matrix U^*TU, where U has random, centered real entries with unit variance, and where T is a real deterministic diagonal matrix storing the prefactors in (<ref>). (Actually, since the diagonal terms of B are set to zero, we will need to write B = U^∗ T U - D where D stores the diagonal terms.) The main technical difficulty comes from the fact that, while the columns of U in such a decomposition are independent, the entries in each column are only uncorrelated, not independent. A smaller technical difficulty comes from the fact that U will be of size M × N for some M ≫ N (roughly speaking, M ≈ N^L/ℓ – and it suffices to restrict to the case L > ℓ, see Remark <ref>), whereas the most-studied sample covariance matrices typically have M ≍ N.To do this, we need the following notation. Fix once and for all some nonlinearity f, and recall that c_k is the kth coefficient of f in the Hermite basis, f(x) = ∑_k=0^L c_k h_k(x). For every k ∈⌈ℓ⌉, L define 𝐌_k = {(a_1,…, a_k) ∈ [d]^ks.t.a_1 < … < a_k} ifc_k ≠ 0,∅ ifc_k = 0,with size M_k = 𝐌_k. We will also need their union𝐌 = ⋃_k=⌈ℓ⌉^L 𝐌_k, with total size M = ∑_k=⌈ℓ⌉^L M_k.For each k with M_k > 0, we introduce the matrix U^[k]∈^M_k × N with entries U^[k]_μ i = 1/√(N) X_a_1 i… X_a_k i,where μ is the tupleμ = (a_1,…, a_k) ∈𝐌_k.Given two tuples μ and ν, we define their overlap μ,νμ∩ν,where μ and ν are viewed as subsets of 1, d. For example, (2, 4, 5), (3, 5) = 1.We also define the combined-degree M × N matrix U such that U_μ i = U^[k]_μ i for μ∈𝐌_k and all k. Note that, as claimed, U_μ i = 0 and U_μ i^2 = 1/N for all μ and i; the columns U_i are independent for different i; but U_μ i and U_ν i are only uncorrelated for μ≠ν, not necessarily independent, since μ,ν can be nonzero. The deterministic M × M matrix T is defined blockwiseT = [ ⋱ 0; c_k√(k!)·√(N/d^k) I_M_k; 0 ⋱ ],where we skip blocks with c_k = 0 (i.e., I_0 is a 0 × 0 matrix) by convention. The point of all these conventions is to define T in an invertible way, by omitting what would otherwise be zero blocks. For example, if f = c_1h_1 + c_2h_2 + c_4h_4 and ℓ≤ 1, thenT = [ c_1 √(N/d) I_M_100;0 c_2 √(2!N/d^k) I_M_20;00 c_4 √(4!N/d^k) I_M_4 ].If we change this example to keep the same f but let 1 < ℓ≤ 2, say, then the first of the three blocks disappears, and T becomes smaller. Since T is diagonal, we will usually write T_μ instead of T_μμ, and sometimes write T_k instead of T_μ when μ∈𝐌_k (since the value of T_μ depends only on this k).If f has only low-degree terms, i.e. f = ∑_k=0^⌈ℓ⌉ - 1 c_k h_k(x), then by these conventions T does not exist at all. But in this case, the matrix B defined by (<ref>) is zero, so s_B(z) = -1/z = m(z), where m(z) is defined by (<ref>), and Proposition <ref> is immediate. For such matrices, the main result – namely, that both A and A have bulk spectrum which is asymptotically a delta mass at zero – follows from Proposition <ref>, whose proof does not use the matrix T. Thus in the following we will always assume that f has some high-degree terms, so that T is nontrivial. Notice thatB_ij = (U^*TU)_ijδ_i≠ j.Denote the diagonal part removed in (<ref>) by D, i.e. D_ii = U_i^* T U_i and D_ij=0 for i≠ j. The fundamental observation is thatB = U^* T U - D. 
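This identity is purely algebraic and can be sanity-checked on small instances. In the sketch below, the dimensions, the coefficients, and the Rademacher data are arbitrary placeholders, with the sets of increasing index tuples playing the role of 𝐌_k; the matrices U and T are assembled exactly as above, and U^*TU − D is compared against B built directly from its definition.

```python
import numpy as np
from itertools import combinations
from math import factorial, sqrt

def build_U_T(X, coeffs, degrees):
    """Feature matrix U (rows = increasing index tuples, entries prod_a X_{a i}/sqrt(N))
    and the diagonal of T (value c_k * sqrt(k! N / d^k) on the degree-k block)."""
    d, N = X.shape
    rows, t_diag = [], []
    for k, c_k in zip(degrees, coeffs):
        for mu in combinations(range(d), k):
            rows.append(np.prod(X[list(mu), :], axis=0) / sqrt(N))
            t_diag.append(c_k * sqrt(factorial(k) * N / d ** k))
    return np.array(rows), np.array(t_diag)

def build_B(X, coeffs, degrees):
    """The matrix B built directly from its definition (zero diagonal)."""
    d, N = X.shape
    B = np.zeros((N, N))
    for k, c_k in zip(degrees, coeffs):
        S = np.zeros((N, N))
        for mu in combinations(range(d), k):
            v = np.prod(X[list(mu), :], axis=0)
            S += np.outer(v, v)
        B += c_k * sqrt(factorial(k)) / d ** (k / 2) * S
    B /= sqrt(N)
    np.fill_diagonal(B, 0.0)
    return B

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d, N = 8, 20                               # tiny sizes, for checking the algebra only
    X = rng.choice([-1.0, 1.0], size=(d, N))
    degrees, coeffs = [2, 3], [0.7, -0.3]      # plays the role of c_2 h_2 + c_3 h_3 with ceil(ell) = 2
    U, t = build_U_T(X, coeffs, degrees)
    UTU = U.T @ (t[:, None] * U)
    D = np.diag(np.diag(UTU))
    B = build_B(X, coeffs, degrees)
    print("max |B - (U^T T U - D)| =", np.abs(B - (UTU - D)).max())  # machine-precision level
```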
ConsiderH(z) = [-T^-1U;U^* -z - D ].Define the resolventG(z) = H(z)^-1.From the definition of H in (<ref>) we can see thatG(z) = [ G_M(z)*;G_N(z) ],where the *'s are some block matrices that are irrelevant for our purposes, G_N(z) = 1/U^* T U - z - D = 1/B - zis the resolvent we ultimately want to understand, andG_M(z) = 1/U(z+D)^-1U^* - T^-1is an object we will understand as an intermediate step. Throughout this paper, Greek letter indices like μ and ν will refer to the first M columns and rows of G, while letters like i,j,k will refer to the last N columns and rows of G, so that for example G_ij = (G_N)_ij and G_μν = (G_M)_μν.The fundamental quantities for the main-term analysis ares(z) s_B(z) = 1/N G_N(z),s(z)= 1/M(G_M(z) + T ),ϕ = M/N,where we have dropped B from the notation s_B to save space. We stress that s(z) and s(z) depend on N, although we suppress this from the notation. Later, we will show that s(z) and ϕs(z) are order-one quantities for z ∈𝐃_τ. Since z is fixed throughout the argument, we will often drop it from the notation, writing for examples = s(z), s = s(z),G_N = G_N(z),G_M = G_M(z), …Since G_N is the resolvent of our matrix of interest, the eventual goal is to show that s(z) approximately satisfies a self-consistent equation. To do this, we pass through the auxiliary matrix G_M, which is not a resolvent, but which our analysis shows approximately behaves like one (for example, it approximately satisfies something like the Ward identity; see Lemmas <ref> and <ref>). Precisely, we first show that s(z) and ϕs(z) approximately determine each other through a joint self-consistent equation (roughly, s(z) ≈ -(z+ϕs(z))^-1). Then we show show that ϕs(z) approximately satisfies its own self-consistent equation. From here we recover the self-consistent equation approximately satisfied by s(z), which is exactly satisfied by 𝔪(z), then use perturbation theory of that equation, already developed by <cit.>, to conclude. These steps are split into the following propositions, whose proofs constitute the bulk of the paper.For any fixed τ > 0 and any fixed z ∈𝐃_τ, we have1+s(z)(z+ϕs(z))≺1/d^1/2min(1,ℓ).Let γ_a, γ_b, γ_c be as in (<ref>). For any fixed τ > 0 and any fixed z ∈𝐃_τ, we haveϕs(z) - γ_a/γ_b - z - ϕs(z) + γ_c/z+ϕs(z)≺1/d^q_ℓ if ℓ is not an integer, 1/d^q_ℓ + N/d^ℓ - κ if ℓ is an integer.For any fixed τ > 0 and any fixed z ∈𝐃_τ, we have1/s(z) + z + γ_a s(z)/1+ γ_b s(z) + γ_c s(z)≺1/d^q_ℓ if ℓ is not an integer, 1/d^q_ℓ + N/d^ℓ - κ if ℓ is an integer.Modulo Propositions <ref>, <ref>, and <ref>, the proof of Proposition <ref> is quite short. We re-write (<ref>), 1+m(z)(z+γ_am(z)/(1+γ_bm(z)) + γ_cm(z)) = 0, as1/m(z) + z + γ_a m(z)/1+γ_bm(z) + γ_c m(z) = 0,in order to recall the following stability analysis of this equation, due to <cit.>.<cit.> If deterministic 𝔰 = s(z) approximately solves (<ref>) in the sense that1/s + z + γ_a s/1+γ_b s + γ_c s = ωwith the error term ω satisfyingω≤η/2, η = (z),and m = m(z) exactly solves (<ref>), thens-m≤4ω/η^2. Writeω1/s(z) + z + γ_a s(z)/1+γ_b s(z) + γ_c s(z).Let δ_ℓ = q_ℓ, if ℓ is not an integer, or δ_ℓ = min(q_ℓ,ϵ_0), if ℓ is an integer. For ϵ < δ_ℓ, applying Lemma <ref>, we find (s(z) - m(z)≥ d^ϵ-δ_ℓ)≤(ω≥η/2) + (s(z) - m(z)≥ d^ϵ-δ_ℓ, ω≤η/2) ≤(ω≥η/2) + (ω≥ (η^2/4) d^ϵ-δ_ℓ) ≤ 2(ω≥ (η^2/4)d^ϵ-δ_ℓ) ≤ C_ϵ,D d^-D,where the last inequality follows from Proposition <ref> for d sufficiently large. This verifies (<ref>), and the almost-sure convergence of s(z) - m(z) to zero follows from the Borel-Cantelli lemma. 
(This is why we require N/d^ℓ - κ≤ d^-ϵ_0 when ℓ is an integer; if e.g. N/d^ℓ - κ∼1/log(d), then this argument would give s(z)-m(z)≤d^ϵ/log(d) for all d sufficiently large, which is of course insufficient for almost-sure convergence.) § BASIC TOOLS: RESOLVENT IDENTITIESThe goal of this section is to prove several exact equalities relating G_N, G_M, and their corresponding minors, which will be used throughout the paper. The matrix H(z) from (<ref>) is (M+N) × (M+N). If E ⊂ 1, N is any so-called exclusion set, we will write H^(E)(z) for the (M+(N - E)) × (M+(N - E)) matrix obtained from H(z) given by erasing the rows and columns indicated by E. Most frequently we will use E = {i} for some index i, in which case we abuse notation by writing H^(i) instead of H^({i}). We define the corresponding resolvent byG^(E)(z)H^(E)(z)^-1.If E = ∅, by convention we set H^(E) = H and G^(E) = G. Although H^(E)(z) and G^(E)(z) have fewer rows and columns than H(z) and G(z), we keep the original values of the matrix indices: For example, G^(i)(z) has entries G^(i)_jk for j,k ∈{1, …, i-1, i+1, …, N}, not {1, …, N-1}. Notice from the definition that we will only ever need minors that remove some i and j indices, never those that remove some μ and ν indices.Recall that G_N = (B - z)^-1, where B_ij = U_i^∗ T U_j δ_i≠ j. We introduce the notation B_j for the j-th column of B with diagonal element excluded, i.e. B_j = (B_1j,…, B_j-1,j, B_j+1,j, …, B_Nj)^T. For any i, any μ, and any (possibly empty) exclusion set E, we haveG_μν^(i) = G_μν - G_μ iG_i ν/G_iiG_iμ = - G_ii∑_α∈𝐌 U_α i G_αμ^(i)G_ii = (-z - ∑_μ, ν∈𝐌 U_μ i(G^(i)_M+T)_μν U_ν i)^-1 = ( -z-D_ii - ∑_μ,ν∈𝐌 U_μ i G^(i)_μν U_ν i)^-1G_M^(E)+T= TU^(E)G_N^(E)(U^(E))^∗ T (G_M + T)_μν = -T_μ∑_j=1^N G_jj∑_α∈𝐌 U_μ j U_α j G_αν^(j) The identities (<ref>) and (<ref>) are very standard in the local-law literature (see, e.g., <cit.> for a proof). The usual version of (<ref>) simplifies to what we have written here since the N × N block of H is diagonal, i.e. H_ik = 0 for i ≠ k, so thatG_iμ = -G_ii( ∑_α∈𝐌 U_α i G_αμ^(i) + ∑_k^(i) H_ik G_kμ^(i)) = -G_ii∑_α∈𝐌 U_α i G_αμ^(i).The identity (<ref>) is just the usual Schur complement formula, again summing only over μ and ν for the same reason.In proving (<ref>), by erasing rows and columns as necessary we may assume E = ∅; then it is a simple arithmetic consequence of the formulas (<ref>) and (<ref>) for G_N and G_M, respectively. Namely, (<ref>) gives= G_N(U^∗ T U - D - z) = G_N U^∗ T U - G_N(z+D),so (z+D)^-1 - G_NU^∗ TU(z+D)^-1+G_N = ( - G_NU^∗ TU)(z+D)^-1 + G_N = -G_N(z+D)(z+D)^-1+G_N = 0,so that=-TU[(z+D)^-1 - G_N U^∗ T U (z+D)^-1 + G_N]U^∗= -TU(z+D)^-1U^∗ ++ TUG_NU^∗ TU(z+D)^-1U^∗ - TUG_NU^∗ TT^-1= (-T+TUG_NU^∗ T)(U(z+D)^-1U^∗ - T^-1) = (-T+TUG_NU^∗ T)G_M^-1,where the last equality is (<ref>). This means thatG_M = -T + TUG_NU^∗ T,which can be rearranged to obtain (<ref>). The proof of (<ref>) is the most involved. 
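Before giving that proof, we note that the identities of this section are finite-dimensional matrix identities, so they can be checked numerically on a small random instance. The sketch below is such a check only of the algebra, not of any probabilistic statement: the sizes, the positive diagonal chosen for T, and the generic Gaussian U are arbitrary. It verifies that the lower-right block of H(z)^{-1} equals (B − z)^{-1} and that G_M + T = TUG_NU^*T (the case E = ∅ of the identity above).

```python
import numpy as np

def block_resolvent_check(seed=0):
    """Check, on a small random instance, that the lower-right block of G = H(z)^{-1}
    equals (B - z)^{-1} and that G_M + T = T U G_N U^* T, with H as in the display above."""
    rng = np.random.default_rng(seed)
    M, N = 12, 7
    z = 0.3 + 0.9j
    U = rng.standard_normal((M, N)) / np.sqrt(N)
    T = np.diag(rng.uniform(0.5, 2.0, size=M))      # arbitrary positive diagonal
    UTU = U.T @ T @ U
    D = np.diag(np.diag(UTU))
    B = UTU - D
    H = np.block([[-np.linalg.inv(T), U], [U.T, -(z + 0j) * np.eye(N) - D]])
    G = np.linalg.inv(H)
    G_M, G_N = G[:M, :M], G[M:, M:]
    err1 = np.abs(G_N - np.linalg.inv(B - z * np.eye(N))).max()
    err2 = np.abs(G_M + T - T @ U @ G_N @ U.T @ T).max()
    print("lower-right block vs (B - z)^{-1}:", err1)
    print("G_M + T vs T U G_N U^* T:        ", err2)

if __name__ == "__main__":
    block_resolvent_check()
```

We now return to the proof of the remaining identity.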
It starts with the Sherman–Morrison formula for the inverse of a rank-one update, which is usually formulated for a matrix A, column vector q, and scalar τ as (A+τ qq^∗)^-1 = A^-1 - τ A^-1 qq^∗ A^-1/1+τ q^∗ A^-1q.Left-multiplying by q^∗ and simplifying on the right-hand side, we obtainq^∗ (A+τ qq^∗)^-1 = (1+τ q^∗ A^-1 q)q^∗ A^-1 - τ q^∗ A^-1 qq^∗ A^-1/1+τ q^∗ A^-1q = 1/1+τ q^∗ A^-1q q^∗ A^-1.If we use this with q = U_j, τ = (z+D_j)^-1, and A = U(z+D)^-1U^∗ - U_j(z+D_j)^-1U_j^∗ - T^-1 = (G_M^(j))^-1, so that A+τ qq^∗ = (G_M)^-1, we obtain an equation over vectors of length M; taking the ν entry of each side, we find∑_α U_α j G_αν = 1/1+(z+D_j)^-1 U_j^∗ G_M^(j) U_j∑_α U_α j G^(j)_αν.Store this for a moment; at the same time, rearrange (<ref>) to obtain(U(z+D)^-1 U^* - T^-1)G_M = ,then take the (μν) entry of both sides to get(∑_j (z + D_j)^-1) U_μ j(∑_α U_α j G_αν) - T_μ^-1 G_μν = δ_μν.Now we substitute (<ref>) into the left-hand side, and multiply both sides by T_μ; this yields∑_j T_μ/z+D_j+U_j^∗ G_M^(j) U_j∑_α U_μ j U_α j G^(j)_αν - G_μν = T_μν.Substituting 1/z+D_j+U_j^∗ G_M^(j)U_j = -G_jj, from (<ref>), finishes the proof of (<ref>). § BASIC TOOLS: FULL AND PARTIAL WARD INEQUALITIESSince G_N = (B-z)^-1 is actually a resolvent, it satisfies the usual Ward identityG_N G_N^* =G_N/η,where η =z, and actually the extension to minorsG_N^(E) (G_N^(E))^∗ =G_N^(E)/η. Since G_M+T is not a resolvent, it does not satisfy the Ward identity. However, the goal of this section is to show that it approximately satisfies an inequality that looks like one direction of the Ward identity (roughly speaking, (G_M +T) (G_M+T)^∗≲ (G_M+T)/η, or in coordinates ∑_ν(G_M+T)_μν^2 ≲ (G_M+T)_μμ/η; then (T) disappears since T is real). This says that, for each μ, the sum of (G_M+T)_μν^2 over ν is much smaller than a naive estimate would predict. Actually we show something better, which is crucial for our proof of Lemma <ref>: This sum is also smaller than expected if it is taken, not over all tuples ν, but just over some of them, namely those with a fixed overlap with μ. (Recall that μ,ν denotes the overlap of the multi-indices μ and ν, i.e., the number of X's they have in common.) We call such estimates partial Ward inequalities, in contrast with the original estimates, which we call full Ward inequalities. For any μ, and any (possibly empty) exclusion set E ⊂ [N] with size (1), we have∑_ν∈𝐌(G^(E)_M+T)_μν^2 ≺ G^(E)_μμ/η,uniformly over μ∈𝐌_k. Note that full Ward inequality follows directly from partial Ward inequalities, so we omit the proof.For any k_1, k_2 ∈⌈ℓ⌉, L, t ∈ 0, min(k_1,k_2) and μ∈𝐌_k_1, let S_μ^k_2, t = {ν∈𝐌_k_2|⟨μ,ν⟩ = t}.Then for any μ∈𝐌_k_1 and any (possibly empty) exclusion set E ⊂ [N] with size (1), we have∑_ν∈ S_μ^k_2,t|(G_M^(E)+T)_μν|^2 ≺ G^(E)_μμ/η d^max{-t, ℓ-k_2},uniformly over μ∈𝐌_k_1.For notational simplicity, let S S_μ^k_2,t and Ñ N - E. Because of our convention that 𝐌_k = ∅ when the kth Hermite coefficient of f vanishes, we can have S = 0, but in this case (<ref>) is trivial, so we can and will assume S_μ^k_2,t≠∅. Consider the restrictions of T and U to this subset, denoted by T|_S_μ^k_2, t and U|_S_μ^k_2, t; these are matrices of size S × S and S ×Ñ, respectively, and T|_S_μ^k_2, t = c_k_2√(k_2! N/d^k_2)I. We will write (U^(E)|_S_μ^k_2, t)_i for the ith column of U^(E)|_S_μ^k_2, t. 
Using the resolvent identity G_M^(E) + T = TU^(E)G_N^(E)(U^(E))^∗ T from (<ref>), we can view the left-hand side of (<ref>) as∑_ν∈ S^k_2,t_μ|(G_M^(E)+T)_μν|^2= ∑_ν∈ S^k_2,t_μ(TU^(E)G_N^(E)(U^(E))^∗ T)_μν^2 = ∑_ν∈ S^k_2,t_μ∑_i,j,k,ℓ=1^N,(E) T_μ U_μ i G_ij U_ν j T_ν T_ν U_ν k (G^∗)_k ℓ U_μℓ T_μ= c_k_2^2 k_2! N/d^k_2∑_ν∈ S^k_2,t_μ∑_i,j,k,ℓ=1^N,(E) T_μ U_μ i G_ij U_ν j U_ν k (G^∗)_k ℓ U_μℓ T_μ= c_k_2^2 k_2! N/d^k_2∑_i,j,k,ℓ=1^N,(E) T_μ U_μ i G_ij( U|_S_μ^k_2, t^*U|_S_μ^k_2, t)_jk (G^∗)_kℓ (U^∗)_ℓμ T_μ= c_k_2^2 k_2! N/d^k_2(T_μ U^(E)G_N^(E) U^(E)|_S_μ^k_2, t^*U^(E)|_S_μ^k_2, t(G_N^(E))^*(U^(E))^* T_μ)_μμ= c_k_2^2 k_2! N/d^k_2((G_N^(E))^∗ (U^(E))^∗ T e_μ),( U^(E)|_S_μ^k_2, t^*U^(E)|_S_μ^k_2, t) ((G_N^(E))^∗ (U^(E)) ^∗ T e_μ)(recalling that notations like ∑_i=1^N,(E) mean summing over indices i ∈ 1, N ∖ E), where e_μ is the standard basis vector of size S × 1, so that (G_N^(E))^∗ (U^(E))^∗ T e_μ is a vector in ^Ñ. Bounding the quadratic form by the operator norm times the vector norm, then applying the (non-partial) Ward identity (<ref>) for G_N, we find∑_ν∈ S^t_μ|(G_M^(E)+T)_μν|^2≲N/d^k_2U^(E)|_S_μ^k_2, t^*U^(E)|_S_μ^k_2, t_(TU^(E)G_N^(E) (G_N^(E))^*(U^(E))^*T)_μμ= N/d^k_2U^(E)|_S_μ^k_2, t^*U^(E)|_S_μ^k_2, t_1/η(TU^(E) G_N^(E) (U^(E))^*T)_μμ= N/d^k_2U^(E)|_S_μ^k_2, t^*U^(E)|_S_μ^k_2, t_ G^(E)_μμ/η,where in the last equality we use that T and U^(E) have real entries, so that TU^(E) G_N^(E) (U^(E))^∗ T = (TU^(E)G_N^(E) (U^(E))^∗ T),and once again the identity G_M^(E)+T = TU^(E)G_N^(E)(U^(E))^∗ T from (<ref>).It remains to show thatN/d^k_2U^(E)|_S_μ^k_2, t^*U^(E)|_S_μ^k_2, t_≺ d^max{-t, ℓ-k_2}Φ.Since the distribution of the left-hand side depends only on the length of μ and not which X's it contains, the final result (<ref>) will be uniform in μ∈𝐌_k_1 as claimed. Recall that the columns ( U^(E)|_S_μ^k_2, t)_i are independent; thus, since AA^∗_ = A^∗ A_, N/d^k_2U^(E)|_S_μ^k_2, t^*U^(E)|_S_μ^k_2, t_ = N/d^k_2U^(E)|_S_μ^k_2, tU^(E)|_S_μ^k_2, t^*_= ∑_i=1^N,(E)N/d^k_2(U|_S_μ^k_2, t)_i(U|_S_μ^k_2, t)_i^*_∑_i=1^N,(E) Z_i _reduces the problem to bounding the operator norm a sum of independent and identically distributed matrices Z = ∑_i=1^N,(E) Z_i, each Z_i ∈^S × S, which we can do with the matrix Bernstein inequality (see, e.g., Theorem 5.4.1 of <cit.>). Notice that matrices Z_i are not centered and do not have bounded norm; thus we need to modify them to use this inequality. To simplify the notation, from now on, we assume without loss of generality that E =N-E+1, …, N, so that ∑_i=1^N,(E) = ∑_i=1^Ñ. We additionally abuse notation by writing N instead of Ñ; this is fine in asymptotics since E = (1). To show (<ref>), it suffices to show that for any ϵ, D > 0 there is N_0(ϵ, D) ∈_>0 such that for any N ≥ N_0(ϵ, D) the following inequality holds:(∑_i=1^N Z_i _ > d^ϵΦ) ≤ N^-D.Fix ϵ > 0 and D > 0. Pick δ∈ (0, ϵ/2). Define the centered matricesZ̃_i = Z_i 1(Z_i≤ d^-t+δ) - [Z_i 1(Z_i≤ d^-t+δ)].Notice that Z_i = N/d^k_2(U|_S_μ^k_2, t)_i_2^2 = N/d^k_2∑_α∈ S_μ^k_2, t U_α i^2 ≺|S_μ^k_2, t| N/d^k_21/N≺ d^k_2 - td^-k_2 = d^-t,and this bound is uniform in i ∈ [N], as the Z_i are independent and identically distributed matrices. Then there exists Ñ_0 such that for any N ≥Ñ_0 we have (Z_i > d^-t+δ) ≤ N^-D-2for all i∈ [N]. Consider the event Ω(N, ϵ, D) = {∀ i∈ [N] Z_i≤ d^-t+δ}.We will use it later to upgrade the result of matrix Bernstein inequality from Z̃_i to Z_i; for now we just store that, from (<ref>) and the union bound, we have(Ω(N, ϵ, D)) ≥ 1 - N^-D-1. 
The matrix Bernstein inequality for the real-symmetric S × S matrices (Z̃_i)_i=1^N reads(∑_i=1^N Z̃_i_ > t ) ≤ 2Sexp( - t^2/2/σ^2 + Kt/3)for any t > 0, where max_i Z̃_i_≤ K almost surely and ∑_i=1^N [Z̃_i^2] _≤σ^2. Thus we need to give deterministic upper bounds for Z̃_i and ∑_i=1^N [Z̃_i^2]. First, we bound their norm asZ̃_i≤ d^-t+δ + [Z_i 1(Z_i≤ d^-t+δ)],and since Z_i is positive semidefinite we have[Z_i 1(Z_i≤ d^-t+δ)] = max_v = 1[v^∗ Z_i v 1(Z_i≤ d^-t+δ)]≤max_v = 1[v^∗ Z_i v]=Z_i.Notice that Z_i has the form Z_i = N/d^k_2 vv^∗ for a vector v ∈^S with uncorrelated coordinates v_ν = U_ν i; thus Z_i = 1/N·N/d^k_2 = 1/d^k_2. In the following we allow C to change from line to line.Since t ≤ k_2 and δ is small, we thus haveZ_i≤ Cd^-t+δ.Next, we have[Z̃_i^2] = [Z_i^2 1(Z_i≤ d^-t+δ)] - ([Z_i 1(Z_i≤ d^-t+δ)])^2≤[Z_i^2 1(Z_i≤ d^-t+δ)] + [Z_i 1(Z_i≤ d^-t+δ)])^2.The second term on the right-hand side has been studied above. We only need to provide an estimate for the first term. Since Z_i = N/d^k_2 vv^∗, we also have Z_i^2 = Z_iZ_i; using this identity, we find[Z_i^2 1(Z_i≤ d^-t+δ)] = max_v = 1[v^∗ Z_i v Z_i1(Z_i≤ d^-t+δ)] ≤ d^-t+δmax_v = 1[ v^∗ Z_i v ] = d^-t+δ Z_i≤ Cd^-t-k_2+δ.Since the Z̃_i are i.i.d. over i, we thus have∑_i=1^N [Z̃_i^2] = N[Z̃_i^2]≤ Cd^δ (Nd^-t-k_2 + Nd^-2k_2) ≤ Cd^δ d^ℓ-t-k_2≤ Cd^δ d^2max(-t,ℓ-k_2)≤ Cd^δΦ^2,since t < k_2 and since a+b ≤ 2max(a,b) for real a, b.Now we can plug these estimates into the matrix Bernstein inequality (<ref>), choosing t = d^ϵ/2Φ, K = Cd^-t+δ, and σ^2 = Cd^δΦ^2 to obtain(∑_i=1^N Z̃_i_ > d^ϵ/2Φ) ≤ 2S exp( -d^ϵΦ^2/2/Cd^δΦ^2 + Cd^-t+δd^ϵ/2Φ).Since Φ = d^max{-t,ℓ-k_2}, we have Cd^δΦ^2 + Cd^-t+δd^ϵ/2Φ≤ CΦ^2d^δ+ϵ/2. Thus the argument of the exponential is upper-bounded by -Cd^ϵ/2-δ; since S grows polynomially in d and δ < ϵ/2, this implies (∑_i=1^N Z̃_i_ > d^ϵ/2Φ) ≤ N^-D-1for sufficiently large N.To upgrade this to an estimate on ∑_i=1^N Z_i_, we introduce the event Ω̃(N, ϵ, D) = Ω(N, ϵ, D) ∩{∑_i=1^N Z̃_i _≤ d^ϵ/2Φ}.Combining (<ref>) with (<ref>) gives us(Ω̃(N,ϵ,D)^c) ≤ N^-Dfor large enough N. Furthermore, on this event, we have Z̃_i = Z_i - [Z_i 1(Z_i≤ d^-t+δ)], so that∑_i=1^N Z_i_ = ∑_i=1^N Z̃_i + ∑_i=1^N [Z_i 1(Z_i≤ d^-t+δ)]_≤∑_i=1^N Z̃_i_ + ∑_i=1^N [Z_i 1(Z_i≤ d^-t+δ)]_≤ d^ϵ/2Φ + CNd^-k_2≤ d^ϵΦ.This shows that(∑_i=1^N Z_i _ > d^ϵΦ) ≤(Ω̃(N,ϵ,D)^c) ≤ N^-D,which finishes the proof.§ BASIC TOOLS: PRELIMINARY BOUNDSThe goal of this section is to prove several preliminary bounds on various quantities that we will use later. All of the estimates used outside of this section are summarized in the statements of Lemmas <ref> and <ref>.Let B_j be the jth column of B with the (j,j)th entry (which is zero) removed. Then max_j=1^N B_j≺ 1.Since the distribution of B_j does not depend on j, it suffices to prove B_j≺ 1. We also prefer to split(U^∗ T U)_ij = ∑_μ U_μ i T_μ U_μ j = ∑_k=⌈ℓ⌉^L ∑_μ∈𝐌_k U_μ i T_μ U_μ j∑_k=⌈ℓ⌉^L V^(k)_ij,which suggests that we decompose B_j into a sum of vectors B_j = ∑_k = ⌈ℓ⌉^L V_j^(k),where V_j^(k) is the vector whose ith entry is V_ij^(k). ThenB_j≤∑_k = ⌈ℓ⌉^L V_j^(k).Since this sum has a constant number of terms, it suffices to show that V_j^(k)≺ 1 for any k ∈⌈ℓ⌉, L. Now we compute high moments of V_j^(k). For p∈,[V_j^(k)^2p]= ∑_i_1,…,i_p^(j)[ ∏_a=1^p(V_i_aj^(k))^2 ] = T_k^2p∑_i_1,…,i_p^(j)∑_μ_1,ν_1,…, μ_p, ν_p∈𝐌_k[∏_a=1^p U_μ_a i_a U_ν_a i_a] [∏_a=1^p U_μ_a j U_ν_a j]where the last expectations factor since all the i_a's are distinct from j. 
The expectations vanish unless the X's pair, so ∑_μ_1, ν_1, …, μ_p ν_p ∈𝐌_k has order d^kp nonzero terms instead of the naive d^2kp; each product of expectations contributes order N^-2p; we have T_k^2p = C_2p N^p d^-kp, and ∑_i_1,…,i_p^(j) contributes N^p, so[V_j^(k)^2p] ≤ C_2p,which concludes the proof. For any fixed τ > 0 and any fixed z ∈𝐃_τ, we havemin_j=1^N G_jj(z)≻ 1,min_j=1^N (G_jj(z)) ≻ 1,and therefore1 ≺s(z)≺ 1, 1 ≺(s(z)) ≺ 1.Since G_jj is a diagonal element of the resolvent of B, which has B_jj = 0, the Schur complement formula gives usG_jj = 1/-z-B_j,(B^(j)-z)^-1B_jwhere B_j is the jth column of B except for the (j,j) element, and B^(j) is the corresponding minor. Since z is order one, Lemma <ref> givesB_j,(B^(j)-z)^-1B_j≤(B^(j)-z)^-1_B_j^2 ≤1/ηB_j^2 ≺ 1which proves (<ref>). For (<ref>), we note that, if (u_i)_i=1^N-1 is a (real) orthonormal eigenbasis for B^(j) with corresponding eigenvalues (λ_i)_i=1^N-1, then (B_j,(B^(j)-z)^-1B_j) = ( ∑_i=1^N-11/λ_i-zB_j,u_i^2 ) = ∑_i=1^N-1η/λ_i-z^2B_j,u_i^2 ≥ 0,so thatG_jj = (-z-B_j,(B^(j)-z)^-1B_j)/-z-B_j,(B^(j)-z)^-1B_j^2 = (z)+(B_j,(B^(j)-z)^-1B_j)/-z-B_j,(B^(j)-z)^-1B_j^2≥(z)/-z-B_j,(B^(j)-z)^-1B_j^2 = (z)G_jj^2.Now we put a minimum over j, apply (<ref>), and use (z) ≻ 1 to obtain (<ref>).The lower bound in (<ref>) follows immediately from (<ref>). On the event {(s(z)) > 0}, we have s(z)≥(s(z)); this implies 1 ≺s(z). Similarly, the upper bound in (<ref>) follows from the upper bound in (<ref>), which is immediate since s(z)≤1/η deterministically as the trace of a resolvent. For any fixed τ > 0 and any fixed z ∈𝐃_τ, we havemax_k=⌈ℓ⌉^L max_μ∈𝐌_kd^k/N(G_M(z)+T)_μμ ≺ 1,max_k=⌈ℓ⌉^L max_i=1^N max_μ∈𝐌_k d^k (G_M^(i)(z)-G_M(z))_μμ ≺ 1,max_k=⌈ℓ⌉^L max_i=1^N max_μ∈𝐌_kd^k/N(G^(i)_M(z)+T)_μμ ≺ 1.In order to prove this lemma, we need the control parametersΛ_c(z)= max_k=⌈ℓ⌉^L max_ν∈𝐌_kmax_j = 1^N d^k/2/√(N)|(G_M(z))_ν j|,Λ_d(z)= max_k = ⌈ℓ⌉^L max_μ∈𝐌_kd^k/N(G_M(z)+T)_μμ,as well as the following two lemmas: For any fixed τ > 0 and any fixed z ∈𝐃_τ, we havemax_k=⌈ℓ⌉^L max_i=1^N max_μ∈𝐌_k d^k (G_M^(i)-G_M)_μμ≺ NΛ_c(z)^2.For any fixed τ > 0 and any fixed z ∈𝐃_τ, we haveΛ_c(z) ≺√(Λ_d(z)+Λ_c(z)^2+1/N),Λ_d(z) ≺√(Λ_d(z)+Λ_c(z)^2+1).From here the proof is straightforward:From the definition of stochastic domination, it is an elementary exercise to verify, for X_N and Y_N positive and real, that X_N ≺X_N/N + Y_N implies X_N ≺ Y_N; thus one upgrades (<ref>) intoΛ_c(z) ≺√(Λ_d(z)+1/N).Plugging this into (<ref>), one obtains similarlyΛ_d(z) ≺√(Λ_d(z) + Λ_d(z)+1/N + 1)≺√(Λ_d(z)+1),from which it is another elementary exercise from the definition of stochastic domination to conclude Λ_d(z) ≺ 1, which is exactly (<ref>). Plugging this back into (<ref>), one obtains Λ_c(z) ≺1/√(N).Combining this with Lemma <ref> yields (<ref>). Finally, (<ref>) is immediate from combining (<ref>) and (<ref>). The proofs of Lemmas <ref> and <ref> regularly use the following short lemma. The proof is a short exercise in the definition of stochastic domination, so we omit it. Suppose we have random variables X_N,k,i,μ depending on k ∈⌈ℓ⌉, L, on i ∈ 1, N, and on μ∈𝐌_k, such that for some Y_N we haveX_N,k,i,μ≺ Y_Nfor each k, i, and μ. If, for each fixed k, the distribution of X_N,k,i,μ depends neither on i nor on μ∈𝐌_k, then max_k=⌈ℓ⌉^L max_i=1^N max_μ∈𝐌_kX_N,k,i,μ≺ Y_N. Combining the resolvent identity G^(i)_μμ = G_μμ - G_μ iG_i μ/G_ii from (<ref>) with min_j=1^N G_jj(z)≻ 1 from (<ref>), we obtain d^k (G_M^(i)-G_M)_μμ≺ NΛ_c(z)^2for each k, i, and μ. 
Applying Lemma <ref> completes the proof.First, we claim that for each μ and each i we have∑_ν∈𝐌 U_ν i G^(i)_νμ≺1/√(N)√(∑_ν∈𝐌G^(i)_νμ^2).Indeed, by summing over k ∈⌈ℓ⌉, L, to show (<ref>) it suffices to show∑_ν∈𝐌_k U_ν i G^(i)_νμ≺1/√(N)√(∑_ν∈𝐌_kG^(i)_νμ^2)for each k; we write this in terms of the X's, cancelling the factor 1/√(N), as∑_a_1 < ⋯ < a_k^d X_a_1i⋯ X_a_ki G^(i)_νμ≺√(∑_ν∈𝐌_kG^(i)_νμ^2),where ν = (a_1, …, a_k). But estimates of this form are essentially standard, and are typically called “large deviations bounds” in the local-law literature. Simple ones take the form ∑_a_1 ≠ a_2 X_a_1 X_a_2 b_a_1a_2≺ (∑_a_1 ≠ a_2b_a_1a_2^2)^1/2 (see, e.g., <cit.>, which is based on <cit.>), where the X_a's are i.i.d. centered random variables with unit variance, the b_a_1a_2 are deterministic, and the result is crucially uniform in b_a_1a_2. In our case, the sum is over k indices rather than two, but this generalization is routine, as already noted in <cit.> and <cit.>. Furthermore, in our case the role of b_a_1a_2 is played instead by G^(i)_μν. These resolvent entries are not deterministic, but they are independent of X_i, so we can condition on them; since the result is uniform in deterministic b_a_1a_2, we can safely integrate over the randomness in G^(i), obtaining (<ref>) and thus (<ref>).Now fix k and μ∈𝐌_k. On the one hand, if we start with the resolvent identity G_iμ = - G_ii∑_ν∈𝐌 U_ν i G_νμ^(i) from (<ref>) and use the estimates max_i G_ii≤ 1/η≺ 1, which is trivial since the G_ii are the diagonal elements of a resolvent, and (<ref>), we obtainmax_i=1^N G_iμ≺max_i=1^N 1/√(N)√(∑_ν∈𝐌G_νμ^(i)^2).On the other hand, if we start with the resolvent identity (G_M+T)_μμ = -T_μ∑_i=1^N G_ii∑_ν∈𝐌 U_μ i U_ν i G_νμ^(i), from (<ref>), and use U_μ j≺ 1/√(N) as well as (<ref>) (to which we can add max_i=1^N on both sides by Lemma <ref>), we obtain(G_M+T)_μμ≺√(N/d^k)max_i=1^N √(∑_ν∈𝐌G^(i)_νμ^2).Assume momentarily the estimated^k/N∑_ν∈𝐌G_νμ^(i)^2 ≺Λ_e(z) + Λ_c(z)^2 + 1.Lemma <ref> allows us to add max_k=⌈ℓ⌉^L max_i=1^N max_μ∈𝐌_k to the left-hand side for free. Combining this with (<ref>) yields (<ref>); combining it with (<ref>) instead yields (<ref>). Thus it remains only to prove (<ref>). We do this using the full Ward inequality, Lemma <ref>: Since T is diagonal and a+b^2 ≤ 2a^2 + 2b^2 ≺a^2 + b^2, we find∑_ν∈𝐌G_νμ^(i)^2 = ∑_ν^(μ)(G_M^(i)+T)_νμ^2 + (G_M^(i)+T)_μμ -T_μ^2 ≺∑_ν∈𝐌(G^(i)+T)_νμ^2 + T_μ^2 ≺ G_μμ^(i)/η + N/d^k.Since T is real, we haveG_μμ^(i)/η≺ G_μμ^(i) =(G_M^(i)-G_M)_μμ +(G_M+T)_μμ≺N/d^k(Λ_e(z) + Λ_c(z)^2),which completes the proof of (<ref>), and thus of the lemma. § SELF-CONSISTENT EQUATIONS I: PROOF OF PROPOSITION <REF>In this section, we prove Proposition <ref>. The bulk of the proof is Lemma <ref>, which says roughly that, for each i, we have ∑_μ, ν U_μ i (G^(i)_M+T)_μν U_ν i≈ϕs. This should be thought of as a kind of concentration result: Since [U_μ i U_ν i] = δ_μν/N and G_M^(i)+T is independent of the X_i's, the partial expectation of ∑_μ, ν U_μ i (G^(i)+T)_μν U_ν i over just the X_i's is 1/N(G^(i)_M+T); and if one replaces G^(i)_M with G_M in this expression, one gets exactly ϕs. Lemma <ref> itself relies fundamentally on the partial Ward inequalities from Section <ref>.For any fixed τ > 0, any fixed z ∈𝐃_τ, and any i ∈ [N], we have∑_μ,ν U_μ i(G_M^(i)(z)+T)_μν U_ν i - ϕs(z)≺1/d^1/2min(1,ℓ). 
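As a quick numerical illustration of the concentration heuristic just described, and entirely separate from the argument, one can replace G_M^(i)+T by any fixed matrix and check that a quadratic form in an independent vector with i.i.d. centered entries of variance 1/N is close to the normalized trace. The sketch below does exactly this with a generic stand-in matrix, the resolvent of an unrelated Wigner-type matrix; none of the specific objects of the paper are constructed.

# Toy illustration of the quadratic-form concentration heuristic: for u with
# i.i.d. centered entries of variance 1/N, independent of a fixed matrix Mmat,
# the quadratic form u^T Mmat u is close to (1/N) tr Mmat.  Mmat is a generic
# stand-in, not the matrix G_M^(i)+T of the paper.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
W = rng.standard_normal((N, N))
W = (W + W.T) / np.sqrt(2 * N)                  # Wigner-type matrix
z = 0.3 + 0.2j
Mmat = np.linalg.inv(W - z * np.eye(N))         # a fixed "environment" matrix

u = rng.standard_normal(N) / np.sqrt(N)         # i.i.d. entries, variance 1/N
quad = u @ Mmat @ u
trace = np.trace(Mmat) / N
print(abs(quad - trace))                        # typically of order N^{-1/2}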
From the Schur complement formula (<ref>) and Lemma <ref>, we haveG_ii^-1 =-z - ∑_μ, ν U_μ i(G_M^(i)+T)_μν U_ν i = -z-ϕs̃(z) + _≺(1/d^1/2min(1,ℓ)).Multiplying both sides by G_ii and using the deterministic bound G_ii≤ 1/η (so that G_ii_≺(d^1/2min(1,ℓ)) = _≺(d^1/2min(1,ℓ))), we find1 = (-z - ϕs(z))G_ii + _≺(1/d^1/2min(1,ℓ)).The error term is uniform in i, since all the variables are exchangeable in i. Thus we can average both sides over i and rearrange to obtain the result.Since ϕs = 1/N(G_M + T) = 1/N∑_μ (G_M+T)_μμ, it suffices to show the following three bounds:E_1∑_μ≠ν U_μ i(G^(i)+T)_μν U_ν i = ∑_μ≠ν U_μ iG^(i)_μν U_ν i≺ d^-1/2min(1,ℓ),E_2∑_μ( U_μ i^2 - 1/N) (G^(i)+T)_μμ≺1/√(d),E_31/N ((G^(i)_M) - (G_M)) ≺1/d^ℓ.Notice that the min only appears in the estimate of E_1, and that the estimate on E_3 is much better than needed. First we consider the E_1 term, which is the most complicated. It is convenient to split the sum over all μ≠ν into terms which fix the length of (number of X's contained in) μ, fix the length of ν, and fix their overlap, by definingG^(k_1,k_2,s)(μ, ν) : μ≠ν, μ∈𝐌_k_1, ν∈𝐌_k_2, μ,ν = sand E_1,k_1,k_2,s∑_(μ, ν) ∈G^(k_1,k_2,s) U_μ iG^(i)_μν U_ν i = α_k_1,k_2,s/N∑_a_1, …,a_k_1b_s+1, …, b_k_2^d,∗ X_a_1 i^2 … X_a_s i^2 · X_a_s+1 i… X_a_k_1 i G^(i)_μν X_b_s+1 i… X_b_k_2 i,where α_k_1,k_2,s = 1/s!(k_1-s)!(k_2-s)! is a combinatorial factor accounting for the fact that the indices in μ and ν are not only distinct but also ordered: We can reconstruct μ and ν from (a_1,…,a_k_1) and (a_1,…,a_s,b_s+1,…,b_k_2) just by ordering, but given μ and ν with s shared indices, there are s! ways to label the shared indices as (a_1,…,a_s), (k_1-s)! ways to label the remaining μ indices as (a_s+1,…,a_k_1), and (k_2-s)! ways to label the remaining ν indices as (b_s+1,…,b_k_2). Thus α_k_1,k_2,s≺ 1, which is shortly how we will absorb it. For fixed k_1 and k_2, the possible s values range from 0 up to min(k_1,k_2), unless k_1 = k_2 = k, in which case the range is s ∈ 0, k-1, since if μ,ν equals the common length of μ and ν, then μ = ν, which is forbidden in the sum.Notice E_1 =∑_k_1,k_2,sE_1,k_1,k_2,s; since (k_1,k_2,s) takes values in a finite set, we can estimate each E_1,k_1,k_2,s separately. Now for each {a_1,…,a_s} all distinct we writeS_a_1,…, a_s = ∑_a_s+1,…,a_k_1,b_s+1,…,b_k_2=1^d,∗ X_a_s+1i⋯ X_a_k_1i G_μν^(i) X_b_s+1i⋯ X_b_k_2i,where for each a_s+1,…,a_k_1, b_s+1,…,b_k_2, the μ and ν inside the sum denote the (reordered as necessary) tuples (a_1,…,a_k_1) and (a_1,…,a_s,b_s+1,…,b_k_2), respectively. (Recall that our definition of μ and ν involves strict ordering, so this is unambiguous.) Define G^(k_1,k_2,s)_a_1,…,a_s to be the set of all pairs (μ,ν) indicated by this sum (i.e., the set of all pairs (μ,ν) ∈G^(k_1,k_2,s) for which the specific overlapping indices are {a_1, …, a_s}), and notice that these partition:_a_1 < ⋯ < a_sG^(k_1,k_2,s)_a_1,…,a_s = G^(k_1,k_2,s).We claim (absorbing another order-one combinatorial factor into ≺) that S_a_1,…,a_s^2 ≺∑_μ, ν∈G^(k_1,k_2,s)_a_1,…,a_sG_μν^(i)^2,uniformly in {a_1,…,a_s}. Indeed, this is simply another standard large-deviations bound, as discussed in the proof of (<ref>) above.At the same time, since we assumed all finite moments, one can easily see X_ai^4 ≺ 1, uniformly in a (and i); hence X_a_1i^4 ⋯ X_a_si^4 ≺ 1, uniformly in {a_1, …, a_s}; henceX_a_1i^4 ⋯ X_a_si^4 S_a_1,…,a_s^2 ≺∑_μ, ν∈G^(k_1,k_2,s)_a_1,…,a_sG_μν^(i)^2,uniformly in {a_1, …, a_s} (we absorb another combinatorial factor into ≺). 
Combining these with (<ref>), we find∑_a_1 < ⋯ < a_s X_a_1i^4 ⋯ X_a_si^4 S_a_1,…,a_s^2≺∑_a_1 < ⋯ < a_s∑_μ, ν∈G^(k_1,k_2,s)_a_1,…,a_sG_μν^(i)^2 = ∑_(μ, ν) ∈G^(k_1,k_2,s)G_μν^(i)^2,Now we apply Cauchy-Schwarz to (<ref>) and use these estimates to obtainE_1,k_1,k_2,s = α_k_1,k_2,ss!/N∑_a_1 < ⋯ < a_s 1 · (X_a_1i^2 ⋯ X_a_si^2 S_a_1,…,a_s)≤d^s/2α_k_1,k_2,s/N( ∑_a_1 < ⋯ < a_s X_a_1i^4 ⋯ X_a_si^4 S_a_1,…,a_s^2 )^1/2≺d^s/2/N( ∑_(μ, ν) ∈G^(k_1,k_2,s)G_μν^(i)^2 )^1/2. Suppose without loss of generality that k_1 ≤ k_2. Then we apply the partial Ward inequality, Lemma <ref>, to each fixed μ (recall that T is diagonal, so that (G^(i)+T)_μν = G^(i)_μν whenever μ≠ν); since the result is uniform in μ, we can also sum over μ in the sense of stochastic domination to findE_1,k_1,k_2,s≺d^s/2/N(∑_μ∈𝐌_k_1 G^(i)_μμ/η d^max(-s,ℓ-k_2))^1/2.Recall that η is order one, and that (<ref>) yields G^(i)_μμ =(G^(i)+T)_μμ≤(G^(i)+T)_μμ≺N/d^k_1.Notice also that 𝐌_k_1≤ d^k_1, and that s ≤ k_2-1 (indeed, either k_1 < k_2, in which case s ≤ k_1 < k_2, or k_1 = k_2, in which case s ≤ k_2 - 1 because of the μ≠ν restriction explained above); thusE_1,k_1,k_2,s≺d^s/2/N (Nd^max(-s,ℓ-k_2))^1/2 = 1/√(N) d^1/2max(0,s+ℓ-k_2)≤1/√(N) d^1/2max(0,ℓ-1)≺ d^1/2max(-ℓ,-1),which finishes the proof that E_1≺ d^-1/2min(1,ℓ).Next we estimate E_2. Again it is convenient to split the sum over all μ's into finitely many partial sumsE_2,k = ∑_μ∈𝐌_k(U_μ i^2 - 1/N)(G^(i)+T)_μμand show E_2,k≺1/√(d)for each k. Fix ϵ and D; since G^(i) is independent of U_i, we have(E_2,k > d^ϵ-1/2)≤_G^(i)[ _U_i[1{E_2,k > d^ϵ-1/2}] 1{max_μ∈𝐌_k(G^(i)+T)_μμ≤N/d^k d^ϵ/2}]+ ( max_μ∈𝐌_k(G^(i)+T)_μμ≥N/d^k d^ϵ/2).Applying Lemma <ref> below and (<ref>)to the first and second terms on the right-hand side, respectively, we find that each is at most some C_ϵ,D d^-D. This gives E_2≺ 1/√(d) as claimed.Finally we estimate E_3, again splitting E_3 = ∑_k=⌈ℓ⌉^L E_3,k withE_3,k = 1/N∑_μ∈𝐌_k (G^(i)_μμ - G_μμ).From (<ref>) we haveE_3,k≤d^k/Nmax_μ∈𝐌_kG^(i)_μμ - G_μμ≺1/N,which completes the proof. Fix k, and fix some deterministic sequence (b_μ = b^(d)_μ)_μ∈𝐌_k of complex numbers withsup_μ∈𝐌_kb_μ≤α_dfor some sequence (α_d)_d=1^∞ (recall that the set 𝐌_k depends on d). Then∑_μ∈𝐌_k( U_μ i^2 - 1/N) b_μ≺d^kα_d/N√(d)uniformly in (b_μ) subject to (<ref>). The proof goes by high moments: For p ∈, we have[ ∑_μ∈𝐌_k( U_μ i^2 - 1/N) b_μ^2p]= ∑_μ_1,μ'_1,μ_2,μ'_2,…,μ_p,μ'_p ∈𝐌_k[ ∏_j=1^p (U_μ_ji^2 - 1/N) ( U_μ'_j i^2 - 1/N) ] ∏_j=1^p b_μ_jb_μ'_j≤ (α_d)^2p∑_μ_1,μ'_1,μ_2,μ'_2,…,μ_p,μ'_p ∈𝐌_k[ ∏_j=1^p (U_μ_ji^2 - 1/N) ( U_μ'_j i^2 - 1/N) ]= (α_d)^2p∑_μ_1,…,μ_2p∈𝐌_k[ ∏_j=1^2p(U_μ_ji^2 - 1/N)]where the last equality is just a convenient relabeling (after we stop distinguishing the complex conjugates between b_μ_j and b_μ'_j, we no longer need to pair the terms μ_j and μ'_j).Given a tuple of tuples (μ_1,…,μ_2p) ∈ (𝐌_k)^2p, we say that some tuple μ_j is isolated if max_m ≠ jμ_j,μ_m = 0, i.e., if μ_j has its own set of X's, none of which appears in any other tuple μ_m. 
Consider the set G^2p = {(μ_1,…,μ_2p) ∈ (𝐌_k)^2p : No μ_j is isolated}.On the complement of this set, at least one tuple μ_j is isolated in this sense, meaning that at least one (U_μ_ji^2-1/N) is independent of everything else; since these variables have mean zero, such expectations vanish, meaning that∑_μ_1,…,μ_2p∈𝐌_k[ ∏_j=1^2p(U_μ_ji^2 - 1/N) ] = ∑_(μ_1,…,μ_2p) ∈G^2p[ ∏_j=1^2p(U_μ_ji^2 - 1/N) ].Furthermore, we claim that G^2p≤ C_2p d^2pk-p.Indeed, the total number of tuples (μ_1,…,μ_2p) ∈ (𝐌_k)^2p is at most d^2pk, because each of the 2p μ_j's includes k X's. To ensure that none is isolated, while using as many X's as possible, each μ_j should use k-1 of its own X's and have a final X which it shares with exactly one other tuple μ_j'; this pairing subtracts p off the naive count, while adding a combinatorial factor C_2p tracking which μ_j's pair. At the same time, we claim that for each p there exists C_2p with sup_μ_1,…,μ_2p∈𝐌_k[ ∏_j=1^2p(U_μ_ji^2 - 1/N) ]≤C_2p/N^2pIndeed, writing A_jU_μ_j i^2-1/N, we can use the generalized Hölder's inequality [∏_j=1^2p A_j]≤∏_j=1^2p ([A_j^2p])^1/2p, then the triangle inequality: ([A_j^2p])^1/2p = U_i^2-1/N_2p≤U_i^2_2p + 1/N ≤ C_k,p/N. Combining (<ref>), (<ref>), and (<ref>), we find[ ∑_μ∈𝐌_k( U_μ i^2 - 1/N) b_μ^2p] ≤C_2p/N^2p (α_d)^2pG^2p≤ C_2p( d^k α_d/N√(d))^2p,which suffices.§ SELF-CONSISTENT EQUATIONS II: PROOFS OF PROPOSITIONS <REF> AND <REF>The goal of this section is to prove Propositions <ref> and <ref>. The latter follows from the former fairly quickly.For any fixed τ > 0 and any fixed z ∈𝐃_τ, we have1 ≺z+ϕs(z)≺ 1, 1 ≺(z+ϕs(z)) ≺ 1.The estimate (<ref>) follows immediately from Lemma <ref>, which shows 1 ≺s(z)≺ 1, and from Proposition <ref>, which shows 1+s(z)(z+ϕs(z))≺ d^-1/2min(1,ℓ). The upper bound of (<ref>) is immediate from that of (<ref>). For the lower bound, we note that the imaginary part of s is almost surely nonnegative: Indeed, from (<ref>) we haves(z) = 1/M((TUG_N(z)U^∗ T)) = (TU( G_N(z)) U^∗ T) ≥ 0where we used that G_N(z) is a resolvent, so that its imaginary part is positive definite, as well as the general result that B^∗ A B = (A^1/2 B)^∗ (A^1/2B) is positive semidefinite if A is (square and) positive definite and B is any (possibly rectangular) matrix.For any fixed τ > 0 and any fixed z ∈𝐃_τ, we haveϕs(z) - 1/N∑_μT_μ^2/T_μ -z - ϕs(z)≺1/d^1/2min(1,ℓ).Defineℰ_μ^(1)-T_μ∑_j=1^N (G_jj + (z + ϕs(z))^-1) ∑_ν U_μ jU_ν j G^(j)_νμ,ℰ_μ^(2)T_μ (z + ϕs(z))^-1∑_j=1^N ∑_ν^(μ) U_μ jU_ν j G^(j)_νμ,ℰ_μ^(3)T_μ (z + ϕs(z))^-1∑_j=1^N(U_μ j^2-1/N) G^(j)_μμ,ℰ_μ^(4) 1/N T_μ (z + ϕs(z))^-1∑_j=1^N (G_μμ^(j) - G_μμ),so that, by the resolvent identity G_μμ + T_μ = -T_μ∑_j G_jj∑_ν U_μ j U_ν j G^(j)_νμ from (<ref>), we haveE_μ E^(1)_μ + E^(2)_μ + E^(3)_μ + E^(4)_μ = -T_μ∑_j G_jj∑_ν U_μ j U_ν j G^(j)_νμ - T_μ G_μμ/z+ϕs(z) = G_μμ + T_μ - T_μ G_μμ/z+ϕs(z)= ( 1- T_μ/z+ϕs(z)) (G_μμ + T_μ) + T_μ^2/z+ϕs(z) = (z+ϕs(z)-T_μ)(G_μμ+T_μ) + T_μ^2/z+ϕs(z).ThusG_μμ + T_μ - T_μ^2/T_μ - z - ϕs(z) = ( z+ϕs(z)/z+ϕs(z) - T_μ) E_μ,so if we defineE_k^(a)∑_μ∈𝐌_kE_μ^(a),a = 1, 2, 3, 4,then ϕs(z) - 1/N∑_μT_μ^2/T_μ -z - ϕs(z) = 1/N∑_μ( G_μμ + T_μ - T_μ^2/T_μ - z - ϕs(z)) = 1/N∑_μ( z+ϕs(z)/z+ϕs(z)-T_μE_μ) = 1/N∑_k=⌈ℓ⌉^L ∑_a=1^4 z+ϕs(z)/z+ϕs(z)-T_μE^(a)_k.Thus the problem reduces to showing1/Nz+ϕs(z)/z+ϕs(z)-T_kE^(a)_k≺1/d^1/2min(1,ℓ)for k ∈⌈ℓ⌉, L and a ∈ 1, 4. Lemma <ref> shows z+ϕs(z)≺ 1 as well asz+ϕs(z) - T_k≥(z+ϕs(z) - T_k) = (z+ϕs(z)) ≻ 1,so z+ϕs(z)/z+ϕs(z) - T_k≺ 1, and we only need show1/NE^(a)_k≺1/d^1/2min(1,ℓ)for k ∈⌈ℓ⌉, L and a ∈ 1, 4. 
In the following we will often, but not always, use the estimate T_μ (z+ϕs(z))^-1≺ 1, from (<ref>). We handle one a at a time: * (a = 1): On the one hand, from (<ref>) and (<ref>) we haveG_jj + (z+ϕs(z))^-1 = G_jj(z+ϕs(z))+1/z+ϕs(z)≺1/d^1/2min(1,ℓ).Since the distribution of the left-hand side does not depend on j, we can put a maximum over j on the left-hand side. On the other hand, we claimT_k ∑_μ∈𝐌_k∑_ν U_μ j U_ν j G^(j)_νμ≺ 1.Assume (<ref>) momentarily. Since we can put max_j=1^N on the left-hand side for the same reasons as above, we use it along with (<ref>) to find1/NE^(1)_k≤( max_j=1^N G_jj+(z+ϕs(z))^-1) ( max_j=1^N T_k ∑_μ∈𝐌_k∑_ν U_μ j U_ν j G^(j)_νμ) ≺1/d^1/2min(1,ℓ).Thus it remains only to check (<ref>). We split ∑_ν into the term ν = μ and the remainder. For the latter, we recall that (<ref>) shows that ∑_μ∈𝐌_k∑_ν∈𝐌_k', μ,ν = s U_μ j U_ν j G^(j)_νμ≺ d^-1/2min(1,ℓ) for each k' and s ≤min(k,k') (except when k = k', in which case s is at most k-1). By summing this over the various values of k' and s, and using the trivial bound T_k≺ 1, we obtainT_k ∑_μ∈𝐌_k∑_ν^(μ) U_μ j U_ν j G^(j)_νμ≺∑_μ∈𝐌_k∑_ν^(μ) U_μ j U_ν j G^(j)_νμ≺1/d^1/2min(1,ℓ),which is better than claimed – this is not the main term. When ν = μ, we haveT_k ∑_μ∈𝐌_k U_μ j^2 G^(j)_μμ≺√(N/d^k)∑_μ∈𝐌_k U_μ j^2 (G^(j)+T)_μμ + N/d^k∑_μ∈𝐌_k U_μ j^2≺∑_μ∈𝐌_k U_μ j^2 (G^(j)+T)_μμ + 1,where we used the simple bound ∑_μ∈𝐌_k U_μ j^2≤ d^k max_μ∈𝐌_k U_μ j^2 ≺d^k/N. The remaining term is also straightforward: (<ref>) implies U_μ j^2 (G^(j)+T)_μμ≺1/d^k, and taking a maximum over μ finishes the proof of (<ref>).* (a = 2): In (<ref>) we showed∑_μ∈𝐌_k∑_ν^(μ) U_μ j U_ν j G^(j)_νμ≺1/d^1/2min(1,ℓ).Since the distribution of the left-hand side does not depend on j, we can also put a maximum over j on the left-hand side, and use this to obtain1/NE^(2)_k = 1/NT_k (z+ϕs(z))^-1∑_j=1^N ∑_μ∈𝐌_k∑_ν^(μ) U_μ j U_ν j G^(j)_νμ≺max_j=1^N ∑_μ∈𝐌_k∑_ν^(μ) U_μ j U_ν j G^(j)_νμ≺1/d^1/2min(1,ℓ). * (a = 3): From Lemma <ref>, we have∑_μ∈𝐌_k(U_μ j^2 - 1/N) T_μ≺1/√(d)d^k/N√(N/d^k) = 1/√(d)√(d^k/N).Since the distribution of the left-hand side does not depend on j, we can also put a maximum over j on the left-hand side, and obtain1/N√(N/d^k)∑_μ∈𝐌_k∑_j=1^N (U_μ j^2 - 1/N) T_μ≤√(N/d^k)max_j=1^N ∑_μ∈𝐌_k( U_μ j^2 - 1/N) T_μ≺1/√(d).Similarly, in (<ref>) we showed ∑_μ∈𝐌_k(U_μ j^2 - 1/N) (G^(j)_μμ + T_μ)≺1/√(d),and running the same argument about taking the maximum over j yields1/N√(N/d^k)∑_μ∈𝐌_k∑_j=1^N (U_μ j^2 - 1/N) (G^(j)_μμ + T_μ)≺1/√(d)(actually we discard the √(N/d^k) in the upper bound here). Then, splitting G^(j)_μμ = (G^(j)_μμ + T_μ) - T_μ and using these two bounds, we obtain1/NE^(3)_k = 1/NT_μ (z+ϕs(z))^-1∑_μ∈𝐌_k∑_j=1^N (U_μ j^2 - 1/N) G^(j)_μμ≺1/N√(N/d^k)∑_μ∈𝐌_k∑_j=1^N (U_μ j^2 - 1/N) G^(j)_μμ≺1/√(d),which is better than needed.* (a = 4): In (<ref>) we showed that1/N∑_μ∈𝐌_k (G^(j)_μμ - G_μμ)≺1/N.As always we can put a maximum over j, then estimate1/NE^(4)_k≤1/N^2T_μ (z+ϕs(z))^-1∑_j ∑_μ∈𝐌_k (G^(j)_μμ - G_μμ)≺1/Nmax_j=1^N ∑_μ∈𝐌_k (G^(j)_μμ - G_μμ)≺1/Nwhich is much better than needed.In the following result, recall that ℓ_c is the least integer strictly bigger than ℓ.For any fixed τ > 0 and any fixed z ∈𝐃_τ, we have1/N∑_μT_μ^2/T_μ -z - ϕs(z) - γ_a/γ_b - z - ϕs(z) + γ_c/z+ϕs(z)≺1/d^(ℓ_c-ℓ)/2 if ℓ is not an integer, 1/d^(ℓ_c-ℓ)/2 + N/d^ℓ - κ if ℓ is an integer.From the definition T_k = c_k √(k!)√(N/d^k), we find1/N∑_μT_μ^2/T_μ -z - ϕs(z) = 1/N∑_k=⌈ℓ⌉^L ∑_μ∈𝐌_kT_k^2/T_k -z - ϕs(z) = ∑_k=⌈ℓ⌉^L c_k^2(k!) 
M_k/d^k1/T_k - z - ϕs(z).Furthermore, since (k!)M_k counts the number of tuples (a_1,…,a_k) ∈ [d]^k which are all distinct, we have (k!)M_k/d^k = 1 + (1/d) when c_k ≠ 0 (recall M_k = 0 otherwise); since we showed in (<ref>) that T_k - z - ϕs(z)≻ 1, this gives1/N∑_μT_μ^2/T_μ -z - ϕs(z) - ∑_k=⌈ℓ⌉^L c_k^2/T_k - z - ϕs(z)≺1/d.For k > ℓ with strict inequality (i.e., k ≥ℓ_c), we have T_k ≺1/d^k-ℓ/2. Since T_k - z - ϕs(z)≻ 1 (from (<ref>)) and z+ϕs(z)≻ 1 (from (<ref>)), the definition γ_c = ∑_k=ℓ_c^L c_k^2 (recall that ℓ_c is the smallest integer strictly bigger than ℓ) gives∑_k=ℓ_c^L c_k^2/T_k - z - ϕs(z) + γ_c/z+ϕs(z) = ∑_k=ℓ_c^L ( c_k^2/T_k - z - ϕs(z) + c_k^2/z+ϕs(z)) ≺∑_k=ℓ_c^L T_k≺ d^-(ℓ_c-ℓ)/2.If ℓ is not an integer, then ⌈ℓ⌉ = ℓ_c and the proof is complete. Otherwise, the remaining term is k = ℓ, for which we have T_ℓ - γ_b = c_ℓ√(ℓ!) (√(N/d^ℓ) - √(κ)) = ( N/d^ℓ - κ)which is (1) by the assumption (<ref>), and thus (since γ_b - z- ϕs(z)≻ 1 by the same argument as in (<ref>))c_ℓ^2/T_ℓ - z - ϕs(z) - γ_a/γ_b - z - ϕs(z)≺T_ℓ - γ_b≺N/d^ℓ - κ,which completes the proof.This follows immediately from Proposition <ref> and Lemma <ref>, simply by noting thatq_ℓ = min{ℓ_c-ℓ/2, min(1,ℓ)/2}. This is just an exercise in showing that the stochastic domination bound in (<ref>) interacts nicely with the arithmetic. We have1/s(z) + z + γ_a s(z)/1+γ_b s(z) + γ_c s(z) ≤1/s(z)+z+ϕs(z) + γ_a 1/γ_b + 1/s(z) - 1/γ_b - z - ϕs(z) + γ_c s(z) + 1/z+ϕs(z) +-ϕs(z) + γ_a/γ_b - z - ϕs(z) - γ_c/z+ϕs(z)By Proposition <ref>, the last term on the right-hand side is stochastically dominated by d^-q_ℓ, plus N/d^ℓ-κ in the case that ℓ is an integer. Now we make some simple estimates before bounding the first three terms, frequently using that the quantities s(z), (s(z)), z+ϕs(z), and (z+ϕs(z)) are stochastically dominated above and below by 1, from Lemmas <ref> and <ref>, respectively. For example, since γ_b is real, these give usγ_b + 1/s(z)≥(1/s(z))≻ 1andγ_b-z-ϕs(z)≥(z+ϕs(z))≻ 1,so that1/γ_b + 1/s(z) - 1/γ_b - z - ϕs(z)≤1/s(z)+z+ϕs(z)/γ_b + 1/s(z)γ_b - z - ϕs(z)≺1/s(z)+z+ϕs(z).Since 1/s(z) + z + ϕs(z)≺1+s(z)(z+ϕs(z)) and s(z) + 1/z+ϕs(z)≺1+s(z)(z+ϕs(z)), we bound the first three terms on the right-hand side of (<ref>) by (1+γ_a + γ_c) 1+s(z)(z+ϕs(z))≺1/d^1/2min(1,ℓ)≤1/d^q_ℓ,where we applied Proposition <ref> in the penultimate step, completing the proof of (<ref>). § ERROR ANALYSISThe goal of this section is to prove Proposition <ref>, which replaces the given matrices A and A with a friendlier matrix B which has the same global spectral behavior. We start by importing the following estimate of <cit.>. <cit.> If H_1, H_2 are N × N Hermitian matrices with Stieltjes transforms s_1, s_2, then s_1(z)-s_2(z) ≤H_1-H_2_/√(N)η^2,s_1(z)-s_2(z) ≤C (H_1-H_2)/Nη,where C is an absolute constant. We will apply this to compare the given matrices A and A with the matrix B defined in (<ref>). We will need the intermediate error matrices B^ and B^, whose definitions we give below, along with recalling the definitions of A, A, and B for the reader's convenience. We recall that, throughout, f is a finite-degree polynomial. 
All matrices are real-symmetric, N × N, and defined entrywise: A_ij = δ_i ≠ j/√(N) f( X_i,X_j/√(d))A_ij = δ_i ≠ j/√(N) f( X_i,X_j/√(d)√(d)/X_i√(d)/X_j)if X_i≠ 0 ≠X_j, 0otherwise, (B^)_ij = δ_i ≠ j/√(N)∑_k=0^L (d/X_iX_j)^k c_k/d^k/2√(k!)∑_a_1,…,a_k=1^d,∗ X_a_1i… X_a_ki X_a_1j… X_a_kj if X_i≠ 0 ≠X_j, 0otherwise, (B^)_ij = δ_i ≠ j/√(N)∑_k=0^L c_k/d^k/2√(k!)∑_a_1,…,a_k=1^d,∗ X_a_1i… X_a_ki X_a_1j… X_a_kj, B_ij = δ_i ≠ j/√(N)∑_k=⌈ℓ⌉^L c_k/d^k/2√(k!)∑_a_1,…,a_k=1^d,∗ X_a_1i… X_a_ki X_a_1j… X_a_kj,(By convention, if k = 0, we set ∑_a_1,…,a_k=1^d,∗ X_a_1i… X_a_ki X_a_1j… X_a_kj = 1. In this section, we find it easier to work with ∑_a_1,…,a_k=1^d,∗ than ∑_a_1 < ⋯ < a_k; this is why we write the factors √(k!) where we do.) The notation “full” means that the sum on k in the definitions of B^ and B^ includes k = 0, …, ⌈ℓ⌉ - 1, which are morally low-rank terms that do not affect the global law. We remove these terms in the step going from B^ to B.It turns out that all five of these matrices have the same global law, as we will see by considering the error matricesE_A,A A - A,E_A,B^A - B^,E_B^,B^B^ - B^,E_B^,B B^ - B.The first three of these matrices each have small Frobenius norm. The last is treated differently, since it contains a low-rank part (this can create spikes but does not affect the global law) which may not have small Frobenius norm, but after subtracting this low-rank part the remainder has small Frobenius norm. We remark that “small” means only ·_≪√(N); this kind of estimate does not suffice to compare operator norms of A, A and so on (indeed, <cit.> gives an example where A_ = _≺(1) but A_→∞ due to spike eigenvalues), but it does suffice to say that all the matrices A, A, etc. have the same global law. We have the entrywise bounds(E_A,A)_ij≺1/√(Nd), (E_A,B^)_ij≺1/√(Nd), (E_B^,B^)_ij≺1/√(Nd),and hence (immediately, since E_A,A, E_A,B^, and E_B^,B^ each have zero diagonal and equidistributed off-diagonal elements)E_A,A_≺√(N/d), E_A,B^_≺√(N/d), E_B^,B^_≺√(N/d).The matrix E_B^,B can be decomposed into a low-rank part and a small-Frobenius-norm partE_B^,B = E_ + E_,where there exists a deterministic sequence (r_N)_N=1^∞ such that (E_)≤ r_N almost surely, r_N= (d^⌈ℓ⌉ -1),E__ ≺ d^(⌈ℓ⌉ - 1)/2. Since B^ - (B+E_) = E_B^,B - E_ = E_, Lemma <ref> givess_A(z) - s_B(z) ≤s_A(z) - s_B^(z) + s_B^(z) - s_B^(z) + s_B^(z) - s_B+E_(z) + s_B+E_(z) - s_B(z)≤E_A,B^_/√(N)η^2 + E_B^,B^_/√(N)η^2 + E__/√(N)η^2 + Cr_N/Nη.By Propositions <ref> and <ref>, the right-hand side is stochastically dominated by d^(⌈ℓ⌉ - ℓ - 1)/2 = d^-r_ℓ (the worst term is the third), which tends to zero. As a very weak consequence of this, for any ϵ, D > 0 we have(s_A(z) - s_B(z)≥ϵ) ≤ C_ϵ,Dd^-Dwhich suffices for almost-sure convergence by the Borel-Cantelli lemma. The comparison of s_A to s_B is similar. In the remaining sections, we prove Propositions <ref> and <ref>.§.§ Common estimatesIn the proof, we will deal with generic i.i.d. vectors X and Y, only later selecting X = X_i and Y = X_j, and we will frequently work on the good eventG_XY = {X≠ 0 ≠Y}which has high probability, as we will see.If X ∈^d has i.i.d. entries, centered with unit variance, then X^2 - d≺√(d),X - √(d)≺ 1,d/X^2 - 11{X≠ 0}≺1/√(d),√(d)/X - 11{X≠ 0}≺1/√(d).Since X^2 - d = ∑_i=1^d (X_i^2-1) is a sum of centered independent variables with all finite moments, the estimate (<ref>) is standard; see, e.g., <cit.>. 
This gives (<ref>) which in turn gives (<ref>) and (<ref>).Let μ_X, μ_Y be centered probability measures onwith unit variance and all finite moments, and let (X_a)_a=1^d, (Y_a)_a=1^d be independent vectors with all entries i.i.d. samples from μ_X and μ_Y, respectively. SetX = √(d)/X Xif X≠ 0, 0otherwise, Y = √(d)/Y Yif Y≠ 0, 0otherwise,Then for each g ∈ we have∑_a_1,…,a_g=1^d,∗ X_a_1… X_a_gY_a_1… Y_a_g≺ d^g/2and∑_a_1,…,a_g=1^d,∗X_a_1…X_a_gY_a_1…Y_a_g≺ d^g/2Set F_d ∑_a_1,…,a_g=1^d,∗ X_a_1… X_a_g Y_a_1… Y_a_g. For p ∈, we have[(F_d)^2p] = ∑_a^(1)_1,…,a^(1)_g=1^d,∗⋯∑_a^(2p)_1,…,a^(2p)_g=1^d,∗[ ∏_b=1^2p X_a^(b)_1… X_a^(b)_g Y_a^(b)_1… Y_a^(b)_g]_ G(a^(1)_1,…,a^(1)_g,…,a^(2p)_1,…,a^(2p)_g).Since all the entries of X and Y are centered and independent, and X and Y are independent of one another, G(a^(1)_1,…,a^(1)_g,…,a^(2p)_1,…,a^(2p)_g) if any of its arguments appears only one time. This forces index coincidences, specifically of the form #{a^(1)_1,…,a^(1)_g,…,a^(2p)_1,…,a^(2p)_g}≤ pgwhen G(a^(1)_1,…,a^(1)_g,…,a^(2p)_1,…,a^(2p)_g) ≠ 0, instead of the naive 2pg. Thus#{(a^(1)_1,…,a^(1)_g,…,a^(2p)_1,…,a^(2p)_g) : G(a^(1)_1,…,a^(1)_g,…,a^(2p)_1,…,a^(2p)_g) ≠ 0}≤ C_p,g d^pg,since one can first select at most pg elements of 1, d to be the values of the set {a^(1)_1,…,a^(1)_g,…,a^(2p)_1,…,a^(2p)_g}, and once these values are selected, choosing which values to assign to which a's only adds a multiplicative factor C_p,g. Furthermore, since the X and Y entries have all finite moments, Hölder's inequality gives usG(a^(1)_1,…,a^(1)_g,…,a^(2p)_1,…,a^(2p)_g)≤ C_p,g,and thus[(F_d)^2p] ≤ C_p,gd^pgwhich suffices for (<ref>). For (<ref>), we set F_d = ∑_a_1,…,a_g=1^d,∗X_a_1…X_a_gY_a_1…Y_a_g. If X = 0 or Y = 0 then (<ref>) is trivial, so it suffices to restrict to the good event G_XY from (<ref>); on this event, we haveF_d = F_d(√(d)/X)^g(√(d)/Y)^g≺F_d≺ d^g/2,where the first inequality follows from Lemma <ref> and the second from the first half of this proof. §.§ Estimates for EA, ALet μ be a centered probability measure onwith unit variance and all finite moments, and let (X_a)_a=1^d, (Y_a)_a=1^d be independent vectors with all entries i.i.d. samples from μ.SetX = √(d)/X Xif X≠ 0, 0otherwise, Y = √(d)/Y Yif Y≠ 0, 0otherwise.Then for each k we haveH_k(∑_a=1^d X_a Y_a/√(d)) - H_k(∑_a=1^d X_a Y_a/√(d))≺1/√(d).The good set G_XY = {X≠ 0 ≠Y} from (<ref>) has much higher probability than required by stochastic domination, since if (X_a = 0) = p < 1 then we have(G_XY^c) = p^2dwhich tends to zero exponentially quickly in d. Thus we can restrict to G_XY when showing (<ref>). On this event, from Lemma <ref> we have√(d)/X = 1 + δ_X, √(d)/Y = 1 + δ_Ywith error terms δ_X, δ_Y satisfying δ_X≺ 1/√(d) and δ_Y≺1/√(d). Thus∑_a=1^d X_a Y_a/√(d) = ∑_a=1^d X_a Y_a/√(d)√(d)/X√(d)/Y = ∑_a=1^d X_a Y_a/√(d) (1+δ_X)(1+δ_Y) = ∑_a=1^d X_a Y_a/√(d) + (δ_X + δ_Y + δ_Xδ_Y) ∑_a=1^d X_a Y_a/√(d)_=:ϵ_XY,with an error term satisfying ϵ_XY≺ 1/√(d). This already completes the proof if k = 0, 1. For k > 1, we will Taylor expand; in order to do this, given η > 0 we introduce the good eventG_η = {ϵ_XY≤ d^η/4-1/2}∩{∑_a=1^d X_a Y_a≤ d^η/4(k-1)+1/2}.Since E_XY≺ 1/√(d) and ∑_a X_a Y_a≺√(d), we know that for every D > 0 there exists C_D and d_0(η,D) such that, for d ≥ d_0(η,D), (G_η^c) ≤ C_D d^-D.At the same time, since H'_k is a degree-(k-1) polynomial, there exists C_k such that for every α > 0 we have sup_x≤ d^αH'_k(x)≤ C_k d^(k-1)α. 
Thus, on G_η, a first-order Taylor expansion givesH_k(∑_a=1^d X_a Y_a/√(d)) - H_k(∑_a=1^d X_a Y_a/√(d)) = H_k(∑_a=1^d X_a Y_a/√(d) + ϵ_XY) - H_k(∑_a=1^d X_a Y_a/√(d))≤ϵ_XYsup_x≤ d^η/2(k-1)H'_k(x)≤ d^η/2-1/2and therefore( H_k(∑_a=1^d X_a Y_a/√(d)) - H_k(∑_a=1^d X_a Y_a/√(d))≥ d^η-1/2) ≤(G_η^c).Combined with (<ref>), this completes the proof.§.§ Estimates for E A, BfullLet μ be a centered probability measure onwith unit variance and all finite moments, and let (X_a)_a=1^d, (Y_a)_a=1^d be independent vectors with all entries i.i.d. samples from μ.SetX = √(d)/X Xif X≠ 0, 0otherwise, Y = √(d)/Y Yif Y≠ 0, 0otherwise.Then for each k we haveH_k(∑_a=1^d X_a Y_a/√(d)) - 1/d^k/2∑_a_1, …, a_k =1^d,∗X_a_1…X_a_kY_a_1…Y_a_k≺1/√(d).(Here H_k is the kth monic Hermite polynomial, which satisfies H_k = √(k!)h_k; we use this for this lemma only so that we do not need to carry √(k!)'s everywhere.) We restrict to the good event G_XY = {X≠ 0 ≠Y} in the same way as before. On this event, we first compute∑_a=1^d X_a^2 Y_a^2 = ∑_a=1^d ((X_a^2-1)+1)((Y_a^2-1)+1) = ∑_a=1^d (X_a^2-1)(Y_a^2-1) + d,since ∑_a (X_a^2-1) = ∑_a (Y_a^2-1) = 0, so that∑_a X_a^2 Y_a^2/d - 1 = ∑_a (X_a^2-1)(Y_a^2-1)/dΔ.We will need the estimate Δ≺ 1/√(d). This does not follow from Lemma <ref>, because the variables X_a^2-1 are not independent and centered, nor are they the normalizations of such variables. To handle this, we rewriteΔ = ∑_a (X_a^2-1)(Y_a^2-1)/d = ∑_a (X_a^2-d/X^2 + d/X^2 - 1)(Y_a^2 - d/Y^2 + d/Y^2 -1)/d= ∑_a (X_a^2 - d/X^2)(Y_a^2 - d/Y^2)/d + (d/X^2-1)(d/Y^2 - 1) = (d/X^2d/Y^2) ∑_a (X_a^2-1)(Y_a^2-1)/d + (d/X^2-1)(d/Y^2-1).Since the variables X_a^2-1 are centered with order-one variance, Lemma <ref> does apply to them; the estimate (<ref>) gives ∑_a (X_a^2-1)(Y_a^2-1)≺√(d). Combining this with several applications of Lemma <ref>, we findΔ≺1/√(d).Now we prove (<ref>) by induction on k, using the three-term recurrence formulaH_k+1(x) = x H_k(x) - k H_k-1(x).* 𝐤=0, 1: These are trivial, since the left-hand side of (<ref>) is deterministically zero.* 𝐤=2: Since H_2(x) = x^2 - 1, we haveH_2 ( ∑_a X_a Y_a/√(d)) - 1/d∑_a,b=1^d,∗X_a X_b Y_a Y_b= 1/d∑_a,b=1^d X_a X_b Y_a Y_b - 1 - 1/d∑_a,b=1^d,∗X_a X_b Y_a Y_b= ∑_a=1^d X_a^2 Y_a^2/d - 1 = Δ≺1/√(d). * 𝐤 ≥ 3: We claim that1/d∑_a_1,…,a_k=1^d,∗X_a_1^2 X_a_2…X_a_kY_a_1^2 Y_a_2…Y_a_k -∑_a_1,…,a_k-1=1^d,∗X_a_1…X_a_k-1Y_a_1…Y_a_k-1≺ d^k-2/2.Assume this claim momentarily. By induction, we haveH_k ( ∑_a X_a Y_a/√(d))= 1/d^k/2∑_a_1, …, a_k =1^d,∗X_a_1…X_a_kY_a_1…Y_a_k + E_k, H_k-1( ∑_a X_a Y_a/√(d))= 1/d^(k-1)/2∑_a_1, …, a_k-1 =1^d,∗X_a_1…X_a_k-1Y_a_1…Y_a_k-1 + E_k-1with error terms satisfying E_k≺ 1/√(d) and E_k-1≺ 1/√(d). From the three-term recurrence for Hermite polynomials, we obtainH_k+1( ∑_a X_a Y_a/√(d)) =( ∑_b X_b Y_b/√(d)) H_k ( ∑_a X_a Y_a/√(d)) - k H_k-1( ∑_a X_a Y_a/√(d)) =( ∑_b X_b Y_b/√(d))(1/d^k/2∑_a_1, …, a_k =1^d,∗X_a_1…X_a_kY_a_1…Y_a_k + E_k) - k( 1/d^(k-1)/2∑_a_1, …, a_k-1 =1^d,∗X_a_1…X_a_k-1Y_a_1…Y_a_k-1 + E_k-1).Consider the product ( ∑_b X_b Y_b/√(d))(1/d^k/2∑_a_1, …, a_k =1^d,∗X_a_1…X_a_kY_a_1…Y_a_k).When we multiply the sums together, either the index b is distinct from the indices {a_1,…,a_k}, or it is not. If b is distinct, this contributes to the main term; if b is not distinct, we end up with a term of the form ∑_a_1,…,a_k=1^d,∗X_a_1^2 X_a_2…X_a_kY_a_1^2 Y_a_2…Y_a_k. 
Since the indices {a_1,…,a_k} are themselves distinct, b can only match with one of them, and this can happen in k ways; thus the expression in (<ref>) is equal to 1/d^(k+1)/2∑_a_1,…,a_k+1=1^d,∗X_a_1…X_a_k+1Y_a_1…Y_a_k+1 + k/d^(k+1)/2∑_a_1,…,a_k=1^d,∗X_a_1^2 X_a_2…X_a_kY_a_1^2 Y_a_2…Y_a_k.Combining this with the estimate (∑_b X_b Y_b/√(d))E_k≺E_k≺ 1/√(d), from Lemma <ref>; the estimate kE_k-1≺ 1/√(d), of course; and (<ref>), we obtainH_k+1( ∑_a X_a Y_a/√(d)) = 1/d^(k+1)/2∑_a_1,…,a_k+1=1^d,∗X_a_1…X_a_k+1Y_a_1…Y_a_k+1 + _≺( 1/√(d))as desired.Now we prove (<ref>). Applying the same type of expansions as discussed just after (<ref>), we find1/d∑_a_1,…,a_k=1^d,∗X_a_1^2 X_a_2…X_a_kY_a_1^2 Y_a_2…Y_a_k -∑_a_1,…,a_k-1=1^d,∗X_a_1…X_a_k-1Y_a_1…Y_a_k-1= 1/d( ∑_b=1^d X_b^2 Y_b^2 ) ( ∑_a_2,…,a_k=1^d,∗X_a_2…X_a_kY_a_2…Y_a_k) - k-1/d∑_a_1,…,a_k-1=1^d,∗X_a_1^3 X_a_2…X_a_k-1Y_a_1^3 Y_a_2…Y_a_k-1 - ∑_a_1,…,a_k-1=1^d,∗X_a_1…X_a_k-1Y_a_1…Y_a_k-1= Δ( ∑_a_1,…,a_k-1=1^d,∗X_a_1…X_a_k-1Y_a_1…Y_a_k-1) - k-1/d∑_a_1,…,a_k-1=1^d,∗X_a_1^3 X_a_2…X_a_k-1Y_a_1^3 Y_a_2…Y_a_k-1.By the estimate (<ref>) and Lemma <ref>, the first term on the right-hand side is stochastically dominated by 1/√(d) d^k-1/2 = d^k-2/2 in absolute value. To handle the second term on the right-hand side, we will make expansions like ∑_a_1,…,a_k-1=1^d,∗X_a_1^3 X_a_2…X_a_k-1Y_a_1^3 Y_a_2…Y_a_k-1= ( ∑_b=1^d X_b^3 Y_b^3 ) ( ∑_a_2,…,a_k-1=1^d,∗X_a_2…X_a_k-1Y_a_2…Y_a_k-1) - (k-2) ∑_a_1,…,a_k-2=1^d,∗X_a_1^4 X_a_2…X_a_k-2Y_a_1^4 Y_a_2…Y_a_k-1,then expand from fourth powers into fifth powers, and so on, until the process terminates when all that remains is ∑_a=1^d X_a^p Y_a^p for some power p (and an irrelevant prefactor C_k). To track this, we introduce the following bookkeeping notation: Definingα_k,p ∑_a_1,…,a_k+2-p=1^d,∗X_a_1^p X_a_2…X_a_k+2-pY_a_1^p Y_a_2…Y_a_k+2-p β_k,p ( ∑_b=1^d X_b^p Y_b^p ) ( ∑_a_2, …, a_k+2-p=1^d,∗X_a_2…X_a_k+2-pY_a_2…Y_a_k+2-p)expansions like those above show, with x_n the falling factorial x_n = x(x-1)(x-2) ⋯ (x-n+1), α_k,p = β_k,p - (k+1-p) α_k,p+1 = β_k,p - (k+1-p)(β_k,p+1 - (k-p) α_k,p+2) = β_k,p - (k+1-p) β_k,p+1 + (k+1-p)_2 (β_k,p+2 - (k-1+p)α_k,p+3) = ( ∑_j=0^k+1 (-1)^j (k+1-p)_j β_k,p+j) + (k+1-p)_k+2α_k,p+k+2.It is easy to compute ∑_a=1^d X_a^p Y_a^p≺ d for any fixed p; in particularα_k,p+k+2 = ∑_a=1^d X_a^p+k+2Y_a^p+k+2≺ d,and combining this with (<ref>) we obtainβ_k,p≺ d^1+k+1-p/2(since the indexing in the definition of β starts with a_2). Thusα_k,p≺ d^1+k+1-p/2as long as k+1-p ≥ 0. In particular, k-1/d∑_a_1,…,a_k-1=1^d,∗X_a_1^3 X_a_2…X_a_k-1Y_a_1^3 Y_a_2…Y_a_k-1≺1/dα_k,3≺ d^k-2/2which completes the proof of (<ref>).§.§ Estimates for E Bfull,BfullLet μ be a centered probability measure onwith unit variance and all finite moments, and let (X_a)_a=1^d, (Y_a)_a=1^d be independent vectors with all entries i.i.d. samples from μ. SetX = √(d)/X Xif X≠ 0, 0otherwise, Y = √(d)/Y Yif Y≠ 0, 0otherwise.Then for each k we have1/d^k/2∑_a_1, …, a_k =1^d,∗X_a_1…X_a_kY_a_1…Y_a_k - 1/d^k/2∑_a_1, …, a_k =1^d,∗ X_a_1… X_a_k Y_a_1… Y_a_k≺1/√(d).We restrict to the good event G_XY = {X≠ 0 ≠Y} in the usual way. From (<ref>) we have1/d^k/2∑_a_1, …, a_k =1^d,∗X_a_1…X_a_kY_a_1…Y_a_k - 1/d^k/2∑_a_1, …, a_k =1^d,∗ X_a_1… X_a_k Y_a_1… Y_a_k= ( d/XY)^k - 11/d^k/2∑_a_1, …, a_k =1^d,∗ X_a_1… X_a_k Y_a_1… Y_a_k≺( d/XY)^k - 1.But d/X_iX_j - 1≤√(d)/X_i√(d)/X_j - 1 + √(d)/X_i - 1≺1/√(d)and thus( d/X_iX_j)^k - 1≤∑_j=1^k (d/X_iX_j)^j - (d/X_iX_j)^j-1 = d/X_iX_j - 1∑_j=1^k (d/X_iX_j)^j ≺d/X_iX_j - 1≺1/√(d),which finishes the proof. 
§.§ Proof of Propositions <ref> and <ref>The decomposition f(x) = ∑_k=0^L c_k h_k(x) induces decompositionsA = ∑_k=0^L c_k A_k, A = ∑_k=0^L c_k A_k, B^ = ∑_k=0^L c_k (B^)_k,B^ = ∑_k=0^L c_k (B^)_k,which in turn induce decompositionsE_A, A = ∑_k=0^L c_k E_A,A,k,E_A, B^ = ∑_k=0^L c_k E_A, B^, k,E_B^, B^ = ∑_k=0^L c_k E_B^, B^, k.Since there is a finite number of terms in the sum, it suffices to prove the desired estimates one k at a time. In the usual way, we can restrict to the good event {X_i≠ 0 ≠X_j}, in which case we can apply the preceding lemmas by choosing X = X_i and Y = X_j: The estimate |(E_A,A,k)_ij| ≺ 1/√(Nd) follows from Lemma <ref>; the estimate |(E_A,B^,k)_ij| ≺ 1/√(Nd) for follows from Lemma <ref>; the estimate |(E_B^,B^,k)_ij| ≺ 1/√(Nd) follows from Lemma <ref>. In the decomposition B^ = ∑_k=0^L c_k (B^)_k, it suffices to show that (B^)_k admits a low-rank-plus-small-Frobenius-norm decomposition for each k = 0, …, ⌈ℓ⌉ - 1. Fix such k. The decomposition merely adds in the “missing diagonal”: Dropping k from the notation, E_ = E_,k and E_ = E_,k are defined entrywise by(E_)_ij = 1/√(N)d^k/2√(k!)∑_a_1,…,a_k=1^d,∗ X_a_1i… X_a_ki X_a_1j… X_a_kj(E_)_ij = δ_ij/√(N)d^k/2√(k!)∑_a_1,…,a_k=1^d,∗ X_a_1i^2 … X_a_ki^2.The matrix E_ is a sum of d^k rank-one matrices of the form M_ij = X_a_1i… X_a_ki X_a_1j… X_a_kj (actually slightly fewer than d^k, since the sum only counts {a_1, …, a_k} distinct), hence has rank at most d^k ≤ d^⌈ℓ⌉ -1. The matrix E_ is diagonal, and its entries are bounded above by1/√(N)d^k/2∑_a_1,…,a_k=1^d X_a_1i^2 … X_a_ki^2 = 1/√(N)d^k/2X^2k≺√(d^k/N) = (d^(⌈ℓ⌉ - 1 - ℓ)/2).Hence E__≺ d^(⌈ℓ⌉ - 1)/2, completing the proof. § GENERAL NONLINEARITIES BY APPROXIMATION: PROOF OF THEOREM <REF>In this appendix, we prove Theorem <ref> about general nonlinearities, via approximation by polynomials. The structure mimics the proof of <cit.>. For the whole section, μ will be a centered probability measure onwith unit variance and all finite moments, and X_1, X_2 ∈^d will be i.i.d. random vectors each of whose entries is an i.i.d. sample from μ. Consider the inner product on functions with respect to Gaussian weight, f,g_Z ∼N(0,1)[f(Z)g(Z)],and corresponding norm f^2 = f,f. If f satisfies Assumption <ref>, it is easy to show that f^2 = σ^2.Fix some ϵ. The theorem involves some z in the complex upper half plane; recall that we write η > 0 for its imaginary part. Since ∑_k c_k^2 converges, there exists some integer L ≥ℓ+1 such thatσ^2 - (η^4ϵ^2/64) ≤∑_k=0^L-1 c_k^2 ≤σ^2.For this L, we define the approximating polynomialf_(x) ∑_k=0^L-1 c_k h_k(x) + c_L h_L(x)with the adjustment c_L (σ^2 - ∑_k=0^L-1 c_k^2)^1/2, which is made so that γ_c = γ̂_̂ĉ, where the former is defined by (<ref>) with respect to f_, and the latter is defined by (<ref>) with respect to f. Notice that f_ always satisfies Assumption <ref>.We also define the error-like functione_f,L(x) = f(x) - ∑_k=0^L c_k h_k(x).Since the Hermite polynomials are orthogonal, we havef-f_^2 = (c_L - c_L)h_L(x) + e_f,L^2 = (c_L-c_L)^2 + (σ^2 - ∑_k=0^L c_k^2) ≤ 2c_L^2 + 2c_L^2 + σ^2 - ∑_k=0^L c_k^2 = c_L^2 + 2c_L^2 + σ^2 - ∑_k=0^L-1 c_k^2.By (<ref>) we have c_L^2 ≤η^4ϵ^2/64 and c_L^2 ≤η^4ϵ^2/64; thus we havef-f^2 ≤η^4ϵ^2/16.Now let A_ be the unnormalized matrix (<ref>) but for f_, with corresponding Stieltjes transform s_A_. Similarly, let A_ be the normalized matrix (<ref>) but for f_, with corresponding Stieltjes transform s_A_. 
On the one hand, from Theorem <ref> we haves_A_(z) - m(z) ≺1/√(d),s_A_(z) - m(z) ≺1/√(d).On the other hand, from Lemma <ref> below, we have max( s_A(z) - s_A_(z), s_A(z) - s_A_(z))≤1/η^2f - f_ + (1/η^2) + _≺(1/η√(N)) ≤ϵ/4 + (1/η^2) + _≺(1/η√(N)) ≤ϵ/2 + _≺(1/η√(N)),where the last inequality holds for d large enough depending on ϵ and η. Thuss_A(z) - m(z)≤ϵ/2 + _≺(1/d^ℓ/2),meaning that for any ϵ and D we have(s_A(z) - m(z) > ϵ) ≤ C_ϵ,D d^-Dfor d large enough. By fixing D and applying the Borel-Cantelli lemma, this suffices to show the almost-sure convergence of s_A(z) to m(z). Fix two functions f, f_ : → that each satisfy Assumption <ref>, and a centered probability measure μ onwith unit variance and all finite moments. Write A for the matrix (<ref>), constructed with f, where the i.i.d. vectors (X_i)_i=1^N have i.i.d. entries drawn from μ, with corresponding Stieltjes transform s_A(z). Write A_ for the analogue with f replaced by f_, with Stieltjes transform s_A_(z). Write also A for the corresponding normalized model (<ref>), constructed with f, and A_ for the analogue constructed with f_. Then s_A(z) - s_A_(z)≤1/η^2( _Z ∼N(0,1)[(f(Z) - f_(Z))^2] )^1/2 + ( 1/η^2) + _≺(1/η√(N)),s_A(z) - s_A_(z)≤1/η^2( _Z ∼N(0,1)[(f(Z) - f_(Z))^2] )^1/2 + ( 1/η^2) + _≺(1/η√(N)),Lemma 19 of <cit.> showss_A(z) - s_A_(z)≤1/η^2( _X_1,X_2[(f(X_1,X_2/√(d)) - f_(X_1,X_2/√(d)))^2] )^1/2 + _≺(1/η√(N)).Applying Lemma <ref> below to the function g(x) = f(x) - f_(x) yields (<ref>). The proof of (<ref>) is a little more involved for technical reasons, sinceA_ij = δ_i ≠ j/√(N) f( X_i,X_j/√(d)√(d)/X_i√(d)/X_j) 1{X_i≠ 0 ≠X_j}but as written Lemma 19 of <cit.> only allows a direct comparison of matrices of the form(A_f)_ij = δ_i ≠ j/√(N) f( X_i,X_j/√(d)√(d)/X_i√(d)/X_j1{X_i≠ 0 ≠X_j})(by taking what they call 𝐱_i to be what we call X_i/X_i if well-defined, or the zero vector otherwise), for various choices of f. That is, on the exponentially unlikely occasions where X_i = 0, the corresponding row and column are set to zero in A but f(0)/√(N) in A_f. By applying Lemma 19 of <cit.> and Lemma <ref> as in the unnormalized case, we obtains_A_f(z) - s_A_f_(z)≤1/η^2( _Z ∼N(0,1)[(f(Z) - f_(Z))^2] )^1/2 + (1/η^2) + _≺(1/η√(N)).It remains to compare s_A and s_A_f, as well as s_A_f_ and s_A_, which we will do with the rank estimate (<ref>). Bounding the rank of a matrix by its number of nonzero rows, this givesmax( s_A(z) - s_A_f(z), s_A_f_(z) - s_A_(z))≤Cmax((A - A_f),(A_f_ - A_))/Nη≤C#{i : X_i = 0}/Nη≺1/Nη,where the last estimate holds since (X_i = 0) is exponentially small, so that #{i : X_i = 0}≺ 1. Absorbing this into _≺((η√(N))^-1) completes the proof. Fix g : → that satisfies Assumption <ref>. If μ has all finite moments, then_X_1,X_2[ g^2 ( X_1,X_2/√(d)) ] d →∞→_Z[g^2(Z)], _X_1,X_2[ g^2 ( X_1,X_2/√(d)√(d)/X_1√(d)/X_21{X_1≠ 0 ≠X_2}) ] d →∞→_Z[g^2(Z)].Write μ_d for the law of X_1,X_2/√(d), and μ_d for the law of X_1,X_2/√(d)√(d)/X_1√(d)/X_21{X_1≠ 0 ≠X_2} – neither of which necessarily has a density with respect to Lebesgue measure – as well as μ_G for standard Gaussian measure. By the usual central limit theorem, μ_G is the d →∞ weak limit of the measures μ_d. It is straightforward to show that d/X_11{X_1≠ 0} and d/X_21{X_2≠ 0} each converge in probability to one, so that X_1,X_2/√(d)√(d)/X_1√(d)/X_21{X_1≠ 0 ≠X_2} also converges to a Gaussian variable, meaning that μ_G is also the d →∞ weak limit of the measures μ_d.Fix any constant M ≥max(1,α_1, α_K). 
Write g_M for the function which agrees with g on [-M,M], vanishes outside [-(M+1),M+1], linearly interpolates on [M,M+1] between g(M) and 0, and linearly interpolates on [-(M+1),-M] between 0 and g(-M), and define e_M byg^2(x) = g_M^2(x) + e_M(x).The result will follow if we can showlim_d →∞∫_ g_M^2(x)(μ_d - μ_G)( x)= 0,lim_d →∞∫_ g_M^2(x)(μ_d - μ_G)( x)= 0,for any fixed M, as well aslim_M →∞∫_ e_M(x) μ_G( x) = 0,lim_M →∞lim sup_d →∞∫_ e_M(x) μ_d( x) = 0,lim_M →∞lim sup_d →∞∫_ e_M(x) μ_d( x) = 0. We start with the proof of (<ref>) and (<ref>). Notice that, since g(x) is the difference of the nonlinear function f(x) and a finite degree polynomial, and f(x) is piecewise continuous with a polynomial growth rate, g^2(x) is also piecewise continous with a polynomial growth rate, and g_M^2(x) is piecewise continuous (with “no growth rate” since it vanishes outside [-(M+1),M+1]. The possible discontinuities of g^2(x) and g_M^2(x) occur at {α_1, …, α_K}. Write h(x) = g_M^2(x). For any ϵ∈ (0, min_1 ≤ i < K{α_i+1 - α_i}/2), construct a “smoothed” version h_ϵ(x) such that h_ϵ(x) = h(x) for x ∈∖⋃_1 ≤ i ≤ K [α_i-ϵ, α_i + ϵ]; on the interval [α_i - ϵ, α_i +ϵ], for i ∈{1, 2, …, K}, the function h_ϵ(x) is a linear interpolation between h(α_i-ϵ) and h(α_i+1 + ϵ). We have∫ h(x) (μ_d - μ_G)(dx)≤∫ h_ϵ(x) (μ_d - μ_G)(dx) + ∫h(x) - h_ϵ(x)μ_d(dx) + ∫h(x) - h_ϵ(x)μ_ G(dx).Since h_ϵ(x) is a continuous function with compact support, the first term on the right-hand sides converges to 0 as d →∞ due to the CLT. To control the second and third terms on the right-hand side, we note that h(x) - h_ϵ(x)≤ C_M ∑_1 ≤ i ≤ K1_[α_i-ϵ, α_i + ϵ](x),where C_M is some finite constant that can depend on M. Since μ_G has a bounded density function,∫h(x) - h_ϵ(x)μ_G(dx) ≤ C_M ∑_1≤ i ≤ K∫1_[α_i-ϵ, α_i + ϵ](x) μ_G(dx) ≤ 2K C_M ϵ.Similarly, ∫h(x) - h_ϵ(x)μ_d(dx)≤ C_M ∑_1≤ i ≤ K∫1_[α_i-ϵ, α_i + ϵ](x) μ_d(dx)= C_M ∑_1≤ i ≤ K∫1_[α_i-ϵ, α_i + ϵ](x) μ_G(dx) + _d(1)≤ 2K C_M ϵ + _d(1),where in the second step we have applied the CLT, with the approximation error captured by _d(1). Since ϵ can be chosen to be arbitrarily small, we have established (<ref>), about the unnormalized model. The estimate for the normalized model, (<ref>), is the same, since it relies only on weak convergence of μ_d or μ_d to μ_G.Now we study the e_M terms (<ref>), (<ref>), and (<ref>). First we observe that, by construction,e_M(x)≤ Cx^C 1_x≥ Mfor some C, which can now change from line to line. Indeed, e_M(x) agrees with g^2(x) on {x≥ M+1}, where this growth rate is by assumption, and is constructed from g and a linear interpolation on {x∈ [M,M+1]}. From this, standard tail bounds give∫_ e_M(x) μ_G( x)≤ C∫_M^∞x^C μ_G( x) = (e^-M^2/4),which proves (<ref>). Next, we combine the standard estimate sup_d [(X_1,X_2/√(d))^2p] ≤ C_pwith Cauchy–Schwartz and Markov's inequality to find∫_ e_M(x) μ_d( x)≤ C ∫_x > Mx^Cμ_d( x) ≤ 2α_1^2 ( [X_1,X_2/√(d)^2C] (X_1,X_2/√(d) > M) )^1/2≤2α_1^2/M( [X_1,X_2/√(d)^2C] [(X_1,X_2/√(d))^2] )^1/2 =(1/M),which verifies (<ref>). Finally, we need to bound∫_ e_M(x) μ_d( x) ≤ C ∫_x > Mx^Cμ_d( x) = C [X_1,X_2/√(d)√(d)/X_1√(d)/X_2^C1_X_1,X_2/√(d)√(d)/X_1√(d)/X_2≥ M].We will split this estimate into two parts, namely on and off the good eventE_ = {X_1≥√(d)/2 and X_2≥√(d)/2}.On this event,[X_1,X_2/√(d)√(d)/X_1√(d)/X_2^C1_X_1,X_2/√(d)√(d)/X_1√(d)/X_2≥ M1_E_] ≤ C [ X_1,X_2/√(d)^C1_X_1,X_2/√(d)≥M/4],which is (1/M) by the same arguments as in (<ref>). 
On its complement, we apply the deterministic estimate X_1,X_2/X_1X_2≤ 1 to find[X_1,X_2/√(d)√(d)/X_1√(d)/X_2^C1_X_1,X_2/√(d)√(d)/X_1√(d)/X_2≥ M1_E_^c] ≤ d^C/2(E_^c).The estimate X - √(d)≺ 1 from (<ref>) gives (E_^c) ≤ C_Dd^-D for any fixed D > 0; if we take D > C/2, we find that this is (1) as d →∞. Thus lim sup_d →∞∫_ e_M(x) μ_d( x) = (1/M), which verifies (<ref>) and finishes the proof.